Learning Exact Patterns of Quasi-synchronization
among Spiking Neurons
from Data on Multi-unit Recordings
Laura Martignon
Max Planck Institute
for Psychological Research
Adaptive Behavior and Cognition
80802 Munich, Germany
laura@mpipf-muenchen.mpg.de
Kathryn Laskey
Dept. of Systems Engineering
and the Krasnow Institute
George Mason University
Fairfax, Va. 22030
klaskey@gmu.edu
Gustavo Deco
Siemens AG
Central Research
Otto Hahn Ring 6
81730 Munich
gustavo.deco@zfe.siemens.de
Eilon Vaadia
Dept. of Physiology
Hadassah Medical School
Hebrew University of Jerusalem
Jerusalem 91010, Israel
eilon@hbf.huji.ac.il
Abstract
This paper develops arguments for a family of temporal log-linear models
to represent spatio-temporal correlations among the spiking events in a
group of neurons. The models can represent not just pairwise correlations
but also correlations of higher order. Methods are discussed for inferring
the existence or absence of correlations and estimating their strength.
A frequentist and a Bayesian approach to correlation detection are
compared. The frequentist method is based on the $G^2$ statistic, with estimates
obtained via the Max-Ent principle. In the Bayesian approach a Markov
Chain Monte Carlo Model Composition (MC3) algorithm is applied to
search over connectivity structures and Laplace's method is used to
approximate their posterior probability. Performance of the methods was
tested on synthetic data. The methods were applied to experimental data
obtained by the fourth author by means of measurements carried out on
behaving Rhesus monkeys at the Hadassah Medical School of the Hebrew
University. As conjectured, neural connectivity structures need be neither
hierarchical nor decomposable.
1 INTRODUCTION
Hebb conjectured that information processing in the brain is achieved through the
collective action of groups of neurons, which he called cell assemblies (Hebb, 1949). His
followers were left with a twofold challenge:
- to define cell assemblies in an unambiguous way.
- to conceive and carry out the experiments that demonstrate their existence.
Cell assemblies have been defined in various sometimes conflicting ways, both in terms of
anatomy and of shared function. One persistent approach characterizes the cell assembly
by near-simultaneity or some other specific timing relation in the firing of the involved
neurons. If two neurons converge on a third one, their synaptic influence is much larger
for near-coincident firing, due to the spatio-temporal summation in the dendrite
(Abeles, 1991; Abeles et al. 1993). Thus syn-firing is directly available to the brain as a
potential code.
The second challenge has led physiologists to develop methods to observe the
simultaneous activity of individual neurons to seek evidence for spatio-temporal patterns.
It is now possible to obtain multi-unit recordings of up to 100 neurons in awake behaving
animals. In the data we analyze, the spiking events (in the 1 msec range) are encoded as
sequences of 0's and 1's, and the activity of the whole group is described as a sequence of
binary configurations. This paper presents a statistical model in which the parameters
represent spatio-temporal firing patterns. We discuss methods for estimating these
parameters and drawing inferences about which interactions are present.
2 PARAMETERS FOR SPATIO-TEMPORAL FIRING PATTERNS
The term spatial correlation has been used to denote synchronous firing of a group of
neurons, while the term temporal correlation has been used to indicate chains of firing
events at specific temporal intervals. Terms like "couple" or "triplet" have been used to
denote spatio-temporal patterns of two or three neurons (Abeles et al., 1993; Grün, 1996)
firing simultaneously or in sequence. Establishing the presence of such patterns is not
straightforward. For example, three neurons may fire together more often than expected
by chance¹ without exhibiting an authentic third order interaction. This phenomenon may
be due, for instance, to synchronous firing of two couples out of the three neurons.
Authentic triplets, and, in general, authentic n-th order correlations, must therefore be
distinguished from correlations that can be explained in terms of lower order interactions.

¹ That is to say, more often than predicted by the null hypothesis of independence.
In what follows, we present a parameterized model that represents a spatio-temporal
correlation by a parameter that depends on the involved neurons and on a set of time
intervals, where synchronization is characterized by all time intervals being zero.
Assume that the sequence of configurations $\underline{x}_t = (x_{(1,t)}, \ldots, x_{(N,t)})$ of $N$ neurons forms a Markov chain of order $r$. Let $\delta$ be the time step, and denote the conditional distribution for $\underline{x}_t$ given previous configurations by $p(\underline{x}_t \mid \underline{x}_{t-\delta}, \underline{x}_{t-2\delta}, \ldots, \underline{x}_{t-r\delta})$. We assume that all transition probabilities are strictly positive and expand the logarithm of the conditional distribution as:
$$p(\underline{x}_t \mid \underline{x}_{t-\delta}, \underline{x}_{t-2\delta}, \ldots, \underline{x}_{t-r\delta}) = \exp\Big(\theta_0 + \sum_{A \in \Xi} \theta_A X_A\Big) \qquad (1)$$

where each $A$ is a subset of pairs of subscripts of the form $(i, t - s\delta)$ that includes at least one pair of the form $(i, t)$. Here $X_A = \prod_{1 \le j \le k} x_{(i_j, t - m_j\delta)}$ denotes the event that all neurons in $A$ are active. The set $\Xi \subset 2^A$ of all subsets for which $\theta_A$ is non-zero is called the interaction structure for the distribution $p$. The effect $\theta_A$ is called the interaction strength for the interaction on subset $A$. Clearly, $\theta_A = 0$ is equivalent to $A \notin \Xi$ and is taken to indicate absence of an order-$|A|$ interaction among neurons in $A$. We denote the structure-specific vector of non-zero interaction strengths by $\theta_\Xi$.
Consider a set $\Lambda$ of $N$ binary neurons and denote by $p$ the probability distribution on the binary configurations of $\Lambda$.

DEFINITION 1: We say that neurons $(i_1, i_2, \ldots, i_k)$ exhibit a spatio-temporal pattern if there is a set of time intervals $m_1\delta, m_2\delta, \ldots, m_k\delta$ with at least one $m_i = 0$, such that $\theta_A \neq 0$ in Equation (1), where $A = \{(i_1, t - m_1\delta), \ldots, (i_k, t - m_k\delta)\}$.

DEFINITION 2: A subset $(i_1, i_2, \ldots, i_k)$ of neurons exhibits a synchronization or spatial correlation if $\theta_A \neq 0$ for $A = \{(i_1, 0), \ldots, (i_k, 0)\}$.

In the case of absence of any temporal dependencies the configurations are independent and we drop the time index:

$$p(\underline{x}) = \exp\Big(\theta_0 + \sum_A \theta_A X_A\Big) \qquad (2)$$

where $A$ is any nonempty subset of $\Lambda$ and $X_A = \prod_{i \in A} x_i$.
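To make the role of the parameters concrete, the following sketch (our own Python illustration, not part of the paper; the neuron count, the interaction structure and the θ values are made-up examples) evaluates the log-linear model of Equation (2) over all binary configurations and normalizes it.

```python
import itertools
import math

# Hypothetical example: 3 neurons, single-neuron terms plus one pair and one triplet.
N = 3
theta = {(0,): 0.5, (1,): -0.2, (0, 1): 1.0, (0, 1, 2): 0.8}   # assumed strengths

def x_A(config, A):
    """X_A = 1 iff all neurons in the subset A are active in this configuration."""
    return int(all(config[i] == 1 for i in A))

def unnormalized_log_p(config):
    return sum(th * x_A(config, A) for A, th in theta.items())

configs = list(itertools.product([0, 1], repeat=N))
log_z = math.log(sum(math.exp(unnormalized_log_p(c)) for c in configs))
p = {c: math.exp(unnormalized_log_p(c) - log_z) for c in configs}
# theta_0 in Equation (2) plays the role of -log_z, so that p sums to one.
```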
Of course (2) is unrealistic. Temporal correlation of some kind is always present, one such example being the refractory period after firing. Nevertheless, (2) may be adequate in cases of weak temporal correlation. Although the models (1) and (2) are statistical, not physiological, it is an established conjecture that a synaptic connection between two neurons will manifest as a non-zero $\theta_A$ for the corresponding set $A$ in the temporal model (1). Another example leading to non-zero $\theta_A$ will be simultaneous activation of the neurons in $A$ due to a common input, as illustrated in Figure 1 below. Such a $\theta_A$ will appear in model (1) with time intervals equal to 0. An attractive feature of our models is that they are capable of distinguishing between cases a. and b. of Figure 1. This can be seen by extending the model (2) to include the external neurons (H in case a., H and K in case b.) and then marginalizing. An information-theoretic argument supports the choice of $\theta_A \neq 0$ as a natural indicator of an order-$|A|$ interaction among the neurons in $A$. Assume that we are in the case of no temporal correlation.
[Figure 1: case a., a common input from an external neuron H to the neurons in A; case b., common inputs from external neurons H and K.]
The absence of an interaction of order $|A|$ among neurons in $A$ should be taken to mean that the distribution is determined by the marginal distributions on proper subsets of $A$. A well established criterion for selecting a distribution among those matching the lower order marginals fixed by proper subsets of $A$ is Max-Ent. According to the Max-Ent principle the distribution that maximizes entropy is the one which is maximally non-committal with regard to missing information. The probability distribution $p^*$ that maximizes entropy among distributions with the same marginals as the distribution $p$ on proper subsets of $A$ has a log-linear expansion in which only $\theta_B$, $B \subset A$, $B \neq A$ can possibly be non-zero.²

² This was observed by J. Good in 1963 (Bishop et al., 1975). It is interesting to note that $p^*$ minimizes the Kullback-Leibler distance from $p$ in the manifold of distributions with a log-linear expansion in which only $\theta_B$, $B \subset A$, $B \neq A$ can possibly be non-zero.
3 THE FREQUENTIST APPROACH
We treat here the case of no temporal dependencies. The general case is treated in Martignon-Deco, 1997, and Deco-Martignon, 1997. We also assume that our data are stationary. We test the presence of synchronization of neurons in $A$ by the following procedure: we condition on silence of neurons in the complement of $A$ in $\Lambda$ and call the resulting frequency distribution $p$. We construct the Max-Ent model determined by the marginals of $p$ on proper subsets of $A$. The well-known method for constructing this type of Max-Ent model is the I.P.F.P. algorithm (Bishop et al., 1975). We propose here another, simpler and quicker procedure:

If $B$ is a subset of $A$, denote by $x_B$ the configuration that has a component 1 for every index in $B$ and 0 elsewhere. Define $p^*(x_B) = p(x_B) + (-1)^{|B|}\Delta$, where $\Delta$ is to be determined by solving for $\theta^*_A = 0$, where $\theta^*_A$ is the coefficient corresponding to $A$ in the log-expansion of $p^*$. As can be shown (Martignon et al., 1995), $\theta^*_A$ can be written as
$$\theta^*_A = \sum_{B \subset A} (-1)^{|A-B|} \ln p^*(x_B).$$

The distribution $p^*$ maximizes entropy among those with the same marginals of $p$ on proper subsets of $A$.³ We use $p^*$ as an estimate of $p$ for tests by means of the $G^2$ statistic (Bishop et al., 1975).

³ This is due to the fact that there is a unique distribution with the same marginals as $p$ on proper subsets of $A$ such that the coefficient corresponding to $A$ in its log-expansion is zero.
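The construction can be carried out directly; the sketch below (our own Python illustration, not code from the paper; the conditioned frequencies for the pair A and the bracket used for the root search are made-up examples) perturbs $p$ by $(-1)^{|B|}\Delta$ and solves $\theta^*_A = 0$ for $\Delta$ by bisection.

```python
import math
from itertools import chain, combinations

def subsets(A):
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

def theta_star_A(p, A, delta):
    """theta*_A = sum over B subset of A of (-1)^{|A-B|} ln p*(x_B),
    with p*(x_B) = p(x_B) + (-1)^{|B|} * delta."""
    total = 0.0
    for B in subsets(A):
        p_star = p[frozenset(B)] + (-1) ** len(B) * delta
        total += (-1) ** (len(A) - len(B)) * math.log(p_star)
    return total

def solve_delta(p, A, lo=-0.05, hi=0.05, iters=60):
    """Bisection for the delta that makes the order-|A| coefficient of p* vanish."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if theta_star_A(p, A, lo) * theta_star_A(p, A, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical conditioned frequencies for a pair A = (1, 2):
A = (1, 2)
p = {frozenset(): 0.55, frozenset({1}): 0.20,
     frozenset({2}): 0.15, frozenset({1, 2}): 0.10}
delta = solve_delta(p, A)
theta_check = theta_star_A(p, A, delta)   # close to zero by construction
```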
4 THE BAYESIAN APPROACH
We treat here the case of no temporal dependencies. The general case is treated in Laskey-Martignon, 1997. Information about $p(x)$ prior to observing any data is represented by a joint probability distribution called the prior distribution over $\Xi$ and the $\theta$'s.
Observations are used to update this probability distribution to obtain a posterior
distribution over structures and parameters. The posterior probability of a cluster A can
be interpreted as the probability that the r nodes in cluster A exhibit a degree-r
interaction.
The posterior distribution for $\theta_A$ represents structure-specific information
about the magnitude of the interaction. The mean or mode of the posterior distribution
can be used as a point estimate of the interaction strength; the standard deviation of the
posterior distribution reflects remaining uncertainty about the interaction strength.
We exhibit a family of log-linear models capable of capturing interactions of all
orders. An algorithm is presented for learning both structure and parameters in a unified
Bayesian framework. Each model structure specifies a set of clusters of nodes, and
structure-specific parameters represent the directions and strengths of interactions among
them. The Bayesian learning algorithm gives high posterior probability to models that
are consistent with the data. Results include a probability, given the observations, that a
set of neurons fires simultaneously, and a posterior probability distribution for the
strength of the interaction, conditional on its occurrence.
The prior distribution we used has two components. The first component assigns a prior
probability to each structure. In our model, interactions are independent of each other and
each interaction has a probability of .1. This reflects the prior expectation that not many
interactions are expected to be present. The second component of the prior distribution is
the conditional distribution of interaction strengths given the structure. If an interaction is
not in the structure, the corresponding strength parameter $\theta_A$ is taken to be identically zero given structure $\Xi$. All interactions belonging to $\Xi$ are taken to be independent and normally distributed with mean zero and standard deviation 2. This reflects the prior expectation that interaction strength magnitudes are rarely larger than 4 in absolute value.
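For concreteness, a small sketch of this prior (our own illustration; the candidate clusters and the strength value are made-up examples) evaluates the log prior probability of a structure Ξ and of the strengths θ given Ξ.

```python
import math

P_INTERACTION = 0.1   # prior probability that a given cluster is in the structure
SIGMA_THETA = 2.0     # prior standard deviation of an interaction strength

def log_prior_structure(structure, candidates):
    """Each candidate cluster is included independently with probability 0.1."""
    return sum(math.log(P_INTERACTION if A in structure else 1.0 - P_INTERACTION)
               for A in candidates)

def log_prior_strengths(theta, structure):
    """theta_A ~ N(0, 2^2) independently for A in the structure, zero otherwise."""
    lp = 0.0
    for A in structure:
        lp += (-0.5 * math.log(2 * math.pi * SIGMA_THETA ** 2)
               - theta[A] ** 2 / (2 * SIGMA_THETA ** 2))
    return lp

# Hypothetical example with three candidate clusters:
candidates = [frozenset({6, 8}), frozenset({2, 3, 6}), frozenset({4, 5, 6, 7})]
structure = {frozenset({6, 8})}
theta = {frozenset({6, 8}): 0.5}
log_prior = log_prior_structure(structure, candidates) + log_prior_strengths(theta, structure)
```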
Computing the posterior probability of a structure $\Xi$ requires integrating the interaction strengths $\theta_A$ out of the joint mass-density function of the structure $\Xi$, the strengths $\theta_A$, and the
data X. The solution to this integral cannot be obtained in closed form. We use
Laplace's method (Kass-Raftery, 1995; Tierney-Kadane,1986) to estimate the posterior
probability of structures.
The posterior distribution of $\theta_A$ given frequency data also cannot be obtained in closed form.
We use the mode of the posterior distribution as a point estimate of $\theta_A$. The standard deviation of $\theta_A$, which indicates how precisely $\theta_A$ can be estimated from the given data, is estimated using a normal approximation to the posterior distribution (Laskey-Martignon, 1997). The covariance matrix of the $\theta_A$ is estimated as the inverse Fisher information matrix evaluated at the mode of the posterior distribution. The posterior probability of an interaction $\theta_A$ is the sum over the posterior probabilities of all structures containing $A$. We used a Markov chain Monte Carlo Model Composition algorithm (MC³) to search over structures. This stochastic algorithm converges to a stationary distribution in which structure $\Xi$ is visited with probability equal to its posterior probability. We ran the MC³ algorithm for 15,000 runs and estimated the posterior probability of a structure as its frequency of occurrence over the 15,000 runs. We estimated interaction strength parameters and standard deviations using only the 100 highest-probability structures. Although the number of possible structures is astronomical, typically most of the posterior probability is contained in relatively few structures. We found this to be the case, which justifies using only the most probable structures to estimate interaction strength parameters.
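The MC³ search can be sketched as a Metropolis walk over structures whose visit frequencies estimate posterior probabilities. In the fragment below (our own schematic, not code from the paper) the function passed as `log_marginal_likelihood` stands in for the Laplace approximation and is an assumption; only the toggle-one-cluster proposal, the 15,000-step run length and the frequency estimate follow the description above.

```python
import math
import random
from collections import Counter

def log_prior(structure, candidates, p=0.1):
    return sum(math.log(p if A in structure else 1.0 - p) for A in candidates)

def mc3_search(candidates, log_marginal_likelihood, n_steps=15000):
    """Markov chain Monte Carlo Model Composition over interaction structures."""
    structure = frozenset()
    log_post = log_marginal_likelihood(structure) + log_prior(structure, candidates)
    visits = Counter()
    for _ in range(n_steps):
        A = random.choice(candidates)
        proposal = structure - {A} if A in structure else structure | {A}
        log_post_new = log_marginal_likelihood(proposal) + log_prior(proposal, candidates)
        if math.log(random.random()) < log_post_new - log_post:   # Metropolis step
            structure, log_post = proposal, log_post_new
        visits[structure] += 1
    # The posterior probability of a structure is estimated by its visit frequency.
    return {s: c / n_steps for s, c in visits.items()}

# Hypothetical usage with a dummy marginal likelihood standing in for Laplace's method:
candidates = [frozenset({6, 8}), frozenset({2, 3, 6})]
posterior = mc3_search(candidates, lambda s: 2.0 * len(s), n_steps=1000)
```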
5 RESULTS
We applied our models to data from an experiment in which spiking events among groups
of neurons were analyzed through multi-unit recordings of 6-16 units in the frontal cortex
of Rhesus monkeys. The monkeys were trained to localize a source of light and, after a
delay, to touch the target from which the light blink was presented. At the beginning of
each trial the monkeys touched a "ready-key", then the central ready light was turned on.
Later, a visual cue was given in the form of a 200-ms light blink coming from either the
left or the right. Then, after a delay of 1 to 32 seconds, the color of the ready light
changed from red to orange and the monkeys had to release the ready key and touch the
target from which the cue was given. The spiking events (in the 1 millisecond range) of
each neuron were encoded as a sequence of zeros and ones, and the activity of the group
was described as a sequence of configurations of these binary states. The fourth author
provided data corresponding to piecewise stationary segments of the trials, which presented
weak temporal correlation, corresponding to intervals of 2000 milliseconds around the
ready-signal. He adjoined these 94 segments and formed a data-set of 188,000 msec. The
data were then binned in time windows of 40 milliseconds. The criterion we used to fix
the binwidth was robustness with regard to variations of the offsets. We selected a subset
of eight of the neurons for which data were recorded. We analyzed recordings prior to the
ready-signal separately from data recorded after the ready-signal. Each of these data sets is
assumed to consist of independent trials from a model of the form (2).
| Cluster A | Posterior prob. of A (frequency) | Posterior prob. of A (best 100 models) | MAP estimate of θ_A | Standard deviation of θ_A | Significance |
| 6,8       | .9             | .89            | 0.47 | 0.11 | 4.0853 |
| 4,5,6,7   | .30            | 0.32           | 2.30 | 0.64 | No     |
| 2,3,6     | .40            | 0.38           | 2.30 | 0.64 | 2.35   |
| 1,3,4     | close to prior | close to prior |      |      | 4.7    |

Table 1: Results for pre-ready-signal data. Effects with posterior prob. > 0.1.
| Cluster A | Posterior prob. of A (frequency) | Posterior prob. of A (best 100 models) | MAP estimate of θ_A | Standard deviation of θ_A | Significance |
| 5,6       | .79  | 0.96 | 1.00 | 0.27 | 1.82 |
| 4,7       | .246 | 0.18 | 0.93 | 0.34 | 2.68 |
| 1,4,5,6   | 0.18 | 0.13 | 1.06 | 0.36 | No   |
| 1,3,4,6,7 | 0.24 | 0.17 | 2.69 | 0.13 | No   |

Table 2: Results for post-ready-signal data. Effects with posterior prob. > 0.1.
Another set of data from 5 simulated neurons was provided by the fourth author for a
double-check of the methods. Only second order correlations had been simulated: a
synapse lasting 2 msec, an inhibitory common input, and two excitatory common inputs.
The Bayesian method was very accurate, detecting exactly the simulated interactions.
The frequentist method made one mistake. Other data sets with temporal correlations
have also been analyzed. By means of the frequentist approach on shifted data, temporal
triplets have been detected and even fourth order correlations. Temporal correlograms are
computed for shifts of up to 50 msec (Martignon-Deco, 1997).
References
Hebb, D. (1949) The Organization of Behavior. New York: Wiley, 1949.
Abeles, M. (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge: Cambridge University Press, 1991.
Abeles, M., H. Bergman, E. Margalit, and E. Vaadia (1993) "Spatiotemporal Firing Patterns in the Frontal Cortex of Behaving Monkeys." Journal of Neurophysiology 70, 4: 1629-1638.
Grün, S. (1996) Unitary Joint-Events in Multiple-Neuron Spiking Activity: Detection, Significance and Interpretation. Verlag Harry Deutsch, Frankfurt.
Martignon, L. and Deco, G. (1997) "Neurostatistics of Spatio-Temporal Patterns of Neural Activation: the Frequentist Approach." Technical Report, MPI-ABC no. 3.
Deco, G. and Martignon, L. (1997) "Higher-order Phenomena among Spiking Events of Groups of Neurons." Preprint.
Bishop, Y., S. Fienberg, and P. Holland (1975) Discrete Multivariate Analysis. Cambridge, MA: MIT Press.
Martignon, L., v. Hasseln, H., Grün, S., Aertsen, A., and Palm, G. (1995) "Detecting Higher Order Interactions among the Spiking Events of a Group of Neurons." Biol. Cyb. 73, 69-81.
Kass, R. and Raftery, A. (1995) "Bayes Factors." Journal of the American Statistical Association 90, no. 430, 773-795.
Tierney, L., and J. B. Kadane (1986) "Accurate Approximations for Posterior Moments and Marginal Densities." Journal of the American Statistical Association 81, 82-86.
Laskey, K. and Martignon, L. (1997) "Neurostatistics of Spatio-temporal Patterns of Neural Activation: the Bayesian Approach." In preparation.
Laskey, K. and Martignon, L. (1996) "Bayesian Learning of Log-linear Models for Neural Connectivity." Proceedings of the XII Conference on Uncertainty in Artificial Intelligence, Horvitz, E., ed., Morgan Kaufmann, San Mateo.
Representation and Induction of Finite
State Machines using Time-Delay Neural
Networks
Daniel S. Clouse
Computer Science & Engineering Dept.
University of California, San Diego
La Jolla, CA 92093-0114
dclouse@ucsd.edu
Bill G. Horne
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
horne@research.nj.nec.com
C. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
giles@research.nj.nec.com
Garrison W. Cottrell
Computer Science & Engineering Dept.
University of California, San Diego
La Jolla, CA 92093-0114
gcottrell@ucsd.edu
Abstract
This work investigates the representational and inductive capabilities of time-delay neural networks (TDNNs) in general, and of two
subclasses of TDNN, those with delays only on the inputs (IDNN),
and those which include delays on hidden units (HDNN) . Both architectures are capable of representing the same class of languages,
the definite memory machine (DMM) languages, but the delays on
the hidden units in the HDNN helps it outperform the IDNN on
problems composed of repeated features over short time windows .
1 Introduction
In this paper we consider the representational and inductive capabilities of time-delay neural networks (TDNN) [Waibel et al., 1989] [Lang et al., 1990], also known
as NNFIR [Wan, 1993]. A TDNN is a feed-forward network in which the set of
inputs to any node i may include the output from previous layers not only in the
current time step t, but from d earlier time steps as well. The activation function
for node i at time t in such a network is given by equation 1:
$$y_i^t = h\Big(\sum_{j=1}^{i-1} \sum_{k=0}^{d} y_j^{t-k} w_{ijk}\Big) \qquad (1)$$

where $y_i^t$ is the activation of node $i$ at time $t$, $w_{ijk}$ is the connection strength from node $j$ to node $i$ at delay $k$, and $h$ is the squashing function.
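As a concrete reading of Equation 1, the sketch below (our own Python illustration, not from the paper; the layer sizes, the delay d, the use of tanh as the squashing function h, and the zero-padding before the start of the sequence are assumptions) computes the activations of one TDNN layer over an input sequence.

```python
import numpy as np

def tdnn_layer(y_prev, W, d, h=np.tanh):
    """One TDNN layer in the spirit of Equation 1.
    y_prev: array (T, n_in) of previous-layer activations over time.
    W:      array (n_out, n_in, d + 1) of weights w_ijk for delays k = 0..d.
    Returns an array (T, n_out); activity before t = 0 is taken to be zero."""
    T, _ = y_prev.shape
    out = np.zeros((T, W.shape[0]))
    for t in range(T):
        s = np.zeros(W.shape[0])
        for k in range(d + 1):
            if t - k >= 0:
                s += W[:, :, k] @ y_prev[t - k]
        out[t] = h(s)
    return out

# Hypothetical usage: a binary string of length 11, one input unit, 4 hidden units, d = 3.
rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=(11, 1)).astype(float)
hidden = tdnn_layer(u, W=rng.normal(size=(4, 1, 4)), d=3)
```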
TDNNs have been used in speech recognition [Waibel et al., 1989], and time series
prediction [Wan, 1993]. In this paper we concentrate on the language induction
problem. A training set of variable-length strings taken from a discrete alphabet
{0, 1} is generated. Each string is labeled as to whether it is in some language L
or not. The network must learn to discriminate strings which are in the language
from those which are not, not only for the training set strings, but for strings the
network has never seen before. The language induction problem provides a simple,
familiar domain in which to gain insight into the capabilities of different network
architectures.
Specifically, in this paper, we will look at the representational and inductive capabilities of the general class of TDNNs versus a subclass of TDNNs, the input-delay
neural networks (IDNNs). An IDNN is a TDNN in which delays are limited to
the network inputs. In section 2, we will show that the classes of functions representable by general TDNNs and IDNNs are equivalent. In section 3, we will show
that the class of languages representable by the TDNNs, are the definite memory
machine (DMM) languages. In section 4, we will demonstrate the inductive capability of the TDNNs in a simulation in which a large DMM is learned using a
small percentage of the possible, short training examples. In section 5, a second set
of simulations will show the difference between representational and inductive bias,
and will demonstrate the utility of internal delays in a TDNN network.
2 TDNNs and IDNNs Are Functionally Equivalent
Since every IDNN is also a TDNN, the set of functions computable by any TDNN
includes all those computable by the IDNNs. [Wan, 1993] also shows that the IDNNs
can compute any function computable by the TDNNs making these two classes of
network architectures functionally equivalent. For completeness, here we include a
description of how to construct from a TDNN, an equivalent IDNN.
Figure 1a shows a TDNN with a single input u at the current time ($u_t$), and at four earlier time steps ($u_{t-1} \ldots u_{t-4}$). The inputs to node R consist of the outputs of nodes P and Q at the current time step along with one or two previous time steps. At time $t$, node P computes $f_P(u_t, \ldots, u_{t-4})$, a function of the current input and four delays. At time $t-1$, node P computes $f_P(u_{t-1}, \ldots, u_{t-5})$. This serves as
one of the delayed inputs to node R. This value could also be computed by sliding
node P over one step in the input tap-delay line along with its incoming weights as
shown in figure lb. Using this construction, all the internal delays can be removed,
and replaced by copies of the original nodes P and Q, along with their incoming
weights. This method can be applied recursively to remove any internal delay in any
TDNN network. Thus, for any function computable by a TDNN, we can construct
an IDNN which computes the same function.
3 TDNNs Can Represent the DMM Languages
In this section , we show that the set of languages which are representable by some
TDNN are exactly those languages representable by the definite memory machines
[Figure 1: Constructing an IDNN equivalent to a given TDNN. a) General TDNN with node P computing $f_P(u_t, \ldots, u_{t-4})$; b) equivalent IDNN in which copies of P compute $f_P(u_{t-1}, \ldots, u_{t-5})$ and $f_P(u_{t-2}, \ldots, u_{t-6})$ over the input tap-delay line $u_t \ldots u_{t-6}$.]
(DMMs). According to Kohavi (1978) a DMM of order d is a finite state machine
(FSM) whose present state can always be determined uniquely from the knowledge
of the most recent d inputs. We equivalently define a DMM of order d as an FSM
whose accepting/rejecting behavior is a function of only the most recent d inputs.
To fit TDNNs and IDNNs into the language induction framework, we consider only
networks with a single 0/1 input. Since any boolean function can be represented
by a feed-forward network with enough hidden units [Horne and Hush, 1994], an
IDNN exists which can perform the mapping from d most recent inputs to any
accepting/rejecting behavior. Therefore, any DMM language can be represented
by some IDNN. Since every IDNN computes a function of its most recent d inputs,
by the definition of DMM, there is no boolean output IDNN which represents a
non-DMM language. Therefore, the IDNNs represent exactly the DMM languages.
Since the TDNN and IDNN classes are functionally equivalent, TDNNs implement
exactly the DMM languages as well.
The shift register behavior of the input tap-delay line in an IDNN completely determines the state transition behavior of any machine represented by the network.
This state transition behavior is fixed by the architecture. For example, figure 2a
shows the state transition diagram for any machine representable by an IDNN with
two input delays. The mapping from the current state to "accept" or "reject" is all
that can be changed with training. Clouse et al. (1994) describes the conditions
under which such a mapping results in a minimal FSM. All mappings used in the
subsequent simulations are minimal FSM mappings.
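To make the shift-register view concrete, the following sketch (our own Python fragment; the particular acceptance function is a made-up example, not one of the languages studied below) steps a DMM of order d by sliding a window of the d most recent inputs and applying a boolean acceptance map.

```python
from collections import deque

def run_dmm(inputs, d, accept):
    """Definite memory machine of order d: the accept/reject decision after each
    symbol depends only on the d most recent inputs (earlier positions padded with 0)."""
    window = deque([0] * d, maxlen=d)   # the state is exactly the last d inputs
    decisions = []
    for u in inputs:
        window.append(u)
        decisions.append(accept(tuple(window)))
    return decisions

# Hypothetical order-3 machine: accept iff the last and the third-last inputs agree.
print(run_dmm([1, 0, 1, 1, 0], d=3, accept=lambda w: w[-1] == w[-3]))
```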
4 Simulation 1: Large DMM
To demonstrate the close relationship between TDNNs and DMMs, here we present
the results of a simulation in which we trained an IDNN to reproduce the behavior
of a DMM of order 11. The mapping function for the DMM is given in equation 2.
Figure 2b shows the minimal 2048-state transition diagram required to represent the DMM. The symbol $\leftrightarrow$ in Equation 2 represents the if-and-only-if function. The overbar notation, $\bar{u}_k$, represents the negation of $u_k$, the input at time $k$. $y_k$ is the network output at time $k$; $y_k > 0.5$ is interpreted as "accept the string seen so far," and $y_k \le 0.5$ means "reject."

$$y_k = u_{k-10} \leftrightarrow (u_k u_{k-1} u_{k-2} + u_{k-2} u_{k-3} + u_{k-1} u_{k-2}) \qquad (2)$$
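As a direct reading of Equation 2, the sketch below (our own Python illustration; it takes the printed formula literally, without any negation bars, so the exact labeling function is an assumption made for illustration only) pads a 0/1 string with leading zeros and evaluates y_k for its final position.

```python
def label_eq2(string):
    """y_k at the last position of a 0/1 string, following Equation 2 literally
    (negation bars omitted; an assumed reading, for illustration only)."""
    u = [0] * 10 + list(string)          # pad so that u[k-10] .. u[k] all exist
    k = len(u) - 1
    clause = (u[k] & u[k-1] & u[k-2]) | (u[k-2] & u[k-3]) | (u[k-1] & u[k-2])
    return int(u[k-10] == clause)        # the if-and-only-if of Equation 2

examples = [[1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1], [0] * 11]
labels = [label_eq2(s) for s in examples]
```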
[Figure 2: Transition diagrams for two DMMs: a) DMM of order 3, b) DMM of order 11.]
[Figure 3: Generalization error on the 2048-state DMM versus percent of total samples (4094) used in training.]

To create training and test sets, we randomly split in two the set of all 4094
strings of length 11 or less. We will report results using various percentages of
possible strings for the training set. The IDNN had 10 input tap-delays, and seven
hidden units. All tap-delays were cleared to 0 before introduction of a new input
string. Weights were trained using online back propagation with learning rate 0.25,
and momentum 0.25. To speed up the algorithm, weights were updated only if the
absolute error on an example was greater than 0.2. Training was stopped when
weight updates were required for no examples in the training set. This generally
required 200 epochs or fewer, though there were trials which required almost 4000
epochs.
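The thresholded update rule can be sketched as follows (our own Python illustration; a single sigmoid unit with a bias input stands in for the full IDNN and the target is a made-up example; only the skip-updates-below-0.2 criterion, the learning rate of 0.25 and the stop-when-no-updates rule follow the text, and the momentum term is omitted).

```python
import numpy as np

def train_epoch(X, y, w, lr=0.25, threshold=0.2):
    """Online delta-rule updates, skipped whenever |error| <= threshold."""
    updated = 0
    for x_t, y_t in zip(X, y):
        out = 1.0 / (1.0 + np.exp(-w @ x_t))
        err = y_t - out
        if abs(err) > threshold:
            w = w + lr * err * out * (1.0 - out) * x_t
            updated += 1
    return w, updated

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(32, 11)).astype(float)
X = np.hstack([X, np.ones((len(X), 1))])    # constant bias input
y = X[:, 0]                                 # hypothetical target, just for the demo
w = np.zeros(X.shape[1])
for epoch in range(200):
    w, n_updates = train_epoch(X, y, w)
    if n_updates == 0:                      # no example needed an update: stop
        break
```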
Each point in figure 3 represents the mean classification error on the test set across
20 trials. Error bars indicate one standard deviation on each side of the mean.
Each trial consists of a different randomly-chosen training set. The graph plots
error at various training set sizes. Note that with training sets as small as 12
percent of possible strings the network generalizes perfectly to the remaining 88
percent. This kind of performance is possible because of the close match between
the representational bias of the IDNN and this specific problem.
5 Simulation 2: Inductive biases of IDNNs and HDNNs
In section 2, we showed that the IDNNs and general TDNNs can represent the same
class of functions. It does not follow that these two architectures are equally capable
of learning the same functions. In this section, we show that the inductive biases are
indeed different. We will present our intuitions about the kinds of problems each
architecture is well suited to learning, then back up our intuitions with supporting
simulations.
In the following simulations, we compare two specific networks. The network representing the general TDNNs includes delays on hidden layer outputs. We'll refer to
this as the hidden delay neural network or HDNN. All delays in the second network
are confined to the network inputs, and so we call this the IDNN.
We have been careful to design the two networks to be comparable in size. Each
of the networks contains two hidden layers. The first hidden layer of the IDNN has
four units, and the second five. The IDNN has eight input delays. Each of the two
hidden layers of the HDNN has three units. The HDNN has three input delays, and
five delays on the output of each node of the first hidden layer. Note that in each
network the longest path from input to output requires eight delays. The number
of weights, including bias weights, are also similar - 76 for the HDNN, and 79 for
the IDNN.
In order for the size of the two networks to be similar, the HDNN must have fewer
delays on the network inputs. If we think of each unit in the first hidden layer
as a feature detector, the feature detectors in the HDNN will span a smaller time
window than the IDNN. On the other hand, the HDNN has a second set of delays
which saves the output of the feature detectors over several time steps. If some
narrow feature repeats over time, this second set of delays should help the HDNN
to pick up this regularity. The IDNN, lacking the internal delays, should find it
more difficult to detect this kind of repeated regularity.
To test these ideas, we generated four DMM problems. We call equation 3 the
narrow-repeated problem because it contains a number of identical terms shifted in
time, and because each of these terms is narrow enough to fit in the time window
of the HDNN first layer feature detectors.
$$y_k = u_{k-8} \leftrightarrow (u_k u_{k-2} u_{k-3} + u_{k-1} u_{k-3} u_{k-4} + u_{k-3} u_{k-5} u_{k-6} + u_{k-4} u_{k-6} u_{k-7}) \qquad (3)$$
The wide-repeated problem, represented by equation 4, is identical to the narrow-repeated problem except that each term has been stretched so that it will no longer
fit in the HDNN feature detector time window.
$$y_k = u_{k-8} \leftrightarrow (u_k u_{k-2} u_{k-4} + u_{k-1} u_{k-3} u_{k-5} + u_{k-2} u_{k-4} u_{k-6} + u_{k-3} u_{k-5} u_{k-7}) \qquad (4)$$
The narrow-unrepeated problem, represented by equation 5, is composed of narrow
terms, but none of these terms is simply a shifted reproduction of another.
$$y_k = u_{k-8} \leftrightarrow (u_k u_{k-2} u_{k-3} + u_{k-1} u_{k-3} u_{k-4} + u_{k-3} u_{k-5} u_{k-6} + u_{k-4} u_{k-6} u_{k-7}) \qquad (5)$$
Lastly, the wide-unrepeated problem of equation 6 contains wide terms which do not
repeat.
$$y_k = u_{k-8} \leftrightarrow (u_k u_{k-3} u_{k-4} + u_{k-1} u_{k-4} u_{k-5} + u_{k-2} u_{k-5} u_{k-6} + u_{k-3} u_{k-6} u_{k-7}) \qquad (6)$$
Each problem in this section requires a minimum of 512 states to represent.
Similar to the simulation of section 3, we trained both networks on subsets of all
possible strings of length 9 or less. Since these problems were more difficult than
that of section 3, often the networks were unable to find a solution which performed
perfectly on the training set. In this case, training was stopped after 8000 epochs.
The results reported later include these trials as well as trials in which training
ended because of perfect performance on the training set.
[Figure 4: Generalization error of an HDNN and an IDNN versus percent of total samples used in training, for four DMM problems: (a) narrow-repeated, (b) narrow-unrepeated, (c) wide-repeated, (d) wide-unrepeated.]
Training for the HDNN was identical to that of the IDNN except that error was propagated back across the
internal delays as in Wan (1993) .
Figure 4 plots generalization error versus percentage of possible strings used in
training for the two networks for each of the four DMM problems. If our intuitions
were correct we would expect to see evidence here that the effect of wider terms,
and lack of repetition would have a stronger adverse effect on the HDNN network
than on the IDNN. This is exactly what we see. The position of the curve for the
IDNN network is stable compared to that of the HDNN when changes are made to
the width and repetition factors .
Statistical analysis supports this conclusion. We ran an ANOVA test [Rice, 1988]
with four factors (which network, term width, term repetition, and training set
size) on the data summarized by the graphs of Figure 4. The test detected a significant interaction between the network and width factors (MS(net × width) = 0.3430, F(1, 1824) = 234.4), and between the network and repetition factors (MS(net × repetition) = 0.1181, F(1, 1824) = 80.694). These two interactions are significant at p < 0.001, agreeing with our conclusion that width and repetition each have a stronger effect on the performance of the HDNN network.
Further planned tests reveal that the effects of width and repetition are strong
enough to change which network generalizes better. We ran a one-way ANOVA
test on each problem individually to see which network performs better across the
entire curve. The tests reveal that the HDNN performs with significantly less error than the IDNN in the narrow-repeated problem (MS(error) = 0.0015, MS(net) = 0.5400, F(1, 1824) = 369.0), and in the narrow-unrepeated problem (MS(net) = 0.0683, F(1, 1824) = 46.7). Performance of the IDNN is significantly better in the wide-unrepeated problem (MS(net) = 0.0378, F(1, 1824) = 25.83). All of these comparisons are significant at p < 0.001. The test on the wide-repeated problem finds no significant difference in performance of the two networks (MS(net) = 0.0004, F(1, 1824) = 0.273, p > 0.05).
In addition to confirming our intuitions about the kinds of problems that internal
delays should be helpful in solving, this set of simulations demonstrates the difference between representational and inductive bias. For all DMM problems except
for the wide-unrepeated one, we were able to find, for each network, at least one
set of weights which solve the problem perfectly. Despite the fact that the two
networks are both capable of representing the problems, the differing way in which
they respond to the width and repetition factors demonstrates a difference in their
learning biases.
6 Conclusions
This paper presents a number of interesting ideas concerning TDNNs using both
theoretical and empirical techniques. On the theoretical side, we have precisely
defined the subclass of FSMs which can be represented by TDNNs, the DMM
languages. It is interesting to note that this network architecture which has no recurrent connections is capable of representing languages whose transition diagrams
require loops.
Other ideas were demonstrated using empirical techniques. First, we have shown
that the number of states required to represent an FSM may be a poor predictor of
how difficult the language is to learn. We were able to learn a 2048-state FSM using
a small percentage of the possible training examples. This is possible because of
the close match between the representational bias of the network, and the language
learned.
Second, we presented a set of simulations which demonstrated the utility of internal
delays in a TDNN. These delays were shown to improve generalization on problems
composed of features over short time intervals which reappear repeatedly.
Third, that same set of simulations highlights the difference between representational bias, and inductive bias. Though these two terms are sometimes used interchangeably in the theoretical literature, this work shows that the two concepts are,
in fact, separable.
References
[Clouse et al., 1994] Clouse, D. S., Giles, C. L., Horne, B. G., and Cottrell, G. W. (1994).
Learning large debruijn automata with feed-forward neural networks. Technical Report
CS94-398, University of California, San Diego, Computer Science and Engineering Dept.
[Horne and Hush, 1994] Horne, B. G. and Hush, D. R. (1994). On the node complexity
of neural networks. Neural Networks, 7(9):1413-1426.
[Kohavi, 1978] Kohavi, Z. (1978). Switching and Finite Automata Theory. McGraw-Hill,
Inc., New York, NY, second edition.
[Lang et al., 1990] Lang, K., Waibel, A., and Hinton, G. (1990). A time-delay neural
network architecture for isolated word recognition. Neural Networks, 3(1):23-44.
[Rice, 1988] Rice, J. A. (1988). Mathematical Statistics and Data Analysis. Brooks/Cole
Publishing Company, Monterey, California.
[Waibel et al., 1989] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K.
(1989). Phoneme recognition using time-delay neural networks. IEEE Transactions on
Acoustics, Speech and Signal Processing, 37(3):328-339.
[Wan, 1993] Wan, E. A. (1993). Time series prediction by using a connectionist network
with internal delay lines. In Weigend, A. S. and Gershenfeld, N. A., editors, Time Series
Prediction: Forecasting the Future and Understanding the Past. Addison Wesley.
| 1275 |@word trial:5 stronger:2 simulation:12 pick:1 recursively:1 series:3 contains:3 daniel:1 cleared:1 past:1 current:5 com:2 lang:4 activation:2 ij1:1 must:2 cottrell:5 subsequent:1 confirming:1 offunctions:1 remove:1 plot:2 update:1 fewer:2 short:3 accepting:2 provides:1 completeness:1 node:14 five:2 mathematical:1 along:3 consists:1 indeed:1 behavior:6 company:1 window:4 horne:6 notation:1 what:1 kind:4 interpreted:1 string:12 differing:1 nj:4 ended:1 every:2 subclass:3 exactly:4 demonstrates:2 uk:32 unit:7 before:2 engineering:3 io:1 switching:1 despite:1 path:1 limited:1 yj:1 implement:1 definite:3 reappear:1 empirical:2 reject:2 significantly:2 word:1 close:3 bill:1 equivalent:7 demonstrated:2 automaton:2 insight:1 updated:1 diego:3 construction:1 recognition:3 labeled:1 removed:1 yk:8 ran:2 intuition:4 complexity:1 trained:3 solving:1 completely:1 represented:6 various:2 alphabet:1 detected:1 whose:3 solve:1 statistic:1 think:1 hanazawa:1 online:1 interaction:2 ioi:1 loop:1 representational:8 description:1 regularity:2 perfect:1 unrepeated:5 help:2 wider:1 recurrent:1 strong:1 indicate:1 concentrate:1 correct:1 sand:1 require:1 generalization:4 mapping:6 cole:1 individually:1 repetition:7 create:1 always:1 longest:1 detect:1 helpful:1 entire:1 accept:2 hidden:11 reproduce:1 classification:1 construct:2 never:1 identical:3 represents:4 look:1 future:1 report:2 connectionist:1 randomly:2 composed:3 delayed:1 familiar:1 replaced:1 negation:1 wijk:1 capable:4 fsm:6 isolated:1 theoretical:3 minimal:3 stopped:2 earlier:2 giles:6 boolean:2 planned:1 deviation:1 subset:1 predictor:1 delay:35 reported:1 lee:1 wan:6 li:1 summarized:1 includes:2 inc:1 register:1 performed:1 later:1 capability:5 phoneme:1 rejecting:2 none:1 detector:5 definition:1 propagated:1 gain:1 knowledge:1 ut:8 back:3 feed:3 wesley:1 follow:1 though:2 dmm:23 lastly:1 hand:1 su:2 propagation:1 lack:1 reveal:2 effect:4 concept:1 inductive:9 ll:1 interchangeably:1 width:6 uniquely:1 timedelay:1 hill:1 demonstrate:3 performs:2 percent:3 functionally:3 refer:1 significant:4 stretched:1 language:20 had:1 stable:1 longer:1 recent:4 showed:1 jolla:2 seen:2 minimum:1 greater:1 signal:1 ii:4 sliding:1 technical:1 match:2 concerning:1 equally:1 prediction:3 fsms:1 represent:6 sometimes:1 confined:1 tdnns:19 addition:1 ures:1 iiii:1 interval:1 diagram:4 kohavi:3 call:2 split:1 enough:3 iii:1 independence:2 fit:3 architecture:8 perfectly:3 idea:3 computable:4 shift:1 whether:1 utility:2 forecasting:1 speech:2 york:1 repeatedly:1 generally:1 monterey:1 outperform:1 percentage:4 shifted:2 discrete:1 four:7 anova:2 gershenfeld:1 graph:2 weigend:1 respond:1 almost:1 home:3 investigates:1 comparable:1 layer:8 strength:1 precisely:1 archi:1 speed:1 span:1 separable:1 according:1 waibel:5 representable:5 poor:1 describes:1 across:3 smaller:1 agreeing:1 making:1 taken:1 equation:7 addison:1 serf:1 snet:4 generalizes:2 eight:2 save:1 original:1 remaining:1 include:4 publishing:1 unable:1 seven:1 induction:7 length:3 relationship:1 equivalently:1 difficult:3 design:1 perform:1 finite:6 supporting:1 hinton:2 ucsd:2 tect:1 lb:1 required:5 connection:2 tap:4 california:4 acoustic:1 learned:2 narrow:9 hush:3 brook:1 able:2 bar:1 fp:3 including:1 memory:3 representing:4 improve:1 lk:1 tdnn:17 epoch:3 literature:1 understanding:1 lacking:1 expect:1 highlight:1 idnn:32 interesting:2 clouse:7 versus:2 editor:1 squashing:1 changed:1 repeat:2 copy:1 bias:10 side:2 institute:2 wide:8 absolute:1 curve:2 transition:6 computes:4 forward:3 made:1 san:3 
far:1 transaction:1 mcgraw:1 incoming:2 shikano:1 learn:3 ca:2 constructing:1 domain:1 edition:1 repeated:7 je:1 ff:1 ny:1 garrison:1 momentum:1 position:1 third:1 specific:2 symbol:1 reproduction:1 evidence:1 consist:1 exists:1 nec:4 suited:1 simply:1 determines:1 rice:3 careful:1 adverse:1 change:2 specifically:1 determined:1 except:3 total:1 discriminate:1 la:2 internal:8 support:1 dept:3 princeton:2 |
Neural Network Modeling of Speech and Music
Signals
Axel Röbel
Technical University Berlin, Einsteinufer 17, Sekr. EN-8, 10587 Berlin, Germany
Tel: +49-30-31425699, FAX: +49-30-31421143, email: roebel@kgw.tu-berlin.de
Abstract
Time series prediction is one of the major applications of neural networks. After a short introduction into the basic theoretical foundations
we argue that the iterated prediction of a dynamical system may be interpreted as a model of the system dynamics. By means of RBF neural
networks we describe a modeling approach and extend it to be able to
model instationary systems. As a practical test for the capabilities of the
method we investigate the modeling of musical and speech signals and
demonstrate that the model may be used for synthesis of musical and
speech signals.
1 Introduction
Since the formulation of the reconstruction theorem by Takens [10] it has been clear that
a nonlinear predictor of a dynamical system may be directly derived from a system's time
series. The method has been investigated extensively and with good success for the prediction of time series of nonlinear systems. Especially the combination of reconstruction
techniques and neural networks has shown good results [12].
In our work we extend the ideas of predicting nonlinear systems by the more demanding
task of building system models, which are able to resynthesize the system's time series. In
the case of chaotic or strange attractors the resynthesis of identical time series is known to
be impossible. However, the modeling of the underlying attractor leads to the possibility
to resynthesize time series which are consistent with the system dynamics. Moreover, the
models may be used for the analysis of the system dynamics, for example the estimation
of the Lyapunov exponents [6]. In the following we investigate the modeling of music and
speech signals, where the system dynamics are known to be instationary. Therefore, we
develop an extension of the modeling approach, such that we are able to handle instationary
systems.
In the following, we first give a short review concerning the state space reconstruction from
time series by delay coordinate vectors, a method that has been introduced by Takens [10]
and later extended by Sauer et al. [9]. Then we explain the structure of the neural networks we used in the experiments and the enhancements necessary to be able to model
instationary dynamics. As an example we apply the neural models to a saxophone tone
and a speech signal and demonstrate that the signals may be resynthesized using the neural
models. Furthermore, we discuss some of the problems and outline further developments
of the application.
2 Reconstructing attractors
Assume an n-dimensional dynamical system $f(\cdot)$ evolving on an attractor A. A has fractal dimension d, which often is considerably smaller than n. The system state is observed through a sequence of measurements $h(z)$, resulting in a time series of measurements $y_t = h(z(t))$. Under weak assumptions concerning $h(\cdot)$ and $f(\cdot)$ the fractal embedding theorem [9] ensures that, for $D > 2d$, the set of all delayed coordinate vectors

$$\vec{y}_t = (y_t, y_{t-T}, \ldots, y_{t-(D-1)T}) \qquad (1)$$

with an arbitrary delay time T, forms an embedding of A in the D-dimensional reconstruction space. We call the minimal D, which yields an embedding of A, the embedding
dimension $D_e$. Because an embedding preserves characteristic features of A, in particular it is one-to-one, it may be employed for building a system model. For this purpose the reconstruction of the attractor is used to uniquely identify the system's state, thereby establishing the possibility of uniquely predicting the system's evolution. The prediction function may be represented by a hyperplane over the attractor in a (D + 1)-dimensional space. By iterating this prediction function we obtain a vector valued system model which, however, is valid only at the respective attractor.

For the reconstruction of instationary system dynamics we confine ourselves to the case of slowly varying parameters and model the instationary system using a sequence of attractors.
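A sketch of the delay-coordinate construction of Equation (1) (our own Python illustration; the delay T, the dimension D and the toy signal are arbitrary example values, not the settings used in the experiments below):

```python
import numpy as np

def delay_vectors(y, D, T):
    """Matrix of delay coordinate vectors (y_t, y_{t-T}, ..., y_{t-(D-1)T})."""
    y = np.asarray(y, dtype=float)
    n = len(y) - (D - 1) * T
    return np.stack([[y[t + (D - 1) * T - k * T] for k in range(D)]
                     for t in range(n)])

# Hypothetical usage on a toy signal with D = 6, T = 1:
signal = np.sin(np.linspace(0, 20, 200))
X = delay_vectors(signal, D=6, T=1)
X, targets = X[:-1], signal[6:]        # align each state with its next sample
```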
3 RBF neural networks
There are different topologies of neural networks that may be employed for time series
modeling. In our investigation we used radial basis function networks which have shown
considerably better scaling properties, when increasing the number of hidden units, than
networks with sigmoid activation function [8]. As proposed by Verleysen et al. [11] we
initialize the network using a vector quantization procedure and then apply backpropagation training to finally tune the network parameters. The tuning of the parameters yields
an improvement factor of about ten in prediction error compared to the standard RBF network approach [8, 3] . Compared to earlier results [7] the normalization of the hidden layer
activations yields a small improvement in the stability of the models.
The resulting network function for m-dimensional vector valued output is of the form
$$\vec{N}(\vec{x}) = \sum_j \vec{w}_j \, \frac{\exp\big(-\big(\frac{\vec{c}_j - \vec{x}}{\sigma_j}\big)^2\big)}{\sum_l \exp\big(-\big(\frac{\vec{c}_l - \vec{x}}{\sigma_l}\big)^2\big)} + \vec{b} \qquad (2)$$
[Fig. 1: Input/output structure of the neural model: the delayed inputs x(i), x(i-T), x(i-2T), x(i-3T) together with the control input k(i) are mapped to the predicted samples x(i+1), ..., x(i+T).]
where $\sigma_j$ represents the standard deviation of the Gaussian, the input $\vec{x}$ and the centers $\vec{c}_j$ are n-dimensional vectors, and $\vec{b}$ and the $\vec{w}_j$ are m-dimensional parameters of the network.
Networks of the form of Equation (2) with a finite number of hidden units are able to approximate arbitrarily closely all continuous mappings $R^n \to R^m$ [4]. This universal approximation
property is the foundation of using neural networks for time series modeling, where we
denote them as neural models. In the context of the previous section the neural models are
approximating the systems prediction function.
To be able to represent instationary dynamics, we extend the network according to figure 1
to have an additional input, that enables the control of the actual mapping
$$\vec{N}(\vec{x}, k(i)) \qquad (3)$$
This model is close to the Hidden Control Neural Network described in [2]. From the universal approximation properties of the RBF-networks stated above it follows, that eq. (3)
with appropriate control sequence k( i) is able to approximate any sequence of functions.
In the context of time series prediction the value i represents the actual sample time. The
control sequence may be optimized during training, as described in [2]. The optimization of k(i) requires prohibitively large computational power if the number of different control values, i.e. the domain of k, is large. However, as long as the system's instationarity is described by a smooth function of time, we argue that it is possible to select k(i) to be a fixed linear function of i. With the preselected k(i) the training of the network adapts the parameters $\vec{c}_j$ and $\sigma_j$ such that the model evolution closely follows the system's instationarity.
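A minimal sketch of the normalized RBF network of Equation (2), extended with the control input of Equation (3) (our own Python illustration; the layer sizes, the random initialization and the choice of simply appending the control value to the input vector are assumptions, not details taken from the paper):

```python
import numpy as np

class ControlledRBF:
    """Normalized radial basis function network N(x, k): the control value k(i)
    is appended to the delay vector x before the hidden activations are computed."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=(n_hidden, n_in + 1))
        self.sigma = np.ones(n_hidden)
        self.W = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.b = np.zeros(n_out)

    def __call__(self, x, k):
        z = np.append(x, k)
        act = np.exp(-np.sum(((self.centers - z) / self.sigma[:, None]) ** 2, axis=1))
        act /= act.sum()                   # normalized hidden-layer activations
        return self.W @ act + self.b

# Hypothetical usage: 10 delayed samples plus one control value, predicting 5 samples.
net = ControlledRBF(n_in=10, n_hidden=200, n_out=5)
y_next = net(np.zeros(10), k=-0.8)
```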
4 Neural models
As is shown in figure I we use the delayed coordinate vectors and a selected control sequence to train the network to predict the sequence of the following T time samples. The
782
A. Robel
vector valued prediction avoids the need for a further interpolation of the predicted samples. Otherwise, an interpolation would be necessary to obtain the original sample frequency, but, because the Nyquist frequency is not regarded in choosing T, is not straightforward to achieve.
After training we initialize the network input with the first input vector (X'o, k(O)) of the
time series and iterate the network function shifting the network input and using the latest
output unit to complete the new input. The control input may be copied from the training
phase to resynthesize the training signal or may be varied to emulate another sequence of
system dynamics.
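The iterated synthesis can then be sketched as follows (our own Python fragment; `predict` stands in for the trained network, for instance the `ControlledRBF` sketch above, and the dummy predictor, initial vector and control sequence are made-up examples):

```python
import numpy as np

def synthesize(predict, x0, control, block=5):
    """Iterated prediction: feed the model's own output back into its input.
    predict(x, k) must return the next `block` samples for delay vector x and
    control value k."""
    x = np.array(x0, dtype=float)
    out = []
    for k in control:
        y = np.asarray(predict(x, k))
        out.extend(y)
        x = np.concatenate([x[block:], y])   # shift the input, append new samples
    return np.array(out)

# Hypothetical usage with a dummy predictor standing in for the trained network:
dummy = lambda x, k: 0.9 * x[-5:] + 0.01 * k
signal = synthesize(dummy, x0=np.zeros(10), control=np.linspace(-0.8, 0.8, 100))
```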
The question that has to be posed in this context is concerned with the stability of the
model. Due to the prediction error of the model the iteration will soon leave the reconstructed attractor. Because there exists no training data from the neighborhood of the attractor, the minimization of the prediction error of the network does not guarantee the stability of the model [5]. Nevertheless, as we will see in the examples, the neural models are
stable for at least some parameters D and T.
Due to the high density of training data the method for stabilizing dynamical models presented in [5] is difficult to apply in our situation. Another approach to increase the model
stability is to lower the gradient of the prediction function for the directions normal to the
attractor. This may be obtained by disturbing the network input during training with a
small noise level. While conceptually straightforward, we found that this method is only
partly successful. While the resulting prediction function is smoother in the neighborhood
of the attractor, the prediction error for training with noise is considerably higher than expected from the noise-free results, such that the overall effect often is negative. To circumvent the problems of training with noise, further investigations will consider an optimization function with regularization that directly penalizes high derivatives of the network with respect to the input units [1]. The stability of the models is a major subject of further research.
5 Practical results
We have applied our method to two acoustic time series: a single saxophone tone, consisting of 16000 samples sampled at 32 kHz, and a speech signal of the word manna¹. The latter
time series consists of 23000 samples with a sampling rate of 44.1kHz. Both time series
have been normalized to stay within the interval [-1, 1]. The estimation of the dimension
of the underlying attractors yields a dimension of about 2-3 in both cases.
We chose the control input k(i) to be linearly increasing from -0.8 to 0.8. We found stable
models for both time series using D > 5. For the parameter T in particular we observed
a considerable impact on the model quality. While a smaller T results in better one-step-ahead
prediction, the iterated model often becomes unstable. This might be explained by the
decrease in variation within the prediction hyperplane that has to be learned. For small T
the model tends to become linear and does not capture the nonlinear characteristics of the
system. Therefore the iteration of those models failed.
Too large a value of T results in an insufficient one-step-ahead prediction, which pushes
the model far away from the attractor, also producing unstable behavior.
¹The name of our parallel computer.
Fig. 2: Synthesized saxophone signal and power spectrum estimation for the original
(solid) and synthesized (dashed) signal.
Fig. 3: Varying the synthesized tone by varying the control input sequence.
5.1 Modeling a saxophone
In the following we consider the results for the saxophone model. The model we present
consists of 10 input units, 200 hidden units and 5 output units and was trained with additional Gaussian noise at the input. The standard deviation of the noise is 0.0005 and the
RMS training error obtained is 0.005. The resulting saxophone model is able to resynthesize a signal which is nearly indistinguishable from the original one. The resynthesized
time series is shown in figure 2. The time series follows the original one with a small phase
shift, which stems from a small difference in the onset of the model. Figure 2 also shows the
power spectrum of the saxophone signal and of the neural model. From the spectrum we see the close resemblance of the sounds.
One major demand for the practical application of the proposed musical instrument models is the possibility to control the synthesized sound. At the present state there exists only
one control input to the model. Nevertheless, it is interesting to investigate the effect of
varying the control input of the model. We tried different control input sequences to synthesize saxophone tones. It turns out that the model remains stable, such that we are able to
control the envelope of the sound. An example of a tone with increased duration is shown
in figure 3. In this example the control input first follows the trained version, then remains
constant to produce a longer duration of the tone, and then increases to reproduce the decay
of the tone from the trained time series.
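As an illustration of the kind of control-sequence editing described here, one might stretch the trained linear ramp by holding it constant for a while; the helper and the segment lengths are hypothetical, not taken from the paper.

```python
import numpy as np

def stretched_control(train_len, hold_len, lo=-0.8, hi=0.8, hold_at=0.2):
    """Build a control sequence that follows the trained ramp, holds a constant
    value to lengthen the tone, then resumes the ramp to reproduce the decay."""
    ramp = np.linspace(lo, hi, train_len)
    split = int(np.searchsorted(ramp, hold_at))
    return np.concatenate([ramp[:split],
                           np.full(hold_len, ramp[split]),
                           ramp[split:]])
```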
Fig. 4: Original and synthesized signal of the word manna.
5.2 Modeling a speech signal
For modeling the time series of the spoken word manna we used a network similar to the saxophone model. Due to the increased instationarity of the signal we needed
an increased number of RBF units in the network. The best results so far have been
obtained with a network of 400 hidden units, delay time T = 8, output dimension 8 and
input dimension 11.
In figure 4 we show the original and the resynthesized signal. The quality of the model is
not as high as in the case of the saxophone. Nevertheless, the word is quite understandable. From the figure we see that the main problems stem from the transitions between
consecutive phonemes. These transitions are rather quick in time and, therefore, there exists only a small amount of data describing the dynamics of the transitions. We assume that
more training examples of the same word will cure the problem. However, it will probably
require a well trained speaker to reproduce the dynamics in speaking the same word twice.
6 Further developments
There are two practical applications that directly follow from the presented results. The
first one is to synthesize music signals. To meet musicians' demands, we need to enhance the control of the synthesized signals. Therefore, in the future we will try to enlarge
the models, incorporating different flavors of sound into the same model and adding additional control inputs. Especially we plan to build models for different volume and pitch.
As a second application we will further investigate the possibilities for using the neural
models as a speech synthesizer. An interesting topic of further research would be the extension of the model with an intonation control input that incorporates the possibility to
synthesize different intonations of the same word from one model.
7 Summary
The article describes a methodology to build instationary models from time series of dynamical systems. We give theoretical arguments for the universality of the models and
discuss some of the restrictions and actual problems. As a practical test for the method we
apply the models to the demanding task of the synthesis of musical and speech signals. It is
demonstrated that the models are capable of resynthesizing the trained signals. At the present
state, the envelope and duration of the synthesized signals can be controlled. Intended further developments have been briefly described.
References
[1] C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural
Computation, 7(1): 108-116, 1995.
[2] E. Levin. Hidden control neural architecture modelling of nonlinear time varying
systems and its applications. IEEE Transactions on Neural Networks, 4(2):109-116,
1993.
[3] J. Moody and C. Darken. Fast learning in networks of locally-tuned processing units.
Neural Computation, 1:281-294, 1989.
[4] J. Park and I. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2):246-257, 1991.
[5] J. C. Principe and J.-M. Kuo. Dynamic modelling of chaotic time series with neural
networks. In G. Tesauro, D. S. Touretzky, and T. Leen, editors, Neural Information
Processing Systems 7 (NIPS 94), 1995.
[6] A. Robel. Neural models for estimating Lyapunov exponents and embedding dimension from time series of nonlinear dynamical systems. In Proceedings of the Intern.
Conference on Artificial Neural Networks, ICANN'95, Vol. II, pages 533-538, Paris,
1995.
[7] A. Robel. RBF networks for synthesis of speech and music signals. In 3. Workshop Fuzzy-Neuro-Systeme '95, Darmstadt, 1995. Deutsche Gesellschaft für Informatik e.V.
[8] A. Robel. Scaling properties of neural networks for the prediction of time series. In
Proceedings of the 1996 IEEE Workshop on Neural Networks for Signal Processing
VI, 1996.
[9] T. Sauer, J. A. Yorke, and M. Casdagli. Embedology. Journal of Statistical Physics,
65(3/4):579-616, 1991.
[10] F. Takens. Detecting Strange Attractors in Turbulence, volume 898 of Lecture Notes
in Mathematics (Dynamical Systems and Turbulence, Warwick 1980), pages 366-381. D.A. Rand and L.S. Young, Eds. Berlin: Springer, 1981.
[11] M. Verleysen and K. Hlavackova. An optimized RBF network for approximation of
functions. In Proceedings of the European Symposium on Artificial Neural Networks,
ESANN'94, 1994.
[12] A. S. Weigend and N. A. Gershenfeld. Time Series Prediction: Forecasting the Future and Understanding the Past. Addison-Wesley Pub. Comp., 1993.
303 | 1,277 | An Hierarchical Model of Visual Rivalry
Peter Dayan
Department of Brain and Cognitive Sciences
E25-21O Massachusetts Institute of Technology
Cambridge, MA 02139
dayan@psyche.mit.edu 1
Abstract
Binocular rivalry is the alternating percept that can result when
the two eyes see different scenes. Recent psychophysical evidence
supports an account for one component of binocular rivalry similar
to that for other bistable percepts. We test the hypothesis 19, 16, 18
that alternation can be generated by competition between top-down cortical explanations for the inputs, rather than by direct
competition between the inputs. Recent neurophysiological evidence shows that some binocular neurons are modulated with
the changing percept; others are not, even if they are selective between the stimuli presented to the eyes. We extend our model to
a hierarchy to address these effects.
1 Introduction
Although binocular rivalry leads to distinct perceptual distress, it is revealing
about the mechanisms of visual information processing. The first accounts for
rivalry argued on the basis of phenomena such as increases in thresholds for test
stimuli presented in the suppressed eye 24, 8, 3 that there was an early competitive
process, the outcome of which meant that the system would just ignore input
from one eye in favour of the other. Various experiments have suggested that
simple input competition cannot be the whole story. For instance, in a case in
which rivalry is between a vertical grating in the left eye and a horizontal one
in the right, and in which a vertical grating is presented prior to rivalry to cause
adaptation, the relative suppression of vertical during rivalry is independent of
1 I am very grateful to Bart Anderson, Adam Elga, Geoff Goodhill, Geoff Hinton, David
Leopold, Earl Miller, Read Montague, Bruno Olshausen, Pawan Sinha, Rich Zemel, and
particularly Zhaoping Li and Tommi Jaakkola for their comments on earlier drafts and
discussions. This work was supported by the NIH.
the eye of origin of the adapting grating. 4 Even more compelling, if the rivalrous
stimuli in the two eyes are switched rapidly, the percept switches only slowly - competition is more between coherent percepts than merely inputs. Rivalry is an
attractive paradigm for studying models of cortex like the Helmholtz machine 12 , 7
that construct coherent percepts, and in particular for studying hierarchical models,
because of electrophysiological data on the behaviour during rivalry of cells at
different levels of the visual processing hierarchy.16
Leopold & Logothetis16 trained monkeys to report their percepts during rivalrous
and non-rivalrous stimuli whilst recording from neurons in V1/2 and V4. Important
findings are that striate monocular neurons are unaffected by rivalry; some striate
binocular neurons that are selective between the stimuli modulate their activities
during rivalry; others do not; some fire more when their preferred stimuli are
suppressed; others still are only selective during rivalry. In this paper we consider
one form of analysis-by-synthesis model of cortical processing7 and show how it
can exhibit rivalry between explanations in the case that the eyes receive different
input. This model can provide an account for many of the behaviours described
above.
2 The Model
Figure 1a shows the full generative model. Units in layers y (modeling V1) and
x and w (modeling early and late extra-striate areas) are all binocular and jointly
explain successively more complex features in the input z according to a top-down
generative model. Apart from the half bars in y, the model is similar to that
learned by the Helmholtz machine 12, 7 for which increasing complexity in higher
layers rather than the increasing input scale is key. In this case, for instance, w_2
specifies the occurrence of vertical bars anywhere in the 8 x 8 input grids; x_16
specifies the rightmost vertical bar; and y_31 and y_32 the top and bottom half of this
vertical bar. These specifications are provided by a top-down generative model in
which the activations of units are specified by probabilities such as
P[y_i = 1 | x] = σ(b_y + Σ_k x_k J^{xy}_{ki}), where the sum over k runs over all the units in the x layer, and σ(·)
is a robust normal distribution function. We model the percept in terms of the
activation in the w layer.
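Read as a sketch, generating a child layer from the layer above takes the following form; the random weights, the logistic function standing in for the paper's robust activation, and the layer sizes are assumptions of this illustration, not the trained model.

```python
import numpy as np

def sample_layer(parent, J, bias, rng):
    """Sample a binary child layer given a binary parent layer.

    parent : binary activities of the layer above, shape (K,)
    J      : top-down generative weights, shape (K, N)
    bias   : generative bias of the child layer (scalar or shape (N,))
    """
    p_on = 1.0 / (1.0 + np.exp(-(bias + parent @ J)))  # P[child_i = 1 | parent]
    return (rng.random(p_on.shape) < p_on).astype(int)

rng = np.random.default_rng(0)
w = rng.integers(0, 2, size=2)                              # w layer
x = sample_layer(w, rng.normal(size=(2, 16)), -1.5, rng)    # x layer
y = sample_layer(x, rng.normal(size=(16, 32)), -2.7, rng)   # y layer (half bars)
```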
We model differing input contrasts by representing the input to z_i by d_i, where
P[z_i = 1] = σ(d_i) and all the z_i are independent. Recognition is formally the statistical inverse to generation, and should produce a distribution P[w, x, y | d] over all
the choices of the hidden activations. We use a mean field inversion method,13
using a factorised approximation Q[w, x, y; μ, ξ, ψ] = Q[w; μ] Q[x; ξ] Q[y; ψ], with
Q[w; μ] = Π_i σ(μ_i)^{w_i} (1 − σ(μ_i))^{1−w_i}, etc, and fitting the parameters μ, ξ, ψ to minimise the approximation cost:

F[μ, ξ, ψ] = Σ_{w,x,y} Q[w, x, y; μ, ξ, ψ] log ( Q[w, x, y; μ, ξ, ψ] / ( P[z; d] P[w, x, y | z] ) ).
We report the mean activities of the units in the graphs and use a modified gradient
descent method to find appropriate parameters. Figure 1b shows the resulting
activities of units in response to binocular horizontal (i) and vertical (ii) bars, and
also the two equally likely explanations for rivalrous input (iii and iv). For rivalry,
Figure 1: a) Hierarchical generative model for 8 x 8 bar patterns across the two eyes. Units are
depicted by their net projective (generative) fields, and characteristic weights are shown. Even though
the net projective field of x_1 is the top horizontal bar in both eyes, note that it generates this by increasing
the probability that units y_1 and y_9 in the y layer will be active, not by having direct connections to
the input z. Unit w_1 connects to x_1, x_2, ..., x_8 through J^{wx} = 0.8; x_16 connects to y_31, y_32 through
J^{xy} = 1.0; and y_32 connects to the bottom right half vertical bar through J^{yz} = 5.8. Biases are
b_w = -0.75, b_x = -1.5, b_y = -2.7 and b_z = -3.3. b) Recognition activity in the network for four
different input patterns. The units are arranged in the same order as (a), and white and black squares
imply activities for the units whose means are less than and greater than 0.5. (i) and (ii) represent normal
binocular stimulation; (iii) and (iv) show the two alternative stable states during rivalrous stimulation,
without the fatigue process.
there is direct competition in the top left hand quadrant of z, which is reflected in the
competition between y_1, y_3 and y_17, y_21. However, the input regions (top right of L
and bottom left of R) for which there is no competition require the constant activity
of explanations y_9, y_11, y_18 and y_22. Under the generative model, the coactivation
of y_1 and y_9 without x_1 is quite unlikely (P[x_1 = 0 | y_1 = 1, y_9 = 1] = 0.1), which is
why x_1, x_3 and also w_1 become active with y_1 and y_3.
Given just gradient descent for the rivalrous stimulus, the network would just find
one of the two equally good (or rather bad) solutions in figure 1b(iii,iv). Alternation
ensues when descent is augmented by a fatigue process:
ψ_l(t+1) = ψ_l(t) + δ( −∇_{ψ_l} F[μ, ξ, ψ] + α(β ψ_l(t)) − ψ'_l(t) )
ψ'_l(t+1) = ψ'_l(t) + δ( ψ_l(t) − β ψ'_l(t) ),
where β is a decay term. In all the simulations, α = 0.5, β = 0.1 and δ = 0.01.
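A rough numerical sketch of this fatigue loop is shown below; it treats the gradient term as given, uses the constants just quoted, and follows our reconstruction of the garbled update equations, so the exact functional form is an assumption.

```python
def fatigue_step(psi, psi_fat, grad_F_psi, alpha=0.5, beta=0.1, delta=0.01):
    """One update of the fatiguing mean-field parameters for the y layer.

    psi        : current mean-field parameters psi_l(t)
    psi_fat    : fatigue variables psi'_l(t)
    grad_F_psi : gradient of the approximation cost F with respect to psi
    """
    psi_new = psi + delta * (-grad_F_psi + alpha * (beta * psi) - psi_fat)
    psi_fat_new = psi_fat + delta * (psi - beta * psi_fat)
    return psi_new, psi_fat_new
```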
We adopted various heuristics to simplify the process of using this rather cumbersome mean field model. First, fatigue is only implemented for the units in the y
A Hierarchical Model of Visual Rivalry
51
layer, and the ψ follow the equivalent of the dynamical equations above. Although
adaptation processes can clearly occur at many levels in the system, and indeed
have been used to try to diagnose the mechanisms of rivalry,15 their exact form is
not clear. Bialek & DeWeese l argue that the rate of a switching process should be
adaptive to the expected rate of change of the associated signal on the basis of prior
observations. This is clearly faster nearer to the input.
The second heuristic is that rather than perform gradient descent for the nonfatiguing units, the optimal values of μ and ξ are calculated on each iteration by
numerically solving the corresponding mean field fixed-point equations.
The dearth of connections in the network of figure 1a allows μ and ξ to be calculated
locally at each unit in an efficient manner. Whether this is reasonable depends on
the time constants of settling in the mean field model with respect to the dynamics
of switching, and, more particularly on the way that this deterministic model is
made appropriately stochastic.
Figure 2a shows the resulting activities during rivalry of units at various levels of
the hierarchy including the fatigue process. Broadly, the competing explanations
in figure 1b(iii,iv), i.e. horizontal and vertical percepts, alternate, and units without
competing inputs, such as Y9, are much less modulated than the others, such as Yl.
The activity of Y9 is slightly elevated when horizontal bars are dominant, based on
top-down connections. The activities of the units higher up, such as Xl and WI, do
not decrease to 0 during the suppression period for horizontal bars, leaving weak
activity during suppression. Many of the modulating cells in monkeys were not
completely silent during their periods of less activity.16 Figure 2b shows that the
hierarchical version of the model also behaves in accordance with experimental
results on the effects of varying the input contrast,17, 10, 22, 16 which suggest that
increasing the contrast in both eyes decreases the period of the oscillation (ie increases the frequency), and increasing the contrast in just one eye decreases the
suppression period for that eye much more than it increases its dominance period.
3 Discussion
Following Logothetis and his colleagues 19, 16, 18 (see also Grossberg 11) we have
suggested an account of rivalry based on competing top-down hierarchical explanations, and have shown how it models various experimental observations on
rivalry. Neurons explain inputs in virtue of being capable of generating their activities through a top-down statistical generative model. Competition arises between
higher-level explanations of overlapping active regions (ie those involving contrast
changes) of the input rather than between inputs themselves. Note that alternating
the input between the two eyes would have no effect on this behaviour of the
model, since explanations are competing rather than inputs. Of course, the model
is greatly simplified - for instance, it only has units that are not modulating with
the percept in the earliest binocular layer (layer y), whereas in the monkeys, more
than half the cells in V4 were unmodulated during rivalry.I6
The model's accounts of the neurophysiological findings described in the introduction are: i) monocular cells will generally not be modulated if they are involved in
Figure 2: a) Mean activities of units at three levels of the hierarchy in response to rivalrous stimuli
with input strengths l = r = 1.75. b) Contrast dependence of the oscillation periods for equal input
strengths, and when l = 1.25 and r is varied.
explaining local correlations in the input from a single eye. This model does not
demonstrate this explicitly, but would if, for instance, each of the inputs Zi actually
consisted of two units, which are always on or off together. In this case one could
get a compact explanation of the joint activities with a set of monocular units which
would then not be modulated. ii) Units such as Y9 in the hierarchical model are
binocular, are selective between the binocular version of the stimuli, and are barely
modulated with the percept. iii) Units such as YI, Xl and WI are binocular, are
selective between the stimuli, and are significantly modulated with the percept.
The final neurophysiological finding is to do with cells that fire when their preferred
stimuli are suppressed, or fire selectively between the stimuli only during rivalry.
There are no units in this model that are selective between the stimuli and are
preferentially activated during suppression of their preferred stimuli. However, in
a model with more complicated stimulus contingencies, they would emerge to
account for the parts of the stimulus in the suppressed eye that are not accounted
for by the explanation of the overlying parts of the dominant explanation, at least
provided that this residual between the true monocular stimulus and the current
explanation is sufficiently complex as to require explaining itself.
We would expect to find two sets of cells that are activated during the suppressed
period by this residual, some of which will form part of the representation of the
stimulus when presented binocularly and some of which will not. Those that do
not (class A) will only even appear to be selective between the stimuli during
rivalry, and will represent parts of the residual that are themselves explained by
more overarching explanations for parts of the complete (binocularly presented)
stimulus. This suggests the experimental test of presenting binocularly a putative
form of the residual (eg dotted lines for competing horizontal and vertical gratings).
We predict that these cells should be activated.
If there are cells that do participate in the binocular representation, then they will
be selective, but will preferentially fire during suppression (class B). Certainly, the
A Hierarchical Model of Visual Rivalry
53
residual will have a high correlation with the full suppressed pattern, and so a
cell that is selective for part of the residual could have appropriate properties.
However, why should such a cell not fire when the full, but currently suppressed,
pattern is dominant? In monkeys,16 there are fewer class B than class A cells (0
versus 3 of 33 cells in Vl/2; 6 versus 8 of 68 cells in V4). Under the model, we
account for these cells based on a competition between units that represent the
residual and those that represent overlapping parts of the complete pattern. In
binocular viewing, explanations are generally stronger than during rivalry. So
even if both such units participate in representing a binocular stimulus, the cells
representing the residual might not reach threshold during the dominance period.
However, during suppression, they no longer suffer from competition, and so will
be activated. The model's explanation for class B cells seems far less natural than
that for class A cells. One experimental test would be to present the preferred
pattern binocularly, reduce the contrast, and see if these cells are suppressed more
strongly.
The overall model mechanistically has much in common with models which place
the competition in rivalry at the level of binocular oriented cells rather than between
monocular cells. 11 ,2 Indeed, the model is based on an explanation-driven account
for normal binocular processing, so this is to be expected. The advantage of
couching rivalry in terms of explanations is that this provides a natural way of
accounting for top-down influences. In fact, one can hope to study top-down
control through studying its effects on the behaviour of cells during rivalry.
The model suffers from various lacunae. Foremost, it is necessary to model the
stochasticity of switching between explanations. 9, 17 The distributions of dominance times for both humans and monkeys are well characterised by a Γ distribution
(Lehky14 argues that this is descriptive rather than normative), with strong independence between successive dominance periods. Our mean field recognition
process is deterministic. The stochastic analogue would be some form of Markov
chain Monte-Carlo method such as Gibbs sampling. However, it is not obvious
how to incorporate the equivalent of fatigue in a computationally reasonable way.
In any case, the nature of neuronal randomness is subject to significant debate at
present. Note that the recognition model of a stochastic Helmholtz machine 7,6
would be unsuitable, since it is purely feedforward and does not integrate bottomup and top-down information.
We have adopted a very simple mean field approach to recognition, giving up
neurobiological plausibility for convenience. The determinism of the mean field
model in any case rules it out as a complete explanation, but it does at least
show clearly the nature of competition between explanations. The architecture of
the model is also incomplete. The cortex is replete with what we would model
as lateral connections between units within a single layer. We have constructed
generative models in which there are no such direct connections, because they
significantly complicate the mean field recognition method. It could be that these
connections are important for the recognition process,6 but modeling their effect
would require representing them explicitly. This would also allow modeling of the
apparent diffusive process by which patches of dominance spread and alter. In a
complete model, it would also be necessary to account for competition between
eyes in addition to competition between explanations. 24,8,3
54
P. Dayan
Another gap is some form of contrast gain control. 5 The model is quite sensitive
to input contrast. This is obviously important for the effects shown in figures 2,
however the range of contrasts over which it works should be larger. It would be
particularly revealing to explore the effects of changing the contrast in some parts
of images and examine the consequent effects on the spreading of dominance.
References
[1] Bialek, W & DeWeese, M (1995). Random switching and optimal processing in the perception of
ambiguous Signals. Physical Review Letters, 74, 3077-3080.
[2] Blake, R (1989). A neural theory of binocular rivalry. Psychological Review, 96, 145-167.
[3] Blake, R & Fox, R (1974). Binocular rivalry suppression: Insensitive to spatial frequency and
orientation change. Vision Research, 14, 687-692.
[4] Blake, R, Westendorf, DH & Overton, R (1980). What is suppressed during binocular rivalry?
Perception, 9, 223-231.
[5] Carandini, M & Heeger, DJ (1994). Summation and division by neurons in primate visual cortex.
Science, 264, 1333-1336.
[6] Dayan, P & Hinton, GE (1996). Varieties of Helmholtz machine. Neural Networks, 9, 1385-1403.
[7] Dayan, P, Hinton, GE, Neal, RM & Zemel, RS (1995). The Helmholtz machine. Neural Computation,
7,889-904.
[8] Fox, R & Check, R (1972). Independence between binocular rivalry suppression duration and
magnitude of suppression. Journal of Experimental Psychology, 93, 283-289.
[9] Fox, R & Herrmann, J (1967). Stochastic properties of binocular rivalry alternations. Perception and
Psychophysics, 2, 432-436.
[10] Fox, R & Rasche, F (1969). Binocular rivalry and reciprocal inhibition. Perception and Psychophysics,
5,215-217.
[11] Grossberg, S (1987). Cortical dynamics of three-dimensional form, color and brightness perception: 2. Binocular theory. Perception & Psychophysics, 41, 117-158.
[12] Hinton, GE, Dayan, P, Frey, BJ & Neal, RM (1995). The wake-sleep algorithm for unsupervised
neural networks. Science, 268,1158-1160.
[13] Jaakkola, T, Saul, LK & Jordan, MI (1996). Fast learning by bounding likelihoods in sigmoid type
belief networks. Advances in Neural Information Processing Systems, 8, forthcoming.
[14] Lehky, SR (1988). An astable multivibrator model of binocular rivalry. Perception, 17, 215-228.
[15] Lehky, SR & Blake, R (1991). Organization of binocular pathways: Modeling and data related to
rivalry. Neural Computation, 3,44-53.
[16] Leopold, DA & Logothetis, NK (1996). Activity changes in early visual cortex reflect monkeys'
percepts during binocular rivalry. Nature, 379, 549-554.
[17] Levelt, WJM (1968). On Binocular Rivalry. The Hague, Paris: Mouton.
[18] Logothetis, NK, Leopold, DA & Sheinberg, DL (1996). What is rivalling during binocular rivalry.
Nature, 380, 621-624.
[19] Logothetis, NK & Schall, JD (1989). Neuronal correlates of subjective visual perception. Science,
245,761-763.
[20] Matsuoka, K (1984). The dynamic model of binocular rivalry. Biological Cybernetics, 49, 201-208.
[21] Mueller, TJ (1990). A physiological model of binocular rivalry. Visual Neuroscience, 4, 63-73.
[22] Mueller, TJ & Blake, R (1989). A fresh look at the temporal dynamics of binocular rivalry. Biological
Cybernetics, 61, 223-232.
[23] Pearl, J (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San
Mateo, CA: Morgan Kaufmann.
[24] Wales, R & Fox, R (1970). Increment detection thresholds during binocular rivalry suppression.
Perception and PsychophysiCS, 8, 90-94.
[25] Wheatstone, C (1838). Contributions to the theory of vision. I: On some remarkable and hitherto
unobserved phenomena of binocular vision. Philosophical Transactions of the Royal Society of London,
128,371-394.
[26] Wolfe, JM (1986). Stereopsis and binocular rivalry. Psychological Review, 93, 269-282.
304 | 1,278 | Reinforcement Learning for Mixed
Open-loop and Closed-loop Control
Eric A. Hansen, Andrew G. Barto, and Shlomo Zilberstein
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
{hansen,barto,shlomo}@cs.umass.edu
Abstract
Closed-loop control relies on sensory feedback that is usually assumed to be free . But if sensing incurs a cost, it may be costeffective to take sequences of actions in open-loop mode. We describe a reinforcement learning algorithm that learns to combine
open-loop and closed-loop control when sensing incurs a cost. Although we assume reliable sensors, use of open-loop control means
that actions must sometimes be taken when the current state of
the controlled system is uncertain . This is a special case of the
hidden-state problem in reinforcement learning, and to cope, our
algorithm relies on short-term memory. The main result of the paper is a rule that significantly limits exploration of possible memory
states by pruning memory states for which the estimated value of
information is greater than its cost. We prove that this rule allows
convergence to an optimal policy.
1 Introduction
Reinforcement learning (RL) is widely-used for learning closed-loop control policies. Closed-loop control works well if the sensory feedback on which it relies is
accurate, fast, and inexpensive. But this is not always the case. In this paper, we
address problems in which sensing incurs a cost, either a direct cost for obtaining
and processing sensory data or an indirect opportunity cost for dedicating limited
sensors to one control task rather than another. If the cost for sensing is significant,
exclusive reliance on closed-loop control may make it impossible to optimize a performance measure such as cumulative discounted reward. For such problems, we
describe an RL algorithm that learns to combine open-loop and closed-loop control.
By learning to take open-loop sequences of actions between sensing, it can optimize
a tradeoff between the cost and value of sensing.
The problem we address is a special case of the problem of hidden state or partial
observability in RL (e.g., Whitehead &. Lin, 1995; McCallum, 1995). Although we
assume sensing provides perfect information (a significant limiting assumption), use
of open-loop control means that actions must sometimes be taken when the current
state of the controlled system is uncertain. Previous work on RL for partially
observable environments has focused on coping with sensors that provide imperfect
or incomplete information, in contrast to deciding whether or when to sense. Tan
(1991) addressed the problem of sensing costs by showing how to use RL to learn a
cost-effective sensing procedure for state identification, but his work addressed the
question of which sensors to use, not when to sense, and so still assumed closed-loop
control.
In this paper, we formalize the problem of mixed open-loop and closed-loop control
as a Markov decision process and use RL in the form of Q-Iearning to learn an optimal, state-dependent sensing interval. Because there is a combinatorial explosion
of open-loop action sequences, we introduce a simple rule for pruning this large
search space. Our most significant result is a proof that Q-Iearning converges to
an optimal policy even when a fraction of the space of possible open-loop action
sequences is explored.
2 Q-learning with sensing costs
Q-learning (Watkins, 1989) is a well-studied RL algorithm for learning to control
a discrete-time, finite state and action Markov decision process (MDP). At each
time step, a controller observes the current state x, takes an action a, and receives
an immediate reward r with expected value r(x, a). With probability p(x, a, y)
the process makes a transition to state y, which becomes the current state on the
next time step. A controller using Q-Iearning learns a state-action value function,
Q(x, a), that estimates the expected total discounted reward for taking action a
in state x and performing optimally thereafter. Each time step, Q is updated for
state-action pair (x, a) after receiving reward r and observing resulting state y, as
follows:
Q(x, a) ← Q(x, a) + α [ r + γ V(y) − Q(x, a) ],

where α ∈ (0, 1] is a learning rate parameter, γ ∈ [0, 1) is a discount factor, and
V(y) = max_b Q(y, b). Watkins and Dayan (1992) prove that Q converges to an
optimal state-action value function (and V converges to an optimal state value
function) with probability one if all actions continue to be tried from all states,
the state-action value function is represented by a lookup-table, and the learning
rate is decreased in an appropriate manner.
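A compact tabular version of this update, written as a sketch (the state and action spaces, data structures, and default constants are placeholders of this illustration):

```python
from collections import defaultdict

def q_update(Q, x, a, r, y, actions, alpha=0.1, gamma=0.95):
    """One Q-learning backup for the observed transition (x, a, r, y)."""
    v_y = max(Q[(y, b)] for b in actions)           # V(y) = max_b Q(y, b)
    Q[(x, a)] += alpha * (r + gamma * v_y - Q[(x, a)])

Q = defaultdict(float)   # lookup-table of state-action values
```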
If there is a cost for sensing, acting optimally may require a mixed strategy of open-loop and closed-loop control that allows a controller to take open-loop sequences
of actions between sensing. This possibility can be modeled by an MDP with two
kinds of actions: control actions that have an effect on the current state but do not
provide information, and a sensing action that reveals the current state but has no
other effect. We let o (for observation) denote the sensing action and assume it
provides perfect information about the underlying state. Separating control actions
and the sensing action gives an agent control over when to receive sensory feedback,
and hence, control over sensing costs.
When one control action follows another without an intervening sensing action, the
second control action is taken without knowing the underlying state. We model
this by including "memory states" in the state set of the MDP. Each memory
state represents memory of the last observed state and the open-loop sequence of
control actions taken since; because we assume sensing provides perfect information,
Figure 1: A tree of memory states rooted at observed state x. The set of control
actions is {a, b} and the length bound is 2.
remembering this much history provides a sufficient statistic for action selection
(Monahan, 1982). Possible memory states can be represented using a tree like
the one shown in Figure 1, where the root represents the last observed state and
the other nodes represent memory states, one for each possible open-loop action
sequence. For example, let xa denote the memory state that results from taking
control action a in state x. Similarly, let xab denote the memory state that results
from taking control action b in memory state xa. Note that a control action causes
a deterministic transition to a subsequent memory state, while a sensing action
causes a stochastic transition to an observed state - the root of some tree. There
is a tree like the one in figure 1 for each observable state.
This problem is a special case of a partially observable MDP and can be formalized
in an analogous way (Monahan, 1982). Given a state-transition and reward model
for a core MDP with a state set that consists only of the underlying states of a
system (which for this problem we also call observable states), we can define a state-transition and reward model for an MDP that includes memory states in its state
set. As a convenient notation, let p(x, a_1..a_k, y) denote the probability that taking
an open-loop action sequence a_1..a_k from state x results in state y, where both x
and y are states of the underlying system. These probabilities can be computed
recursively from the single-step state-transition probabilities of the core MDP as
follows:

p(x, a_1..a_k, y) = Σ_z p(x, a_1..a_{k-1}, z) p(z, a_k, y).
State-transition probabilities for the sensing action can then be defined as

p(xa_1..a_k, o, y) = p(x, a_1..a_k, y),

and a reward function for the generalized MDP can be similarly defined as

r(xa_1..a_{k-1}, a_k) = Σ_y p(x, a_1..a_{k-1}, y) r(y, a_k),

where the cost of sensing in state x of the core MDP is r(x, o).
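A small sketch of this construction follows; the single-step model p1[x][a][y] and reward table r[y][a] are assumed to be given as nested dictionaries, and all names are illustrative.

```python
def open_loop_prob(p1, x, actions, y):
    """p(x, a_1..a_k, y): probability that the open-loop sequence `actions`,
    taken from observed state x, leaves the underlying system in state y."""
    if not actions:
        return 1.0 if x == y else 0.0
    *prefix, a_k = actions
    return sum(open_loop_prob(p1, x, prefix, z) * p1[z][a_k].get(y, 0.0)
               for z in p1)

def memory_state_reward(p1, r, x, actions):
    """Expected reward r(xa_1..a_{k-1}, a_k) for the last control action
    of an open-loop sequence started at observed state x."""
    *prefix, a_k = actions
    return sum(open_loop_prob(p1, x, prefix, y) * r[y][a_k] for y in p1)
```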
If we assume a bound on the number of control actions that can be taken between
sensing actions (i.e .? a bound on the depth of each tree) and also assume a finite
number of underlying states, the number of possible memory states is finite. It
follows that the MDP we have constructed is a well-defined finite state and action
MDP, and all of the theory developed for Q-Iearning continues to apply, including
its convergence proof. (This is not true of partially observable MDPs in general.)
Therefore, Q-Iearning can in principle find an optimal policy for interleaving control
actions and sensing, assuming sensing provides perfect information.
3 Limiting Exploration
A problem with including memory states in the state set of an MDP is that it
increases the size of the state set exponentially. The combinatorial explosion of
state-action values to be learned raises doubt about the computational feasibility of
this generalization of RL. We present a solution in the form of a rule for pruning each
tree of memory states, thereby limiting the number of memory states that must be
explored. We prove that even if some memory states are never explored, Q-Iearning
converges to an optimal state-action value function. Because the state-action value
function is left undefined for unexplored memory states, we must carefully define
what we mean by an optimal state-action value function.
Definition: A state-action value function is optimal if it is sufficient for generating optimal behavior and the values of the state-action pairs visited when behaving
optimally are optimal.
A state-action value function that is undefined for some states is optimal, by this
definition, if a controller that follows it behaves identically to a controller with a
complete, optimal state-action value function. This is possible if the states for which
the state-action value function is undefined are not encountered when an agent acts
optimally. Barto, Bradtke, and Singh (1995) invoke a similar idea for a different
class of problems.
Let g(xa_1..a_k) denote the expected reward for taking actions a_1..a_k in open-loop
mode after observing state x:

g(xa_1..a_k) = r(x, a_1) + Σ_{i=1}^{k-1} γ^i r(xa_1..a_i, a_{i+1}).

Let h(xa_1..a_k) denote the discounted expected value of perfect information after
reaching memory state xa_1..a_k, which is equal to the discounted Q-value for sensing
in memory state xa_1..a_k minus the cost for sensing in this state:

h(xa_1..a_k) = γ^k Σ_y p(xa_1..a_k, o, y) V(y) = γ^k ( Q(xa_1..a_k, o) − r(xa_1..a_k, o) ).
Both g and h are easily learned during Q-learning, and we refer to the learned
estimates as ĝ and ĥ. These are used in the pruning rule, as follows:
Pruning rule: If ĝ(xa_1..a_k) + ĥ(xa_1..a_k) ≤ V̂(x), then memory states that descend
from xa_1..a_k do not need to be explored. A controller should immediately execute a
sensing action when it reaches one of these memory states.
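In code, the test reads roughly as below; ĝ, ĥ and V̂ are the learned estimates kept by the agent, the dictionary containers are assumptions of this sketch, and the direction of the inequality follows our reconstruction of the rule.

```python
def should_prune(g_hat, h_hat, v_hat, memory_state, root_state):
    """True if descendants of `memory_state` need not be explored: even with
    free perfect information, continuing cannot beat the root's current value."""
    return g_hat[memory_state] + h_hat[memory_state] <= v_hat[root_state]
```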
The intuition behind the pruning rule is that a branch of a tree of memory states
can be pruned after reaching a memory state for which the value of information
is greater than or equal to its cost. Because pruning is based on estimated values,
memory states that are pruned at one point during learning may later be explored as
learned estimates change. The net effect of pruning, however, is to focus exploration
on a subset of memory states, and as Q-Iearning converges, the subset of unpruned
memory states becomes stable. The following theorem is proved in an appendix.
Theorem: Q-learning converges to an optimal state-action value function with
probability one if, in addition to the conditions for convergence given by Watkins
and Dayan (1992), exploration is limited by the pruning rule.
This result is closely related to a similar result for solving this class of problems
using dynamic programming (Hansen, 1997), where it is shown that pruning can
assure convergence to an optimal policy even if no bound is placed on the length
of open-loop action sequences - under the assumption that it is optimal to sense
at finite intervals. This additional result can be extended to Q-Iearning as well,
although we do not present the extension in this paper. An artificial length bound
can be set as low or high as desired to ensure a finite set of memory states.
Figure 2: (a) Grid world with numbered states (b) Optimal policy
We use the notation 9 and h in our statement of the pruning rule to emphasize
its relationship to pruning in heuristic search. If we regard the root of a tree of
memory states as the start state and the memory state that corresponds to the
best open-loop action sequence as the goal state, then 9 can be regarded as the
cost-to-arrive function and the value of perfect information h can be regarded as an
upper bound on the cost-to-go function.
4 Example
We describe a simple example to illustrate the extent of pruning possible using this
rule. Imagine that a "robot" must find its way to a goal location in the upper
left-hand corner of the grid shown in Figure 2a. Each cell of the grid corresponds
to a state, with the states numbered for convenient reference. The robot has five
control actions; it can move north, east, south, or west, one cell at a time, or it
can stop. The problem ends when the robot stops. If it stops in the goal state it
receives a reward of 100, otherwise it receives no reward. The robot must execute
a sequence of actions to reach the goal state, but its move actions are stochastic. If
the robot attempts to move in a particular direction, it succeeds with probability
0.8. With probability 0.05 it moves in a direction 90 degrees off to one side of its
intended direction, with probability 0.05 it moves in a direction 90 degrees off to the
other side, and with probability 0.1 it does not move at all. If the robot's movement
would take it outside the grid, it remains in the same cell. Because its progress is
uncertain, the robot must interleave sensing and control actions to keep track of its
location. The reward for sensing is - 1 (i.e., a cost of 1) and for each move action it
is -4. To optimize expected total reward, the robot must find its way to the goal
while minimizing the combined cost of moving and sensing .
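The move dynamics just described can be written out as a small helper; the numbering scheme of the grid cells and the grid dimensions are our own assumptions (the paper only gives the figure), so they are passed in as parameters.

```python
def apply_move(state, d, n_cols, n_rows):
    """Deterministically apply move d (None = stay put), clipping at the walls.
    States are assumed numbered 1..n_cols*n_rows row by row, goal at state 1."""
    if d is None:
        return state
    moves = {'N': (0, -1), 'S': (0, 1), 'E': (1, 0), 'W': (-1, 0)}
    col, row = (state - 1) % n_cols, (state - 1) // n_cols
    dc, dr = moves[d]
    col = min(max(col + dc, 0), n_cols - 1)
    row = min(max(row + dr, 0), n_rows - 1)
    return row * n_cols + col + 1

def move_distribution(state, d, n_cols, n_rows):
    """p(y | state, d): intended direction with prob. 0.8, 90 degrees off to
    either side with prob. 0.05 each, and no movement with prob. 0.1."""
    left = {'N': 'W', 'W': 'S', 'S': 'E', 'E': 'N'}
    right = {v: k for k, v in left.items()}
    dist = {}
    for dd, p in [(d, 0.8), (left[d], 0.05), (right[d], 0.05), (None, 0.1)]:
        y = apply_move(state, dd, n_cols, n_rows)
        dist[y] = dist.get(y, 0.0) + p
    return dist
```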
Figure 2b shows the optimal open-loop sequence of actions for each observable state.
If the bound on the length of an open-loop sequence of control actions is five, the
number of possible memory states for this problem is over 64,000, a number that
grows explosively as the length bound is increased (to over 16 million when the
bound is nine) . Using the pruning rule, Q-Iearning must explore just less than 1000
memory states (and no deeper than nine levels in any tree) to converge to an optimal
policy, even when there is no bound on the interval between sensing actions.
5 Conclusion
We have described an extension of Q-Iearning for MDPs with sensing costs and
a rule for limiting exploration that makes it possible for Q-Iearning to converge
to an optimal policy despite exploring a fraction of possible memory states. As
already pointed out, the problem we have formalized is a partially observable MDP,
although one that is restricted by the assumption that sensing provides perfect
information . An interesting direction in which to pursue this work would be to
explore its relationship to work on RL for partially observable MDPs, which has so
far focused on the problem of sensor uncertainty and hidden state. Because some
of this work also makes use of tree representations of the state space and of learned
state-action values (e.g., McCallum, 1995), it may be that a similar pruning rule
can constrain exploration for such problems.
Acknowledgement s
Support for this work was provided in part by the National Science Foundation under grants ECS-9214866 and IRI-9409827 and in part by Rome Laboratory, USAF,
under grant F30602-95-1-0012 .
References
Barto, A.G.; Bradtke, S.J .; &. Singh, S.P. (1995) Learning to act using real-time
dynamic programming. Artificial Intelligence 72(1/2}:81-138.
Hansen, E.A. (1997) Markov decision processes with observation costs. University
of Massachusetts at Amherst, Computer Science Technical Report 97-01.
McCallum, R.A. (1995) Instance-based utile distinctions for reinforcement learning
with hidden state. In Proc. 12th Int. Machine Learning Conf. Morgan Kaufmann.
Monahan, G.E. (1982) A survey of partially observable Markov decision processes:
Theory, models, and algorithms. Management Science 28:1-16.
Tan, M. (1991) Cost-sensitive reinforcement learning for adaptive classification and
control. In Proc. 9th Nat. Conf. on Artificial Intelligence. AAAI Press/MIT Press.
Watkins, C.J.C.H. (1989) Learning from delayed rewards. Ph.D. Thesis, University
of Cambridge, England.
Watkins, C.J.C.H. & Dayan, P. (1992) Technical note: Q-learning. Machine Learning 8(3/4):279-292.
Whitehead, S.D. & Lin, L.-J. (1995) Reinforcement learning of non-Markov decision
processes. Artificial Intelligence 73:271-306.
Appendix
Proof of theorem: Consider an MDP with a state set that consists only of the
memory states that are not pruned. We call it a "pruned MDP" to distinguish
it from the original MDP for which the state set consists of all possible memory
states. Because the pruned MDP is a finite state and action MDP, Q-learning with
pruning converges with probability one. What we must show is that the state-action
values to which it converges include every state-action pair visited by an optimal
controller for the original MDP, and that for each of these state-action pairs the
learned state-action value is equal to the optimal state-action value for the original
MDP.
Let $\hat{Q}$ and $\hat{V}$ denote the values that are learned by Q-learning when its exploration is
limited by the pruning rule, and let $Q$ and $V$ denote value functions that are optimal
when the state set of the MDP includes all possible memory states. Because an MDP
has an optimal stationary policy and each control action causes a deterministic
transition to a subsequent memory state, there is an optimal path through each
tree of memory states. The learned value of the root state of each tree is optimal if
and only if the learned value of each memory state along this path is also optimal.
Therefore, to show that Q-learning with pruning converges to an optimal state-action value function, it is sufficient to show that $\hat{V}(x) = V(x)$ for every observable state
$x$. Our proof is by induction on the number of control actions that can be taken
between one sensing action and the next. We use the fact that if Q-learning has
converged, then $\hat{g}(xa_1 \ldots a_i) = g(xa_1 \ldots a_i)$ and $\hat{h}(xa_1 \ldots a_i) = \sum_y p(x, a_1 \ldots a_i, y)\,\hat{V}(y)$ for
every memory state $xa_1 \ldots a_i$.
First note that if $g(xa_1) + \gamma\, r(xa_1, o) + \hat{h}(xa_1) > \hat{V}(x)$, that is, if $\hat{V}$ for some
observable state $x$ can be improved by exploring a path of a single control action
followed by sensing, then it is contradictory to suppose Q-learning with pruning has
converged, because single-depth memory states in a tree are never pruned. Now,
make the inductive hypothesis that Q-learning with pruning has not converged if $\hat{V}$
can be improved for some observable state by exploring a path of less than $k$ control
actions before sensing. We show that it has not converged if $\hat{V}$ can be improved for
some observable state by exploring a path of $k$ control actions before sensing.
Suppose $\hat{V}$ for some observable state $x$ can be improved by exploring a path that
consists of taking the sequence of control actions $a_1 \ldots a_k$ before sensing, that is,
$$g(xa_1 \ldots a_k) + \gamma^{k}\, r(xa_1 \ldots a_k, o) + \hat{h}(xa_1 \ldots a_k) > \hat{V}(x).$$
Since only pruning can prevent improvement in this case, let $xa_1 \ldots a_i$ be the memory
state at which application of the pruning rule prevents $xa_1 \ldots a_k$ from being explored.
Because the tree has been pruned at this node, $\hat{V}(x) \ge g(xa_1 \ldots a_i) + \hat{h}(xa_1 \ldots a_i)$, and
so
$$g(xa_1 \ldots a_k) + \gamma^{k}\, r(xa_1 \ldots a_k, o) + \hat{h}(xa_1 \ldots a_k) > g(xa_1 \ldots a_i) + \hat{h}(xa_1 \ldots a_i).$$
We can expand this inequality as follows:
$$g(xa_1 \ldots a_i) + \gamma^{i} \sum_{y} p(x, a_1 \ldots a_i, y)\left[ g(ya_{i+1} \ldots a_k) + \gamma^{k-i}\, r(ya_{i+1} \ldots a_k, o) + \hat{h}(ya_{i+1} \ldots a_k) \right] > g(xa_1 \ldots a_i) + \hat{h}(xa_1 \ldots a_i).$$
Simplification and expansion of $\hat{h}$ yields
$$\sum_{y \in S} p(x, a_1 \ldots a_i, y)\left[ g(ya_{i+1} \ldots a_k) + \gamma^{k-i}\, r(ya_{i+1} \ldots a_k, o) + \gamma^{k-i} \sum_{z} p(y, a_{i+1} \ldots a_k, z)\,\hat{V}(z) \right] > \sum_{y} p(x, a_1 \ldots a_i, y)\,\hat{V}(y).$$
Therefore, there is some observable state $y$ such that
$$g(ya_{i+1} \ldots a_k) + \gamma^{k-i}\, r(ya_{i+1} \ldots a_k, o) + \gamma^{k-i} \sum_{z} p(y, a_{i+1} \ldots a_k, z)\,\hat{V}(z) > \hat{V}(y).$$
Because the value of observable state $y$ can be improved by taking less than $k$
control actions before sensing, by the inductive hypothesis Q-learning has not yet
converged. $\Box$
The proof provides insight into how pruning works. If a state-action pair along
some optimal path is temporarily pruned, it must be possible to improve the value
of some observable state by exploring a shorter path of memory states that has
not been pruned. The resulting improvement of the value function changes the
threshold for pruning and the state-action pair that was formerly pruned may no
longer be so, making further improvement of the learned value function possible.
305 | 1,279 | Adaptively Growing Hierarchical
Mixtures of Experts
Jiirgen Fritsch, Michael Finke, Alex Waibel
{fritsch+,finkem, waibel }@cs.cmu.edu
Interactive Systems Laboratories
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
We propose a novel approach to automatically growing and pruning
Hierarchical Mixtures of Experts. The constructive algorithm proposed here enables large hierarchies consisting of several hundred
experts to be trained effectively. We show that HME's trained by
our automatic growing procedure yield better generalization performance than traditional static and balanced hierarchies. Evaluation of the algorithm is performed (1) on vowel classification
and (2) within a hybrid version of the JANUS [9] speech recognition system using a subset of the Switchboard large-vocabulary
speaker-independent continuous speech recognition database.
INTRODUCTION
The Hierarchical Mixtures of Experts (HME) architecture [2,3,4] has proven useful for classification and regression tasks in small to medium sized applications
with convergence times several orders of magnitude lower than comparable neural networks such as the multi-layer perceptron. The HME is best understood as a
probabilistic decision tree, making use of soft splits of the input feature space at the
internal nodes, to divide a given task into smaller, overlapping tasks that are solved
by expert networks at the terminals of the tree. Training of the hierarchy is based
on a generative model using the Expectation Maximisation (EM) [1,3] algorithm as
a powerful and efficient tool for estimating the network parameters.
In [3], the architecture of the HME is considered pre-determined and remains fixed
during training. This requires choice of structural parameters such as tree depth
and branching factor in advance. As with other classification and regression techniques, it may be advantageous to have some sort of data-driven model-selection
mechanism to (1) overcome false initialisations (2) speed-up training time and (3)
adapt model size to task complexity for optimal generalization performance. In
[11], a constructive algorithm for the HME is presented and evaluated on two small
classification tasks: the two spirals and the 8-bit parity problems. However, this
algorithm requires the evaluation of the increase in the overall log-likelihood for all
potential splits (all terminal nodes) in an existing tree for each generation. This
method is computationally too expensive when applied to the large HME's necessary in tasks with several million training vectors, as in speech recognition, where
we can not afford to train all potential splits to eventually determine the single best
split and discard all others. We have developed an alternative approach to growing
HME trees which allows the fast training of even large HME's, when combined with
a path pruning technique. Our algorithm monitors the performance of the hierarchy in terms of scaled log-likelihoods, assigning penalties to the expert networks,
to determine the expert that performs worst in its local partition. This expert will
then be expanded into a new subtree consisting of a new gating network and several
new expert networks.
HIERARCHICAL MIXTURES OF EXPERTS
We restrict the presentation of the HME to the case of classification, although it was
originally introduced in the context of regression. The architecture is a tree with
gating networks at the non-terminal nodes and expert networks at the leaves. The
gating networks receive the input vectors and divide the input space into a nested
set of regions, that correspond to the leaves of the tree. The expert networks also
receive the input vectors and produce estimates of the a-posteriori class probabilities
which are then blended by the gating network outputs. All networks in the tree
are linear, with a softmax non-linearity as their activation function. Such networks
are known in statistics as multinomial logit models, a special case of Generalized
Linear Models (GLIM) [5] in which the probabilistic component is the multinomial
density. This allows for a probabilistic interpretation of the hierarchy in terms of
a generative likelihood-based model. For each input vector x, the outputs of the
gating networks are interpreted as the input-dependent multinomial probabilities
for the decisions about which child nodes are responsible for the generation of the
actual target vector y. After a sequence of these decisions, a particular expert
network is chosen as the current classifier and computes multinomial probabilities
for the output classes. The overall output of the hierarchy is
$$P(y \mid x, \Theta) = \sum_{i=1}^{N} g_i(x, v_i) \sum_{j=1}^{N} g_{j|i}(x, v_{ij})\, P(y \mid x, \theta_{ij}),$$
where the $g_i$ and $g_{j|i}$ are the outputs of the gating networks.
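To make the blending concrete, here is a small NumPy sketch of the forward pass for a two-level hierarchy with softmax gates and softmax expert outputs. It is an illustration only; the array shapes, variable names, and toy dimensions are our assumptions, not the authors' code.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def hme_forward(x, gate_top, gates_lower, experts):
    """Two-level HME forward pass (illustrative sketch).

    gate_top:     weight matrix whose softmax output gives g_i(x, v_i).
    gates_lower:  one weight matrix per branch, giving g_{j|i}(x, v_ij).
    experts:      experts[i][j] is a weight matrix whose softmax output is
                  P(y | x, theta_ij), the expert's class posterior.
    Returns the blended class posterior P(y | x).
    """
    g_top = softmax(gate_top @ x)
    p = 0.0
    for i, (gate_i, experts_i) in enumerate(zip(gates_lower, experts)):
        g_lower = softmax(gate_i @ x)
        for j, expert_ij in enumerate(experts_i):
            p = p + g_top[i] * g_lower[j] * softmax(expert_ij @ x)
    return p

# Toy usage: 2 branches x 2 experts, 4 input features, 3 classes.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
gate_top = rng.normal(size=(2, 4))
gates_lower = [rng.normal(size=(2, 4)) for _ in range(2)]
experts = [[rng.normal(size=(3, 4)) for _ in range(2)] for _ in range(2)]
print(hme_forward(x, gate_top, gates_lower, experts))  # sums to 1
```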
The HME is trained using the EM algorithm [1] (see [3] for the application of EM
to the HME architecture). The E-step requires the computation of posterior node
probabilities as expected values for the unknown decision indicators:
$$h_i = \frac{g_i \sum_j g_{j|i} P_{ij}(y)}{\sum_i g_i \sum_j g_{j|i} P_{ij}(y)}.$$
The M-step then leads to the following independent maximum-likelihood equations, for example for the lower-level gating parameters
$$v_{ij} = \arg\max_{v_{ij}} \sum_t \sum_k h_k^{(t)} \sum_l h_{l|k}^{(t)} \log g_{l|k}^{(t)},$$
with analogous equations for the top-level gating parameters and the expert parameters,
where the $\theta_{ij}$ are the parameters of the expert networks and the $v_i$ and $v_{ij}$ are
the parameters of the gating networks. In the case of a multinomial logit model,
$P_{ij}(y) = y_c$, where $y_c$ is the output of the node associated with the correct class. The
above maximum likelihood equations might be solved by gradient ascent, weighted
least squares or Newton methods. In our implementation, we use a variant of Jordan
& Jacobs' [3] least squares approach.
GROWING MIXTURES
In order to grow an HME, we have to define an evaluation criterion to score the
experts' performance on the training data. This in turn will allow us to select
and split the worst expert into a new subtree, providing additional parameters
which can help to overcome the errors made by this expert. Viewing the HME
as a probabilistic model of the observed data, we partition the input dependent
likelihood using expert selection probabilities provided by the gating networks
$$l(\Theta; X) = \sum_t \log P(y^{(t)} \mid x^{(t)}, \Theta) = \sum_t \sum_k g_k \log P(y^{(t)} \mid x^{(t)}, \Theta) = \sum_k \sum_t \log \left[ P(y^{(t)} \mid x^{(t)}, \Theta) \right]^{g_k} = \sum_k l_k(\Theta; X),$$
where the $g_k$ are the products of the gating probabilities along the path from the
root node to the $k$-th expert. $g_k$ is the probability that expert $k$ is responsible
for generating the observed data (note that the $g_k$ sum up to one). The expert-dependent scaled likelihoods $l_k(\Theta; X)$ can be used as a measure for the performance
of an expert within its region of responsibility. We use this measure as the basis of
our tree growing algorithm :
1. Initialize and train a simple HME consisting of only one gate and several experts.
2. Compute the expert-dependent scaled likelihoods $l_k(\Theta; X)$ for each expert in one
additional pass through the training data.
3. Find the expert $k$ with minimum $l_k$ and expand the tree, replacing the expert by
a new gate with random weights and new experts that copy the weights from the
old expert with additional small random perturbations.
4. Train the architecture to a local minimum of the classification error using a crossvalidation set.
5. Continue with step (2) until desired tree size is reached.
The number of tree growing phases may either be pre-determined, or based on
difference in the likelihoods before and after splitting a node. In contrast to the
growing algorithm in [11], our algorithm does not hypothesize all possible node
splits, but determines the expansion node(s) directly, which is much faster , especially when dealing with large hierarchies. Furthermore, we implemented a path
pruning technique similar to the one proposed in [11], which speeds up training
and testing times significantly. During the recursive depth-first traversal of the tree
(needed for forward evaluation , posterior probability computation and accumulation of node statistics) a path is pruned temporarily if the current node's probability
of activation falls below a certain threshold . Additionally, we also prune subtrees
permanently, if the sum of a node's activation probabilities over the whole training
set falls below a certain threshold . This technique is consistent with the growing
algorithm and also helps preventing instabilities and singularities in the parameter
updates, since nodes that accumulate too little training information will not be
considered for a parameter update because such nodes are automatically pruned by
the algorithm.
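Steps (2) and (3) of the growing procedure, together with the activation-threshold pruning just described, might be rendered as follows. This is an illustrative sketch; the threshold value and data structures are assumptions rather than the authors' implementation.

```python
import numpy as np

def expert_scores(gate_products, log_probs):
    """Expert-dependent scaled log-likelihoods l_k(Theta; X).

    gate_products: array (T, K); row t holds g_k, the product of gating
                   probabilities from the root to expert k for example t.
    log_probs:     array (T,); log P(y_t | x_t, Theta) for each example.
    """
    return (gate_products * log_probs[:, None]).sum(axis=0)   # shape (K,)

def worst_expert(gate_products, log_probs):
    """Expert to be split into a new gate plus perturbed copies of itself."""
    return int(np.argmin(expert_scores(gate_products, log_probs)))

# Path pruning during tree traversal: skip a subtree whenever its activation
# probability falls below a threshold (the value used here is an assumption).
PRUNE_THRESHOLD = 1e-4

def prune_path(activation_prob):
    return activation_prob < PRUNE_THRESHOLD
```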
Figure 1: Histogram trees for a standard and a grown HME
VOWEL CLASSIFICATION
In initial experiments, we investigated the usefulness of the proposed tree growing
algorithm on Peterson and Barney's [6] vowel classification data that uses formant
frequencies as features. We chose this data set since it is small, non-artificial and
low-dimensional, which allows for visualization and understanding of the way the
growing HME tree performs classification tasks.
[Scatter plot of the vowel data in the F1-F2 plane.]
The vowel data set contains 1520 samples consisting of the formants F0, F1, F2 and F3 and a class label, indicating one of 10 different vowels. Experiments were carried out on the 4-dimensional feature space; however, in this paper graphical representations are restricted to the F1-F2 plane. The figure shows the data set represented in this plane (the formant frequencies are normalized to the range [0,1]).
In the following experiments, we use binary branching HME's exclusively, but in
general the growing algorithm poses no restrictions on the tree branching factor.
We compare a standard, balanced HME of depth 3 with an HME that grows from
a two expert tree to a tree with the same number of experts (eight) as the standard
HME. The size of the standard HME was chosen based on a number of experiments
with different sized HME's to find an optimal one. Fig. 1 shows the topology
of the standard and the fully grown HME together with histograms of the gating
probability distributions at the internal nodes.
Fig. 2 shows results on 4-dimensional feature vectors in terms of correct classification rate and log-likelihood. The growing HME achieved a slightly better (1.6%
absolute) classification rate than the fixed HME. Note also, that the growing HME
outperforms the fixed HME even before it reaches its full size. The growing HME
was expanded every 4 iterations, which explains the bumpiness of the curves.
Fig. 3 shows the impact of path pruning during training on the final classification
rate of the grown HME's. The pruning factor ranges from no pruning to full pruning
(e.g. only the most likely path survives).
Figure 2: Classification rate and log-likelihood for standard and growing HME

Figure 3: Impact of path pruning during training of growing HME's

Fig. 4 shows how the gating networks partition the feature space. It contains plots
of the activation regions of all 8 experts of the standard HME in the 2-dimensional
range $[-0.1, 1.1]^2$. Activation probabilities (product of gating probabilities from
root to expert) are colored in shades of gray from black to white. Fig. 5 shows the
same kind of plot for all 8 experts of the grown HME. The plots in the upper right
corner illustrate the class boundaries obtained by each HME.
Figure 4: Expert activations for standard HME
Fig. 4 reveals a weakness of standard HME's: Gating networks at high levels in the
tree can pinch off whole branches, rendering all the experts in the subtree useless.
In our case, half of the experts of the standard HME do not contribute to the final
decision at all (black boxes). The growing HME's are able to overcome this effect .
All the experts of the grown HME (Fig. 5) have non-zero activation patterns and
the overlap between experts is much higher in the growing case, which indicates a
higher degree of cooperation among experts. This can also be seen in the histogram
trees in Fig. 3, where gating networks in lower levels of the grown tree tend to
average the experts' outputs.

Figure 5: Expert activations for grown HME

The splits formed by the gating networks also have
implications on the way class boundaries are formed by the HME. There are strong
dependencies visible between the class boundaries and some of the experts' activation
regions.
EXPERIMENTS ON SWITCHBOARD
We recently started experiments using standard and growing HME's as estimators of posterior phone probabilities in a hybrid version of the JANUS [9] speech
recognizer. Following the work in [12], we use different HME's for each state of
a phonetic HMM. The posteriors for 52 phonemes computed by the HME's are
converted into scaled likelihoods by dividing by prior probabilities to account for
the likelihood based training and decoding of HMM's. During training, targets for
the HME's are generated by forced-alignment using a baseline mixture of Gaussian
HMM system. We evaluate the system on the Switchboard spontaneous telephone
speech corpus. Our best current mixture of Gaussians based context-dependent
HMM system achieves a word accuracy of 61.4% on this task, which is among the
best current systems [7]. We started by using phonetic context-independent (CI)
HME's for 3-state HMM's. We restricted the training set to all dialogues involving speakers from one dialect region (New York City), since the whole training set
contains over 140 hours of speech. Our aim here was, to reduce training time (the
subset contains only about 5% of the data) to be able to compare different HME
architectures.
Context    # experts    Word Acc.
CI         64           33.8%
CI         64           35.1%
Figure 6: Preliminary results on Switchboard telephone data
To improve performance, we then build context-dependent (CD) models consisting
of a separate HME for each biphone context and state. The CD HME's output is
smoothed with the CI models based on prior context probabilities. Current work
focuses on improving context modeling (e.g. larger contexts and decision tree based
clustering).
Fig. 6 summarizes the results so far, showing consistently that growing HME's
outperform equally sized standard HME's. The results are not directly comparable
with our best Gaussian mixture system, since we restricted context modeling to
biphones and used only a small subset of the Switchboard database for training.
CONCLUSIONS
In this paper, we presented a method for adaptively growing Hierarchical Mixtures
of Experts. We showed, that the algorithm allows the HME to use the resources
(experts) more efficiently than a standard pre-determined HME architecture. The
tree growing algorithm leads to better classification performance compared to standard HME's with equal numbers of parameters. Using growing instead of fixed
HME's as continuous density estimators in a hybrid speech recognition system also
improves performance.
References
[1] Dempster, A.P., Laird, N.M. & Rubin, D.B. (1977) Maximum likelihood from incomplete data via the EM algorithm. J.R. Statist. Soc. B 39, 1-38.
[2] Jacobs, R. A., Jordan, M. 1., Nowlan, S. J., & Hinton, G. E. (1991) Adaptive mixtures
of local experts. In Neural Computation 3, pp. 79-87, MIT press.
[3] Jordan, M.I. & Jacobs, R.A. (1994) Hierarchical Mixtures of Experts and the EM
Algorithm. In Neural Computation 6, pp. 181-214. MIT Press.
[4] Jordan, M.I. & Jacobs, R.A. (1992) Hierarchies of adaptive experts. In Advances in
Neural Information Processing Systems 4, J. Moody, S. Hanson, and R. Lippmann, eds.,
pp. 985-993. Morgan Kaufmann, San Mateo, CA.
[5] McCullagh, P. & Nelder, J.A. (1983) Generalized Linear Models. Chapman and Hall,
London.
[6] Peterson, G. E. & Barney, H. L. (1952) Control measurements used in a study of the
vowels. Journal of the Acoustical Society of America 24, 175-184.
[7] Proceedings of LVCSR Hub 5 workshop, Apr. 29 - May 1 (1996) MITAGS, Linthicum
Heights, Maryland.
[8] Syrdal, A. K. & Gopal, H. S. (1986) A perceptual model of vowel recognition based on
the auditory representation of American English vowels. Journal of the Acoustical Society
of America, 79 (4):1086-1100.
[9] Zeppenfeld T., Finke M., Ries K., Westphal M. & Waibel A. (1997) Recognition of
Conversational Telephone Speech using the Janus Speech Engine. Proceedings of ICASSP
97, Muenchen, Germany
[10] Waterhouse, S.R., Robinson, A.J. (1994) Classification using Hierarchical Mixtures of
Experts. In Proc. 1994 IEEE Workshop on Neural Networks for Signal Processing IV, pp.
177-186.
[11] Waterhouse, S.R., Robinson, A.J. (1995) Constructive Algorithms for Hierarchical
Mixtures of Experts. In Advances in Neural Information Processing Systems 8.
[12] Zhao, Y., Schwartz, R., Sroka, J. & Makhoul, J. (1995) Hierarchical Mixtures of Experts Methodology Applied to Continuous Speech Recognition. In ICASSP 1995, volume
5, pp. 3443-6, May 1995.
306 | 128 | 785
ELECTRONIC RECEPTORS FOR TACTILE/HAPTIC* SENSING
Andreas G. Andreou
Electrical and Computer Engineering
The Johns Hopkins University
Baltimore, MD 21218
ABSTRACT
We discuss synthetic receptors for haptic sensing. These are based on
magnetic field sensors (Hall effect structures) fabricated using standard
CMOS technologies. These receptors, biased with a small permanent
magnet can detect the presence of ferro or ferri-magnetic objects in the
vicinity of the sensor. They can also detect the magnitude and direction
of the magnetic field.
INTRODUCTION
The organizational structure and functioning of the sensory periphery in living beings has
always been the subject of extensive research. Studies of the retina and the cochlea have
revealed a great deal of information as to the ways information is acquired and
preprocessed; see for example review chapters in [Barlow and Mollon, 1982].
Understanding of the principles underlying the operation of sensory channels can be
utilized to develop machines that can sense their environment and function in it, much
like living beings. Although vision is the principal sensory channel to the outside world,
the "skin senses" can in some cases provide information that is not available through
vision. It is interesting to note, that the performance in identifying objects through the
haptic senses can be comparable to vision [Klatzky et al, 1985]; longer learning periods
may be necessary though. Tactually guided exploration and shape perception for robotic
applications has been extensively investigated by [Hemami et al, 1988].
A number of synthetic sensory systems for vision and audition based on physiological
models for the retina and the cochlea have been prototyped by Mead and his coworkers in
VLSI [Mead, 1989]. The key to success in such endeavors is the ability to integrate
transducers (such as light sensitive devices) and local processing electronics on the same
chip. A technology that offers that possibility is silicon CMOS; furthermore, it is
readily available to engineers and scientists through the MOSIS fabrication
services [Cohen and Lewicki, 1981].
Receptor cells, are structures in the sensory pathways whose purpose is to convert
environmental signals into electrical activity (strictly speaking this is true for
* Haptic refers to the perception of vibration, skeletal conformation or position and
skin deformation. Tactile refers to the perceptual system that includes only the
cutaneous senses of vibration and deformation.
exteroceptors). The retina rods and cones are examples of receptors for light stimuli and
the Pacinian corpuscles are mechanoreceptors that are sensitive to indentation or pressure
on the skin. A synthetic receptor is thus the first and necessary functional element in any
synthetic sensory system. For the development of vision systems parasitic bipolar devices
can be used [Mead, 1985] to perform the necessary light to electrical signal transduction
as well as low level signal amplification. On the other hand, implementation of synthetic
receptors for tactile perception is still problematic [Barth et. al., 1986]. Truly tactile
transducers (devices sensitive to pressure stimuli) are not available in standard CMOS
processes and are only found in specialized fabrication lines. However, devices that are
sensitive to magnetic fields can be used to some extend as a substitute.
In this paper, we discuss the development of electronic receptor elements that can be used
in synthetic haptic/tactile sensing systems. Our receptors are devices which are sensitive
to steady state or varying magnetic fields and give electrical signals proportional to the
magnetic induction. They can all be fabricated using standard silicon processes such as
those offered by MOSIS. We show how our elements can be used for tactile and haptic
sensing and compare its characteristics with the features of biological receptors. The
spatial resolution of the devices, its frequency response and dynamic range are more than
adequate. One of our devices has nano-watt power diSSipation and thus can be used in large
arrays for high resolution sensing.
THE MAGNETIC-FIELD SENSORY PARADIGM
In this section we show qualitatively how to implement synthetic sensory functions of
the haptic and tactile senses by using magnetic fields and their interaction with ferri or
ferro-magnetic objects. This will motivate the more detailed discussion that follows on
the transducer devices.
DIRECT SENSING:
In this mode of operation the transducer will detect the magnitude and direction of the
magnetic induction and convert it into an electrical signal. If the magnetic field is
provided by the fringing fields of a small permanent magnet, the strength of the signal
will fall off with the distance of the sensor from the magnet. Such an arrangement for one
dimensional sensing is shown in Figure 1. The experimental data are from the MOS
Hall-voltage generator that is described in the next section. The magnetic field was
provided by a cylindrical, rare-earth, permanent magnet with magnetic induction
B0 = 250 mT, measured on the end surfaces. The vertical axis shows the signal from the
transducer (Hall-voltage) and the horizontal axis represents the distance d of the sensor
from the surface of the magnet.
The above scheme can be used to sense the angular displacement between two fingers at a
joint (inset b). By placing a small magnet on one side of the joint and the receptor on the
other, the signal from the receptor can be conditioned and converted into a measure of the
angle θ between the two fingers. The output of our receptor would thus correspond to the
output from the Joint Fibers that originate in the Joint Capsule [Johnson, 1981]. Joint
angle perception and manual stereognosis is mediated in part by these fibers. The above is
just one example of how to use our integrated electronic receptor element for sensing
skeletal conformation and position. Since there are no moving parts other than the joint
itself, this is a reliable scheme.
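In software, direct sensing of this kind reduces to inverting a measured calibration curve. The sketch below is our own illustration; the calibration values are invented placeholders, not data from Figure 1, and the mapping from distance to joint angle depends on the particular joint geometry.

```python
import numpy as np

# Invented calibration table: Hall voltage (arbitrary units) measured at known
# magnet-sensor distances (mm). A real table would come from a curve such as
# the one shown in Figure 1.
CAL_DISTANCE_MM = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0])
CAL_VOLTAGE = np.array([2.5, 1.6, 1.0, 0.65, 0.30, 0.15, 0.08])

def distance_from_voltage(v_hall):
    """Invert the monotonically decreasing calibration curve by interpolation."""
    # np.interp needs increasing x, so both arrays are flipped.
    return float(np.interp(v_hall, CAL_VOLTAGE[::-1], CAL_DISTANCE_MM[::-1]))

def joint_angle_from_voltage(v_hall, distance_to_angle):
    """Map the sensed distance to a joint angle via a geometry-specific function."""
    return distance_to_angle(distance_from_voltage(v_hall))
```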
[Plot: Hall-voltage signal versus distance of the magnet from the sensor (mm); insets (a) and (b) show the one-dimensional sensing arrangement and the joint-angle application.]
Figure 1. Direct Sensing Using an Integrated Hall-Voltage Transducer
PASSIVE SENSING
In this mode of operation, the device or an array of devices are permanently biased with a
uniform magnetic field whose uniformity can be disturbed in the presence of a ferro or
ferri-magnetic object in the vicinity. Signals from an array of such elements would
provide information on the shape of the object that causes the disturbance of the magnetic
field. Non-magnetic objects can be sensed if the surface of the chip is covered with a
compliant membrane that has some magnetic properties. Note that our receptors can
detect the presence of an object without having a direct contact with the object itself.
This may in some cases be advantageous.
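For passive sensing, the quantity of interest is the deviation of each array element from its baseline reading under the uniform bias field. A minimal sketch (our own; the threshold is an assumed value) is:

```python
import numpy as np

def disturbance_map(readings, baseline, threshold=0.05):
    """Binary map of elements whose field deviates from the uniform bias field.

    readings, baseline: 2-D arrays of Hall-sensor outputs (same shape).
    threshold:          minimum deviation treated as the presence of a ferro-
                        or ferri-magnetic object near that element (assumed).
    """
    deviation = np.abs(readings - baseline)
    return deviation > threshold     # object "silhouette" over the array
```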
In this application, the magnetic field sensor would act more like the Ruffini organs which exist in the deeper tissue and are primarily sensitive to static skin stretch. The above scheme could also be used for sensing dynamic stimuli, and there is a variety of receptor cells, such as the Pacinian and Meissner's corpuscles, that perform that function in biological tactile senses [Johnson, 1981].
SILICON TRANSDUCERS
Magnetic field sensors can be integrated on silicon in a variety of forms. The transduction
mechanism is due to some galvanomagnetic effect; the Hall effect or some related
phenomenon [Andreou, 1986]. For an up-to-date review of integrated magnetic field
sensors, as well as for the fine points of the discussion that follows in the next two
sections, please refer to [Baltes and Popovic, 1986]. The simplest Hall device is the Hall-voltage sensor. This is a four-terminal device with two current terminals and two voltage
terminals to measure the Hall-voltage (Figure 2). A magnetic field B in the direction
perpendicular to the current flow sets up the Hall-voltage in the direction indicated in the
figure. The Hall-voltage is such that it compensates for the Lorentz force on the charge
carriers. In the experiment below, we have used a MOS Hall generator instead of a bulk
device. The two current contacts are the source and drain of the device and the voltage
contacts are formed by small diffusion areas in the channel. The Hall-voltage is linearly
related to the magnetic induction B and the total current in the sample I between the
drain and the source which is controlled by the gate voltage Vgs.
[Plot: Hall voltage versus gate voltage Vgs for B = 250 mT and B = -250 mT, measured at Vds = 5 V (MOSIS run M89P).]
Figure 2. An Integrated MOS Hall-Voltage Sensor and Measured Characteristics
The constant of proportionality K is related to the dimensions of the device and to silicon electronic transport parameters. The device dimensions and biasing conditions are shown in the figure above. Note that the Hall voltage reverses when the direction of the magnetic field is reversed. The above device was fabricated in a standard 2-micron n-well CMOS process through MOSIS (production run M89P). The signal output of this sensor is a voltage and is relatively small. For the same biasing conditions, the signals can be increased only if the channel is made shorter (increasing the transconductance of the device). On the other hand, when the length of the device approaches its width, the Hall-voltage is shorted out by the heavily doped source and drain regions and the signal degrades again. Some of the problems with the Hall-voltage sensor can be avoided if we use a device that gives a current as its output signal; this is discussed in the next section.
THE HALL-CURRENT* SENSOR

[Plot: Hall current versus gate voltage Vgs (0.5 V to 1.0 V) at B = 250 mT.]
Figure 3. The Hall-Current Sensor
* Hall current is a misnomer (used also by this author) that exists in the literature to
characterize the current flow in the sample when the Hall field is shorted out.
This current is a direct consequence of the Lorentz force and is perpendicular to the
direction of current flow without a magnetic field. The Lorentz force,
$$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B},$$
depends on the velocity of the carriers in the sample and on the magnetic induction. Since
this force is responsible for the transverse current flow, a more appropriate name for this
sensor is a Lorentz-current sensor. Obviously, given a magnetic field strength, if we
want a maximum signal from our device we want the carriers to have the maximum
velocity in the channel. We achieve that by operating the devices in the saturation region
where the carriers traverse the channel at what is called the "saturation velocity". In this
configuration we can also use short channel devices (and consequently smaller devices) so
that the high fields in the channel can be set with lower voltages. The geometry for such
sensor as well as the biasing circuit is shown in the insets of Figure 3. As with the
previous sensor. the magnetic field is applied in the direction perpendicular to the channel
(also perpendicular to the plane of the gate electrode).
This sensor is a MOS transistor with three terminals and the gate. It has a single source but a split drain. The polysilicon gate is extended in the region between the two drains so that they are electrically independent of each other. The device is biased with a constant drain-source voltage (with the two drains at the same potential) and the currents from the two sources are monitored. The current-mode circuitry for the synthetic neurons described by Kwabena [Boahen et al., 1989] can be employed for this function. We operate the device in the subthreshold region (denoted by the gate-source voltages between 0.5 and 0.9 volts). On the application of a transverse magnetic field we observe the imbalance between the two drain currents. The Hall-current, plotted in Figure 3, is twice that current.
Note that we can operate the device and observe an effect due to the magnetic field at very low currents in the nano-amp range. The graph in Figure 3 also shows the dependence of the Hall-current on the gate voltage. This is a logarithmic relationship because the Hall-current is directly related to the total current in the sample, I_ds, through a linear relationship; it is also linearly related to the magnetic field with a proportionality constant K_h. The derivation of a formula for the Hall-current can be found in [Andreou, 1986].
DISCUSSION
The frequency response of the sensors described above is more than adequate for our
applications. The frequency response of the Hall effect mechanism is many orders of
magnitude higher than the requirements of our circuits, which have to work only in the Hz and
kHz range. Another important criterion for the receptors is their spatial resolution. We
have fabricated and tested devices with areas as small as 144 square microns. These are
comparable to or even better than what is found in biological systems (10 to 150 receptors
per square cm). On the other hand, it is more likely that our final "receptor" elements will
be larger, partly because of the additional electronics. The experimental data shown
above are only for stimuli that are static and are simply the raw output from our
transducer itself. Clearly such output signals will not be of much use without some local
processing. For example, adaptation mechanisms have to be included in our synthetic
receptors for cutaneous sensing. The sensitivity of our transducer elements may be a
problem. In that case more sophisticated structures such as parasitic bipolar
magnetotransistors or combination of MOS Hall and bipolar devices can be employed for
low level signal amplification [Andreou, 1986]. Voltage offsets in the Hall-voltage sensor
would also present some problem; the same is true for current imbalances due to
fabrication imperfections in the Hall current sensor. One of the most attractive properties
of the Hall-current type sensor described in this paper is its ability to work with very
low voltages and very low currents; one of our devices can operate with a bias voltage as
low as 350 mV and a total current of 1 nA without compromising its sensitivity. Power
dissipation may be a problem when large arrays of these devices are considered.
Devices for sensing temperature can also be implemented on a standard silicon CMOS
process. Thus a multisensor chip, could be designed that would respond to more than one
of the somatic senses.
CONCLUSIONS
We have demonstrated how to use the magnetic field as a paradigm for haptic sensing.
We have also reported on a silicon magnetic field sensor that operates with power
dissipation as low as 350 pW without any compromise in its performance. This is a dual-drain MOS Hall-current device operating in the subthreshold region. Our elements are
only very simple "receptors" without any adaptation mechanisms or local processing; this
will be the next step in our work.
Acknowledgments
This work was funded by the Independent Research and Development program of the
Johns Hopkins University, Applied Physics Laboratory. The support and personal interest
of Robert Jenkins is gratefully acknowledged. The author has benefited by the occasional
contact with Ken Johnson of the Biomedical Engineering Department.
References
H.B. Barlow and J.D. Mollon eds.,The senses, Cambridge University Press, Oxford,
1982.
R.L. Klatzky, S.J. Lederman and V.A. Metzger, "Identifying objects by touch: An 'expert
system'," Percept. Psychophys. vol. 37. 1985.
H. Hemami, I.S. Bay and R.E. Goddard, "A Conceptual Framework for Tactually Guided
Exploration and Shape Perception," IEEE Trans. Biomedical Engineering, vol. 35, No.
2, Feb. 1988.
C.A. Mead, Analog VLSI and Neural Systems, Addison-Wesley, (in press).
D. Cohen and G. Lewicki, "MOSIS-The ARPA silicon broker," Proc. of the Second
Caltech Conference on VLSI, Pasadena, California, 1981.
C.A. Mead, A sensitive electronic photoreceptor, 1985 Chapel Hill Conference on VLSI,
Chapel Hill, 1985.
P.W. Barth, M.J. Zdeblick, Z. Kuc and P.A. Beck, "Flexible tactile arrays for robotics:
architectural robustness and yield considerations," Tech. Digest, IEEE Solid State
Sensors Workshop, Hilton Head Island, 1986.
A.G. Andreou, "The Hall effect and related phenomena in microelectronic devices," Ph.D.
Dissertation, The Johns Hopkins University, Baltimore, MD, 1986.
H.P. Baltes and R.S. Popovic, "Integrated Semiconductor Magnetic Field Sensors,"
Proceedings of the IEEE, vol. 74, No. 8, Aug. 1986.
K.O. Johnson and G.D. Lamb, "Neural mechanisms of spatial discrimination: Neural
patterns evoked by Braille-like dot patterns in the monkey," J. of Physiol., vol. 310, pp.
117-144, 1981.
K.A. Boahen, A.G. Andreou, P.O. Pouliquen and A. Pavasovic, "Architectures for
Associative Memories Using Current-Mode Analog MOS Circuits," Proceedings of the
Decennial Caltech Conference on VLSI, C. Seitz ed., MIT Press, 1989.
307 | 1,280 | Ensemble Methods for Phoneme
Classification
Steve Waterhouse
Gary Cook
Cambridge University Engineering Department
Cambridge CB2 1PZ, England, Tel: [+44] 1223 332754
Email: srwl00l@eng.cam.ac.uk, gdc@eng.cam.ac.uk
Abstract
This paper investigates a number of ensemble methods for improving the performance of phoneme classification for use in a speech
recognition system. Two ensemble methods are described ; boosting
and mixtures of experts, both in isolation and in combination. Results are presented on two speech recognition databases: an isolated
word database and a large vocabulary continuous speech database.
These results show that principled ensemble methods such as boosting and mixtures provide superior performance to more naive ensemble methods such as averaging .
INTRODUCTION
There is now considerable interest in using ensembles or committees of learning
machines to improve the performance of the system over that of a single learning
machine. In most neural network ensembles, the ensemble members are trained on
either the same data (Hansen & Salamon 1990) or different subsets of the data (Perrone & Cooper 1993) . The ensemble members typically have different initial conditions and/or different architectures. The subsets of the data may be chosen at
random , with prior knowledge or by some principled approach e.g. clustering. Additionally, the outputs of the networks may be combined by any function which results
in an output that is consistent with the form of the problem. The expectation of
ensemble methods is that the member networks pick out different properties present
in the data, thus improving the performance when their outputs are combined .
The two techniques described here, boosting (Drucker, Schapire & Simard 1993)
and mixtures of experts (Jacobs, Jordan, Nowlan & Hinton 1991), differ from simple
ensemble methods.
In boosting, each member of the ensemble is trained on patterns that have been
filtered by previously trained members of the ensemble. In mixtures, the members
of the ensemble, or "experts", are trained on data that is stochastically selected by
a gate which additionally learns how to best combine the outputs of the experts.
The aim of the work presented here is twofold and inspired from two differing but
complimentary directions. Firstly, how does one select which data to train the ensemble members on and secondly, given these members how does one combine them
to achieve the optimal result? The rest of the paper describes how a combination
of boosting and mixtures may be used to improve phoneme error rates .
PHONEME CLASSIFICATION
Speech
Figure 1: The ABBOT hybrid connectionist-HMM speech recognition system with
an MLP ensemble acoustic model
The Cambridge University Engineering Department connectionist speech recognition system (ABBOT) uses a hybrid connectionist - hidden Markov model (HMM)
approach. This is shown in figure 1. A connectionist acoustic model is used to map
each frame of acoustic data to posterior phone probabilities. These estimated phone
probabilities are then used as estimates of the observation probabilities in an HMM
framework. Given new acoustic data and the connectionist-HMM framework , the
maximum a posteriori word sequence is then extracted using a single pass, start
synchronous decoder. A more complete description of the system can be found
in (Hochberg, Renals & Robinson 1994).
Previous work has shown how a novel boosting procedure based on utterance selection can be used to increase the performance of the recurrent network acoustic
model (Cook & Robinson 1996). In this work a combined boosting and mixturesof-experts approach is used to improve the performance of MLP acoustic models.
Results are presented for two speech recognition tasks. The first is phonetic classification on a small isolated digit database . The second is a large vocabulary
continuous speech recognition task from the Wall Street Journal corpus.
ENSEMBLE METHODS
Most ensemble methods can be divided into two separate methods; network selection and network combination. Network selection addresses the question of how to
choose the data each network is trained on. Network combination addresses the
question of what is the best way to combine the outputs of these trained networks.
The simplest method for network selection is to train separate networks on separate
regions of the data, chosen either randomly, with prior knowledge or according to
some other criteria, e.g. clustering.
The simplest method of combining the outputs of several networks is to form an
average, or simple additive merge:

    y(t) = (1/K) Σ_{k=1}^{K} y_k(t),

where y_k(t) is the output of the kth network at time t and K is the number of networks.
Boosting
Boosting is a procedure which results in an ensemble of networks. The networks
in a boosting ensemble are trained sequentially on data that has been filtered by
the previously trained networks in the ensemble. This has the advantage that
only data that is likely to result in improved generalization performance is used
for training. The first practical application of a boosting procedure was for the
optical character recognition task (Drucker et al. 1993). An ensemble of feedforward
neural networks was trained using supervised learning. Using boosting the authors
reported a reduction in error rate on ZIP codes from the United States Postal Service
of 28% compared to a single network. The boosting procedure is as follows: train a
network on a randomly chosen subset of the available training data. This network
is then used to filter the remaining training data to produce a training set for a
second network with an even distribution of cases which the first network classifies
correctly and incorrectly. After training the second network the first and second
networks are used to produce a training set for a third network . This training set
is produced from cases in the remaining training data that the first two networks
disagree on.
The boosted networks are combined using either a voting scheme or a simple add as
described in the previous section. The voting scheme works as follows: if the first
two networks agree, take their answer as the output, if they disagree, use the third
network's answer as the output.
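The filtering and voting procedure just described can be summarized in a short sketch. This is an illustrative reconstruction rather than the authors' code: the helper names (train_fn, classify_fn), the one-third initial split, and the bookkeeping are assumptions made for the example.

```python
import random

def boost_three_networks(train_fn, classify_fn, data):
    """Illustrative three-network boosting filter (Drucker et al. 1993 style).

    train_fn(patterns) -> network, classify_fn(net, features) -> label.
    Each pattern is a (features, label) pair.
    """
    random.shuffle(data)
    third = len(data) // 3
    net1 = train_fn(data[:third])          # first net: randomly chosen subset
    rest = data[third:]

    # Second training set: even mix of cases net1 classifies correctly and incorrectly.
    right = [p for p in rest if classify_fn(net1, p[0]) == p[1]]
    wrong = [p for p in rest if classify_fn(net1, p[0]) != p[1]]
    n = min(len(right), len(wrong))
    net2 = train_fn(right[:n] + wrong[:n])

    # Third training set: cases on which the first two networks disagree.
    disagree = [p for p in rest
                if classify_fn(net1, p[0]) != classify_fn(net2, p[0])]
    net3 = train_fn(disagree)
    return net1, net2, net3

def vote(nets, classify_fn, x):
    """Voting combination: use the third network only when the first two disagree."""
    a, b = classify_fn(nets[0], x), classify_fn(nets[1], x)
    return a if a == b else classify_fn(nets[2], x)
```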
Mixtures of Experts
The mixture of experts (Jacobs et al. 1991) is a different type of ensemble to the
two considered so far. The ensemble members or experts are trained with data
which is stochastically selected by a gate. The gate in turn learns how to best
combine the experts given the data. The training of the experts, which are typically
single or multi-layer networks, proceeds as for standard networks, with an additional
weighting of the output error terms by the posterior probability h_i(t) of selecting
expert i given the current data point at time t:

    h_i(t) = g_i(t) P_i(t) / Σ_j g_j(t) P_j(t),

where g_i(t) is the output of the gate for expert i, and P_i(t) is the probability of
obtaining the correct output given expert i. In the case of classification, considered
here, the experts use softmax output units. The gate, which is typically a single
or multi-layered network with softmax output units is trained using the posterior
probabilities as targets. The overall output y(t) of the mixture of experts is given by
the weighted combination of the gate and expert outputs:

    y(t) = Σ_{k=1}^{K} g_k(t) y_k(t),

where y_k(t) is the output of the kth expert, and g_k(t) is the output of the gate for
expert k at time t.
The mixture of experts is based on the principle of divide and conquer, in which a
relatively hard problem is broken up into a series of smaller easier to solve problems.
By using the posterior probabilities to weight the experts and provide targets for
the gate, the effective data sets used to train each expert may overlap.
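A minimal sketch of the gating computations above is given below, assuming one-hot targets and experts with softmax (class-posterior) outputs; the function and argument names are illustrative and not part of the ABBOT system.

```python
import numpy as np

def moe_combine(gate_probs, expert_outputs, targets=None):
    """Mixture-of-experts combination for one frame.

    gate_probs     : (K,)   gate outputs g_k(t), summing to 1.
    expert_outputs : (K, C) class posteriors y_k(t) from each expert.
    targets        : (C,)   optional 1-of-C target used to form h_k(t).
    Returns the combined output y(t) and, if targets are given, the
    posteriors h_k(t) used to weight each expert's error terms.
    """
    combined = gate_probs @ expert_outputs       # y(t) = sum_k g_k(t) y_k(t)
    posteriors = None
    if targets is not None:
        # P_k(t): probability each expert assigns to the correct class.
        p_correct = expert_outputs @ targets
        joint = gate_probs * p_correct
        posteriors = joint / joint.sum()         # h_k(t)
    return combined, posteriors
```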
SPEECH RECOGNITION RESULTS
This section describes the results of experiments on two speech databases: the
Bellcore isolated digits database and the Wall Street Journal Corpus (Paul & Baker
1992). The inputs to the networks consist of 9 frames of acoustic feature vectors;
the frame on which the network is currently performing classification, plus 4 frames
of left context and 4 frames of right context. The context frames allow the network
to take account of the dynamical nature of speech. Each acoustic feature vector
consists of 8th order PLP plus log energy coefficients along with the dynamic delta
coeficients of these coefficients, computed with an analysis window of 25ms, every
12.5 ms at a sampling rate of 8kHz. The speech is labelled with 54 phonemes
according to the standard ABBOT phone set.
Bellcore Digits
The Bellcore digits database consists of 150 speakers saying the words "zero"
through "nine", "oh", "no" and "yes". The database was divided into a training set of 122 speakers, a cross validation set of 13 speakers and a test set of 15
speakers. Each method was evaluated over 10 partitions of the data into different
training, cross validation and test sets. In all the experiments on the Bellcore digits
multi-layer perceptrons with 200 hidden units were used as the basic network members in the ensembles. The gates in the mixtures were also multi-layer perceptrons
with 20 hidden units.
Ensemble            Combination Method    Phone Error Rate
                                          Average    σ
Simple ensemble     cheat                 14.7 %     0.9
Simple ensemble     vote                  20.3 %     1.2
Simple ensemble     average               19.3 %     1.2
Simple ensemble     soft gated            20.9 %     1.2
Simple ensemble     hard gated            19.3 %     1.0
Simple ensemble     mixed                 17.1 %     1.3
Boosted ensemble    cheat                 11.9 %     1.0
Boosted ensemble    vote                  17.8 %     1.1
Boosted ensemble    average               17.4 %     1.1
Boosted ensemble    soft gated            17.8 %     1.0
Boosted ensemble    hard gated            17.4 %     1.2
Boosted ensemble    mixed                 16.4 %     1.0
Table 1: Comparison of phone error rates using different ensemble methods on the
Bellcore isolated digits task.
Table 1 summarises the results obtained on the Bellcore digits database. The meanings of the entries are as follows. Two types of ensemble were trained:
Simple Ensemble: consisting of 3 networks each trained on 1/3 of the training
data each (corresponding to 40 speakers used for training and 5 for cross
validation for each network),
Boosted Ensemble: consisting of 3 networks trained according to the boosting
algorithm of the previous section. Due to the relatively small size of the
data set, it was necessary to ensure that the distributions of the randomly
chosen data were consistent with the overall training data distribution.
Given each set of ensemble networks, 6 combination methods were evaluated:
cheat: The cheat scheme uses the best ensemble member for each example in the
data set. The best ensemble member is determined by looking at the correct
label in the labelled test set (hence cheating). This method is included as
a lower bound on the error. Since the tests are performed on unseen data,
this bound can only be approached by learning an appropriate combination
function of the ensemble member outputs.
average: The ensemble members' outputs are combined using a simple average.
vote: The voting scheme outlined in the previous section.
gated: In the gated combination method, the ensemble networks were kept fixed
whilst the gate was trained. Two types of gating were evaluated, standard
or soft gating, and hard or winner take all (WTA) training. In WTA training
the targets for the gate are binary, with a target of 1.0 for the output
corresponding to the expert whose probability of generating the current
data point correctly is greatest, and 0.0 for the other outputs.
mixed: In contrast to the gated method, the mixed combination method both trains
a gate and retrains the ensemble members using the mixture of experts
framework.
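The simpler of these combination rules (average, vote, and the cheat lower bound) can be sketched as follows; this is an illustration of the definitions above, not the evaluation code used for Table 1.

```python
import numpy as np

def combine_average(member_probs):
    """Average combination: mean of the members' class posteriors (K x C array)."""
    return np.mean(member_probs, axis=0)

def combine_vote(member_labels):
    """Voting rule for three boosted networks (see the boosting section)."""
    a, b, c = member_labels
    return a if a == b else c

def cheat_error(member_labels, true_label):
    """Oracle 'cheat' bound: counts an error only if every member is wrong."""
    return int(all(lbl != true_label for lbl in member_labels))
```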
From these results it can be concluded that boosting provides a significant improvement in performance over a simple ensemble. In addition, by training a gate
to combine the boosted networks, performance can be further enhanced. As might be
expected, re-training both the boosted networks and the gate provides the biggest
improvement, as shown by the result for the mixed boosted networks.
Wall Street Journal Corpus
The training data used in these experiments is the short term speakers from the
Wall Street Journal corpus. This consists of approximately 36,400 sentences from
284 different speakers (SI284). The first network is trained on 1.5 million frames
randomly selected from the available training data (15 million frames). This is
then used to filter the unseen training data to select frames for training the second
network. The first and second networks are then used to select data for the third
network as described previously. The performance of the boosted MLP ensemble
                                                 Word Error Rate
Test Set       Language Model   Lexicon   Single MLP   Boosted   Gated   Mixed
eLh2_93        trigram          20k       16.0%        12.9%     12.9%   11.2%
dt....s5_93    bigram           5k        20.4%        16.5%     16.5%   15.1%
Table 2: Evaluation of the performance of boosting MLP acoustic models
was evaluated on a number of ARPA benchmark tests. The results are summarised
in Table 2.
Initial experiments use the November 1993 Hub 2 evaluation test set (eLh2-93) .
This is a 5,000 word closed vocabulary, non-verbalised punctuation test. It consists
of 200 utterances, 20 from each of 10 different speakers, and is recorded using a
Sennheiser HMD 410 microphone. The prompting texts are from the Wall Street
Journal. Results are reported for a system using the standard ARPA 5k bigram
language model.
The Spoke 5 test (dt....s5_93) is designed for evaluation of unsupervised channel adaptation algorithms. It consists of a total of 216 utterances from 10 different speakers.
Each speaker's data was recorded with a different microphone. In all cases simultaneous recordings were made using a Sennheiser microphone. The task is a 5,000
word, closed vocabulary, non-verbalised punctuation test. Results are only reported
for the data recorded using the Sennheiser microphone. This is a matched test since
the same microphone is used to record the training data. The standard ARPA 5k
bigram language model was used for the tests. Further details of the November
1993 spoke 5 and hub 2 tests, can be found in (Pallett, Fiscus, Fisher, Garofolo,
Lund & Pryzbocki 1994).
Four techniques were evaluated on the WSJ corpus; a single network with 500 hidden
units, a boosted ensemble with 3 networks with 500 hidden units each, a gated
ensemble of the boosted networks and a mixture trained from boosted ensembles.
As can be seen from the table, boosting has resulted in significant improvements in
performance for both the test sets over a single model. In addition, in common with
the results on the Bellcore digits, whilst the gating combination method does not
give an improvement over simple averaging, the retraining of the whole ensemble
using the mixed combination method gives an average improvement of a further 8%
over the averaging method.
CONCLUSION
This paper has described a number of ensemble methods for use with neural network
acoustic models. It has been shown that through the use of principled methods
such as boosting and mixtures the performance of these models may be improved
over standard ensemble techniques. In addition, by combining the techniques via
boot-strapping mixtures using the boosted networks the performance of the models
can be improved further. Previous work, which focused on boosting at the word
level showed improvements for a recurrent network:HMM hybrid at the word level
over the baseline system (Cook & Robinson 1996). This paper has shown how the
performance of a static MLP system can also be improved by boosting at the frame
level.
Acknowledgements
Many thanks to Bellcore for providing the digits data set to our partners, ICSI;
Nikki Mirghafori for help with datasets; David Johnson for providing the starting
point for our code development; and Dan Kershaw for his invaluable advice.
References
Cook, G. & Robinson, A. (1996), Boosting the performance of connectionist large
-vocabulary speech recognition, in 'International Conference on Spoken Language Processing'.
Drucker, H., Schapire, R. & Simard, P. (1993), Improving Performance in Neural
Networks Using a Boosting Algorithm, in S. Hanson, J. Cowan & C. Giles, eds,
'Advances in Neural Information Processing Systems 5', Morgan Kauffmann,
pp.42-49.
Hansen, L. & Salamon, P. (1990), 'Neural Network Ensembles', IEEE Transactions
on Pattern Analysis and Machine Intelligence 12, 993-1001.
Hochberg, M., Renals, S. & Robinson, A. (1994), 'ABBOT: The CUED hybrid connectionist-HMM large-vocabulary recognition system', Proc. of Spoken
Language Systems Technology Workshop, ARPA.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J . & Hinton, G. E. (1991), 'Adaptive
mixtures of local experts', Neural Computation 3 (1), 79-87.
Pallett, D., Fiscus, J., Fisher, W., Garofolo, J., Lund, B. & Pryzbocki, M. (1994),
'1993 Benchmark Tests for the ARPA Spoken Language Program', ARPA
Workshop on Human Language Technology pp. 51-73. Merrill Lynch Conference Center, Plainsboro, NJ.
Paul, D. & Baker, J . (1992), The Design for the Wall Street Journal-based CSR Corpus, in 'DARPA Speech and Natural Language Workshop', Morgan Kaufman
Publishers, Inc., pp. 357-62.
Perrone, M. P. & Cooper, L. N. (1993), When networks disagree: Ensemble methods for hybird neural networks, in 'Neural Networks for Speech and Image
Processing', Chapman-Hall.
| 1280 |@word merrill:1 bigram:3 retraining:1 eng:2 jacob:3 pick:1 reduction:1 initial:2 series:1 united:1 selecting:1 current:2 nowlan:2 additive:1 partition:1 designed:1 intelligence:1 selected:3 cook:7 short:1 record:1 filtered:2 provides:2 boosting:22 postal:1 lexicon:1 firstly:1 along:1 consists:5 combine:5 dan:1 expected:1 multi:4 inspired:1 window:1 classifies:1 baker:2 matched:1 what:1 kaufman:1 complimentary:1 whilst:2 differing:1 spoken:3 nj:1 every:1 voting:3 uk:2 unit:6 service:1 engineering:2 cheat:4 merge:1 approximately:1 might:1 plus:2 garofolo:2 practical:1 cb2:1 digit:9 procedure:4 oflocal:1 word:8 selection:4 layered:1 context:3 map:1 center:1 starting:1 focused:1 oh:1 his:1 sennheiser:3 kauffmann:1 target:4 enhanced:1 us:2 recognition:10 database:9 region:1 fiscus:2 icsi:1 yk:4 principled:3 broken:1 cam:2 dynamic:1 trained:17 darpa:1 train:5 effective:1 approached:1 whose:1 solve:1 unseen:2 sequence:1 advantage:1 adaptation:1 renals:2 combining:2 achieve:1 description:1 produce:2 generating:1 wsj:1 help:1 cued:1 recurrent:2 ac:2 differ:1 direction:1 correct:2 filter:2 human:1 generalization:1 wall:6 secondly:1 considered:2 hall:1 trigram:1 proc:1 label:1 currently:1 hansen:2 weighted:1 lynch:1 aim:1 boosted:17 improvement:6 contrast:1 baseline:1 posteriori:1 typically:3 lj:1 hidden:5 overall:2 classification:9 bellcore:8 development:1 softmax:2 sampling:1 chapman:1 ipz:1 unsupervised:1 connectionist:7 randomly:4 resulted:1 consisting:2 interest:1 mlp:6 evaluation:3 mixture:15 punctuation:2 necessary:1 divide:1 re:1 isolated:4 arpa:6 soft:2 giles:1 abbot:4 coeficients:1 subset:3 entry:1 johnson:1 reported:3 answer:2 combined:5 thanks:1 international:1 recorded:3 choose:1 stochastically:2 expert:24 simard:2 prompting:1 account:1 coefficient:2 inc:1 performed:1 closed:2 start:1 phoneme:8 ensemble:62 ofthe:1 yes:1 produced:1 simultaneous:1 ed:1 email:1 energy:1 pp:3 static:1 knowledge:2 salamon:2 steve:1 dt:2 supervised:1 improved:4 evaluated:5 hence:1 kershaw:1 plp:1 speaker:10 criterion:1 m:2 complete:1 invaluable:1 meaning:1 image:1 novel:1 superior:1 common:1 khz:1 winner:1 million:2 significant:2 cambridge:3 outlined:1 language:8 add:1 posterior:4 showed:1 phone:5 phonetic:1 binary:1 seen:1 morgan:2 additional:1 zip:1 england:1 cross:3 divided:2 basic:1 expectation:1 addition:3 concluded:1 publisher:1 rest:1 recording:1 cowan:1 member:15 jordan:2 isolation:1 architecture:1 pallett:2 drucker:3 retrains:1 synchronous:1 speech:15 nine:1 probabilty:1 simplest:2 schapire:2 estimated:1 delta:1 correctly:2 summarised:1 four:1 pj:3 spoke:2 kept:1 saying:1 hochberg:2 investigates:1 layer:3 hi:2 bound:2 performing:1 optical:1 relatively:2 department:2 according:3 combination:12 perrone:2 describes:2 ate:1 smaller:1 character:1 wta:2 agree:1 previously:3 turn:1 committee:1 available:2 appropriate:1 gate:13 clustering:2 remaining:2 ensure:1 conquer:1 summarises:1 question:2 kth:2 separate:3 street:6 decoder:1 hmm:6 partner:1 code:2 providing:2 design:1 gated:9 disagree:3 boot:1 observation:1 markov:1 datasets:1 benchmark:2 november:2 incorrectly:1 hinton:2 looking:1 frame:10 csr:1 david:1 cheating:1 sentence:1 hanson:1 acoustic:10 robinson:5 address:2 proceeds:1 dynamical:1 pattern:2 lund:2 program:1 greatest:1 overlap:1 natural:1 hybrid:4 scheme:4 improve:3 technology:2 naive:1 utterance:3 text:1 prior:2 acknowledgement:1 waterhouse:4 mixed:7 validation:3 consistent:2 principle:1 pryzbocki:2 allow:1 vocabulary:6 author:1 made:1 adaptive:1 far:1 transaction:1 sequentially:1 
corpus:6 continuous:2 table:5 additionally:2 nature:1 channel:1 tel:1 obtaining:1 improving:3 whole:1 paul:2 advice:1 biggest:1 cooper:2 third:3 weighting:1 learns:2 gating:2 hub:2 consist:1 hmd:1 workshop:2 easier:1 likely:1 gary:1 extracted:1 twofold:1 labelled:2 fisher:2 considerable:1 hard:4 included:1 determined:1 averaging:3 microphone:5 total:1 pas:1 vote:3 perceptrons:2 select:3 |
308 | 1,281 | Dynamically Adaptable CMOS
Winner-Take-All Neural Network
Kunihiko Iizuka, Masayuki Miyamoto and Hirofumi Matsui
Information Technology Research Laboratories
Sharp
Tenri, Nara, JAPAN
Abstract
The major problem that has prevented practical application of analog
neuro-LSIs has been poor accuracy due to fluctuating analog device
characteristics inherent in each device as a result of manufacturing.
This paper proposes a dynamic control architecture that allows analog
silicon neural networks to compensate for the fluctuating device
characteristics and adapt to a change in input DC level. We have
applied this architecture to compensate for input offset voltages of an
analog CMOS WTA (Winner-Take-All) chip that we have fabricated.
Experimental data show the effectiveness of the architecture.
1
INTRODUCTION
Analog VLSI implementation of neural networks, such as silicon retinas and adaptive
filters, has been the focus of much active research. Since it utilizes physical laws that
electric devices obey for neural operation, circuit scale can be much smaller than that of a
digital counterpart and massively parallel implementation is possible. The major problem
that has prevented practical applications of these LSIs has been fluctuating analog device
characteristics inherent in each device as a result of manufacturing. Historically, this has
been the main reason most analog devices have been superseded by digital devices.
Analog neuro VLSI is expected to conquer this problem by making use of its adaptability.
This optimistic view comes from the fact that in spite of the unevenness of their
components, biological neural networks show excellent competence.
This paper proposes a CMOS circuit architecture that dynamically compensates for
fluctuating component characteristics and at the same time adapts device state to
incoming signal levels. There are some engineering techniques available to compensate
for MOS threshold fluctuation, e.g., the chopper comparator, but they need a periodical
change of mode to achieve the desired effect. This is because there are two modes one for
the adaptation and one for the signal processing. This is quite inconvenient because extra
clock signals are needed and a break of signal processing takes place.
Incoming signals usually consist of a rapidly changing foreground component and a
slowly varying background component. To process these signals incessantly, biological
neural networks make use of multiple channels having different temporaVspatial scales.
While a relatively slow/large channel is used to suppress background floating, a
faster/smaller channel is devoted to process the foreground signal. The proposed method
inspired by this biological consideration utilizes different frequency bands for adaptation
and signal processing (Figure 1), where negative feedback is applied through a low pass
filter so that the feedback will not affect the foreground signal processing.
Figure 1: Dynamic adaptation by frequency divided control. (a) model diagram, (b)
frequency division.
In the first part of this paper, a working analog CMOS WTA chip that we have test
fabricated is introduced. Then, dynamical adaptation for this WTA chip is described and
experimental results are presented.
2
ANALOG CMOS WTA CHIP
2.1
ARCHITECTURE AND SPECIFICATION
Figure 2: Analog CMOS WTA chip architecture
Figure 3: Circuit diagrams for (a) the competitive cell and (b) the feedback controller.
As a basic building block to construct neuro-chips, analog WTA circuits have been
investigated by researchers such as [Lazzaro, 1989] and [Pedroni, 1994]. All CMOS
analog WTA circuits are based on voltage follower circuits [Pedroni, 1995] to realize
competition through inhibitory interaction, and they use feedback mechanisms to enhance
resolution gain. The architecture of the chip that we have fabricated is shown in Figure 2
and the circuit diagram is in Figure 3. This WTA chip indicates the lowest input voltage
by driving the output corresponding to the lowest input voltage to near Vss (winner),
and others nearly the power supply voltage Vdd (loser). The circuit is similar to [Sheu,
1993], but represents two advances.
1. The steering current that the feedback controller absorbs from the line CM is
enlarged, so that the winner cell can compete with others in the region where
resolution gain is the largest.
2. The feedback controller originally placed after the second competitive layer is
removed in order to guarantee the existence of at least one output node whose voltage
is nearly zero.
Table 1 shows the specifications of the fabricated chip.
Table 1: Specifications of the fabricated WTA chip
2.2
INPUT OFFSET VOLTAGE
Input offset voltages of a WTA chip may greatly deteriorate chip performance. Examples
of input offset voltage distribution of the fabricated chips are shown in Figure 4. Each
input offset voltage is measured relative to the first input node. The input offset voltage
ΔVj of the j-th input node is defined as ΔVj = Vin,j − Vin,1 when the voltages of the output
nodes Outj and Out1 are equal; Vin,1 is fixed to a certain voltage and the voltages of the other
input nodes are fixed at a relatively high voltage.
Figure 4: Examples of measured input offset voltage distribution.
The primary factor of the input offset voltage is considered to be fluctuation of MOS
transistor threshold voltages in the first layer competitive cell. Then, the input offset
voltage ~Vj of this cell yielded by the small fluctuation ~Vth i of Vth i is calculated as
follows:
    ΔVj = −ΔVth1 + ((gd1 + gd2 + gm2)/gm1) (ΔVth2 − ΔVth3) + (gm4 (gd1 + gd2 + gm2)/(gm1 gm3)) ΔVth4,
where gmi and gdl are the transconductance and the drain conductance of MOS Mi,
respectively. Using design and process parameters, we can estimate the input offset
voltage to be
    ΔVj ≈ −ΔVth1 + (ΔVth2 − ΔVth3) + 0.15 ΔVth4.
Based on our experiences, the maximum fluctuation of Vth in a chip is usually smaller
than 20 mV, and it is reasonable to consider that the difference |ΔVth2 − ΔVth3| is even
smaller, perhaps less than 5 mV, because M2 and M3 compose a current mirror and are
closely placed. This implies that the maximum of ΔVj is about 28 mV, which is in rough
agreement with the measured data.
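A quick numerical check of this estimate, using the worst-case values assumed in the text, reproduces the 28 mV figure:

```python
# Worst-case input offset estimate from the approximation above
# (values taken from the text as assumed worst cases, not measured data).
dVth1 = 0.020               # 20 mV threshold fluctuation
dVth2_minus_dVth3 = 0.005   # 5 mV mismatch of the current-mirror pair
dVth4 = 0.020               # 20 mV
dVj_max = abs(-dVth1) + dVth2_minus_dVth3 + 0.15 * dVth4
print(dVj_max * 1e3)        # -> 28.0 (mV)
```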
3
DYNAMICAL ADAPTATION ARCHITECTURE
In Figure 5, we show the circuit implementation of the dynamically adaptable WTA function.
In each feedback channel, the difference between each output and the reference Vref is
fed back to the input node through a low pass filter consisting of R and C. The charge
stored in capacitor C is controlled by this feedback signal.
Let the linear approximation of the WTA chip DC characteristic be

    Vout,i = A (Vin,i − V0,i),

where Vin,i and Vout,i are the voltages at the nodes Ini and Outi respectively, and A and V0,i
are functions of Vin,j (j ≠ i). The input offset voltage relative to the node In1 is considered
to be the difference between V0,i and V0,1. On the other hand, the DC characteristic of the
i-th feedback path can be approximated as
Figure 5: WTA chip equipped with adaptation circuit, where R = 10 MΩ and C = 0.33 μF.
    Vin,i = B (Vout,i − Vref).

It follows from the above two equations that

    Vin,i = −(AB/(1 − AB)) V0,i − (B/(1 − AB)) Vref ≈ V0,i.

The last term is derived using the assumptions A >> 1 and B ≈ −1. This means that the
voltage difference between the DC level of the input and V0,i is clamped on the capacitor
C. This in turn implies that the input offset voltage will be successfully compensated for.
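The clamping behaviour can be illustrated with a crude discrete-time simulation of one feedback channel. The first-order filter model and all parameter values below are assumptions made for the sketch, not measurements of the fabricated circuit.

```python
def simulate_offset_cancellation(v_offset=0.015, A=100.0, B=-1.0,
                                 tau=3.3, dt=0.01, steps=2000):
    """One adaptation channel: slow negative feedback through an RC low-pass filter.

    The chip's DC characteristic is modelled as v_out = A*(v_in - v_offset),
    the feedback as B*(v_out - v_ref), and the capacitor voltage follows the
    feedback with time constant tau (all values assumed for illustration).
    """
    v_ref = 0.0
    v_c = 0.0                       # voltage stored on the capacitor C
    for _ in range(steps):
        v_in = v_c                  # input node is driven by the capacitor
        v_out = A * (v_in - v_offset)
        target = B * (v_out - v_ref)
        v_c += (target - v_c) * dt / tau   # first-order low-pass update
    return v_c                      # converges close to v_offset

print(simulate_offset_cancellation())   # ~0.0149, i.e. the 15 mV offset is clamped on C
```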
The role of the low pass filters is twofold.
1. They guarantee stable dynamics of the feedback loop; we can make the cutoff
frequency of the low pass filters small enough so that the gain of the feedback path is
attenuated before the phase of the feedback signal is delayed by more than 180°.
2. They prevent the feed-forward WTA operation from being affected; as shown in
Figure 1, the adaptive control is carried out on a different, non-overlapping frequency
band than the WTA operation.
4
EXPERIMENTAL RESULTS
Experiments concerning the adaptable WTA function were carried out by applying pulses
of 90% duty to the input nodes In'1 and In'2, while the other input nodes were fixed to a
certain voltage. In Figures 6(a) and 6(b), the output waveforms of Out1, Out2, Out3 and
the waveform of the pulse applied to the node In'1 are shown. Figure 6(a) shows the result
when the same pulse was applied to both In'1 and In'2. Figure 6(b) shows the result when
the amplitude of the pulse to In'1 was greater than that of the pulse to In'2 by 10 mV. The
schematic explanation of this behavior is in Figure 7. The outputs remained at the same
levels for a while after the inputs were shut off, since there was no strong inducement. As
a result of adaptation, the winning frequencies of all output nodes become equal on a
long time scale. This explains the unstable output during the period of quiescent inputs.
The chip used in this measurement had a relative input offset voltage of 15 mV between
nodes In1 and In2. We can see in Figure 6(a) that this offset voltage was completely
compensated for because the output waveforms of corresponding nodes were the same.
Figure 6: The output waveforms of the dynamically adaptable CMOS WTA neural
network. Pulse waves were applied to nodes In'1 and In'2; the other node voltages were fixed.
When the amplitude of each pulse was the same (a), the corresponding output waveforms
were the same. When the amplitude of the pulse fed to In'1 was greater than that to In'2 by
10 mV (b), the output voltage at Out1 was low (winner) and that at Out2 was high (loser)
during the period the pulse was low (on).
Figure 7: The schematic explanation of the dynamically adaptable WTA behavior.
5
CONCLUSION
We have proposed a dynamic adaptation architecture that uses frequency divided control
and applied this to a CMOS WTA chip that we have fabricated. Experimental results
show that the architecture successfully compensated for input offset voltages of the WTA
chip due to inherent device characteristic fluctuations. Moreover, this architecture gives
analog neuro-chips the ability to adapt to incoming signal background levels. This
adaptability has a lot of applications. For example, in vision chips, the adaptation may be
used to compensate for the fluctuation of photo sensor characteristics, to adapt the gain of
photo sensors to background illumination level and to automatically control color balance.
As another application, Figure 8 describes an analog neuron with weighted synapses,
where the time constant RC is much larger than the time constant of input signals.
Figure 8: Analog neuron with weighted synapses where the time constant RC is much
larger than that of input signals.
The key to this architecture is use of non-overlapping frequency bands for adaptation to
background and foreground signal processing. For neuro-VLSIs, this requires
implementing circuits with completely different time scale constants. In modern VLSI
technology, however, this is not a difficult problem because processes for very high
resistances, i.e., teraohms, are available.
Acknowledgment
The authors would like to thank Morio Osaka for his help in chip fabrication and Kazuo
Hashiguchi for his support in experimental work.
References
Choi, J. & Sheu, B.J. (1993) A high-precision VLSI winner-take-all circuit for self-organizing neural networks. IEEE J. Solid-State Circuits, vol. 28, no. 5, pp. 576-584.
Lazzaro, J., Ryckebush, S., Mahowald, M.A., & Mead, C. (1989) Winner-take-all
networks of O(N) complexity. In D.S. Touretzky (eds.), Advances in Neural Information
Processing Systems 1, pp. 703-711. Cambridge, MA: MIT Press.
Pedroni, V.A. (1994) Neural n-port voltage comparator network, Electron. Lett., vol. 30,
no. 21, pp. 1774-1775.
Pedroni, V.A. (1995) Inhibitory Mechanism Analysis of Complexity O(N) MOS Winner-Take-All Networks. IEEE Trans. Circuits Syst. I, vol. 42, no. 3, pp. 172-175.
| 1281 |@word nd:1 pulse:9 out1:1 solid:1 current:2 follower:1 realize:1 v:1 device:10 shut:1 node:16 rc:2 become:1 supply:1 compose:1 absorbs:1 deteriorate:1 expected:1 behavior:2 inspired:1 automatically:1 equipped:1 moreover:1 circuit:14 lowest:2 vref:4 cm:2 fabricated:7 guarantee:2 every:1 charge:1 gm1:1 control:5 before:1 engineering:1 mead:1 fluctuation:6 path:2 dynamically:8 matsui:4 practical:2 acknowledgment:1 block:1 spite:1 periodical:1 applying:1 compensated:3 resolution:2 m2:1 osaka:1 his:2 mq:1 us:1 agreement:1 overlapped:1 approximated:1 role:1 region:1 removed:1 complexity:2 dynamic:4 vdd:2 ali:1 division:1 completely:2 aii:1 chip:23 quite:1 whose:1 larger:2 compensates:1 ability:1 transistor:1 interaction:1 adaptation:10 loop:1 rapidly:1 loser:2 roser:1 achieve:1 adapts:1 yout:1 competition:1 vinj:2 cmos:12 help:1 mmm:1 measured:3 strong:1 come:1 implies:2 waveform:5 closely:1 filter:6 implementing:1 explains:1 biological:3 gm4:1 mm:1 considered:2 mo:4 electron:1 major:2 largest:1 successfully:2 weighted:2 rough:1 mit:1 sensor:2 varying:1 voltage:32 derived:1 focus:1 yo:1 indicates:1 greatly:1 lizuka:1 nand:1 vlsi:4 proposes:2 equal:2 construct:1 having:1 represents:1 nearly:2 foreground:5 others:2 inherent:3 retina:1 modern:1 vth2:2 m4:1 delayed:1 floating:1 phase:1 consisting:1 ab:3 conductance:1 devoted:1 experience:1 voj:1 masayuki:1 desired:1 miyamoto:4 inconvenient:1 mahowald:1 fabrication:1 stored:1 my:3 gd:4 off:1 enhance:1 slowly:1 syst:1 hysteresis:1 mv:3 view:1 break:1 lot:1 optimistic:1 competitive:3 wave:1 parallel:1 vin:3 accuracy:1 characteristic:8 researcher:1 synapsis:2 touretzky:1 ed:1 frequency:9 pp:3 gdl:1 mi:2 gain:5 color:1 amplitude:3 adaptability:2 back:1 adaptable:8 feed:1 originally:1 rand:1 outj:1 clock:1 working:1 hand:1 overlapping:1 vre:1 mode:2 perhaps:1 building:1 effect:1 counterpart:1 laboratory:1 ll:1 during:2 wfa:5 m5:1 vo:2 consideration:1 physical:1 winner:9 sheu:2 vin1:2 analog:16 silicon:2 measurement:1 cambridge:1 gmt:1 winnertake:1 had:1 specification:3 stable:1 v0:1 out2:5 massively:1 certain:2 greater:2 steering:1 kazuo:1 period:2 signal:17 multiple:1 out3:1 faster:1 adapt:3 compensate:4 nara:1 long:1 divided:2 concerning:1 prevented:2 gm2:2 controlled:1 schematic:2 neuro:5 basic:1 controller:4 vision:1 cell:4 background:6 iiii:1 diagram:3 extra:1 ot:1 capacitor:2 effectiveness:1 near:1 iii:2 enough:1 affect:1 architecture:11 attenuated:1 duty:1 resistance:1 lazzaro:2 selforganizing:1 band:4 inhibitory:2 wr:1 vol:2 affected:1 key:1 threshold:2 changing:1 prevent:1 cutoff:1 compete:1 place:1 reasonable:1 utilizes:2 mii:1 layer:3 yielded:1 transconductance:1 relatively:2 gmi:1 poor:1 smaller:4 describes:1 em:1 wta:9 making:2 equation:1 avj:1 turn:1 mechanism:2 needed:1 fed:2 photo:2 available:2 operation:3 obey:1 fluctuating:4 existence:1 in2:2 conquer:1 in1:1 primary:1 thank:1 unstable:2 reason:1 balance:1 difficult:1 negative:1 suppress:1 implementation:3 design:1 allowing:1 neuron:2 dc:4 sharp:1 competence:1 introduced:1 trans:1 usually:2 dynamical:2 explanation:2 power:1 historically:1 technology:2 superseded:1 carried:2 drain:1 relative:3 law:1 pedroni:4 wra:5 digital:2 port:1 lo:1 outl:2 placed:2 last:1 iim:1 unevenness:1 feedback:13 calculated:1 lett:1 forward:1 author:1 adaptive:2 arcmtecture:1 active:1 incoming:3 anda:1 quiescent:2 iizuka:2 table:2 channel:4 excellent:1 investigated:1 cl:3 electric:1 vj:4 main:1 enlarged:1 slow:1 precision:1 winning:1 clamped:1 remained:1 choi:1 er:1 offset:15 consist:1 mirror:1 
illumination:1 vii:1 chopper:1 lap:1 yin:1 vth:3 inl:2 corresponds:1 ma:1 kunihiko:1 comparator:2 manufacturing:2 vb2:1 twofold:1 change:2 wt:1 vo1:3 pas:5 experimental:5 m3:1 support:1 |
309 | 1,282 | Unification of Information Maximization
and Minimization
Ryotaro Kamimura
Information Science Laboratory
Tokai University
1117 Kitakaname Hiratsuka Kanagawa 259-12, Japan
E-mail: ryo@cc.u-tokai.ac.jp
Abstract
In the present paper, we propose a method to unify information
maximization and minimization in hidden units. The information
maximization and minimization are performed on two different levels: collective and individual level. Thus, two kinds of information:
collective and individual information are defined. By maximizing
collective information and by minimizing individual information,
simple networks can be generated in terms of the number of connections and the number of hidden units. Obtained networks are
expected to give better generalization and improved interpretation
of internal representations. This method was applied to the inference of the maximum onset principle of an artificial language. In
this problem, it was shown that the individual information minimization is not contradictory to the collective information maximization. In addition, experimental results confirmed improved
generalization performance, because over-training can significantly
be suppressed.
1
Introduction
There have been many attempts to interpret neural networks from the information
theoretical point of view [2], [4], [5]. Applied to the supervised learning, information has been maximized and minimized, depending on problems. In these methods,
information is defined by the outputs of hidden units. Thus, the methods aim to
control hidden unit activity patterns in an optimal manner. Information maximization methods have been used to interpret explicitly internal representations and
simultaneously to reduce the number of necessary hidden units [5]. On the other
hand, information minimization methods have been especially used to improve generalization performance [2], [4] and to speed up learning. Thus, if it is possible to
Unification of Information Maximization and Minimization
509
maximize and minimize information simultaneously, information theoretic methods
are expected to be applied to a wide range of problems.
In this paper, we unify the above mentioned two methods, namely, information
maximization and minimization methods, into one framework to improve generalization performance and to interpret explicitly internal representations. However, it
is apparently impossible to maximize and minimize simultaneously the information
defined by the hidden unit activity. Our goal is to maximize and to minimize information on two different levels, namely, collective and individual levels. This means
that information can be maximized in collective ways and information is minimized
for individual input-hidden connections. The seeming contradictory proposition of
the simultaneous information maximization and minimization can be overcome by
assuming the existence of the two levels for the information control.
Information is supposed to be controlled by an information controller located outside
neural networks and used exclusively to control information. By assuming the
information controller, we can clearly see how information appropriately defined can
be maximized or minimized. In addition, the actual implementation of information
methods is much easier by introducing a concept of the information controller.
2
Concept of Information
In this section, we explain a concept of information in a general framework of an information theory. Let Y take on a finite number of possible values Yl, Y2, ... , YM with
probabilities P(Yl), P(Y2), ... , p(YM), respectively. Then, initial uncertainty H(Y) of
a random variable Y is defined by
    H(Y) = − Σ_{j=1}^{M} p(y_j) log p(y_j).        (1)
Now, consider conditional uncertainty after the observation of another random variable X, taking possible values x_1, x_2, ..., x_S with probabilities p(x_1), p(x_2), ..., p(x_S),
respectively. Conditional uncertainty H(Y | X) can be defined as

    H(Y | X) = − Σ_{s=1}^{S} p(x_s) Σ_{j=1}^{M} p(y_j | x_s) log p(y_j | x_s).        (2)
We can easily verify that conditional uncertainty is always less than or equal to
initial uncertainty. Information is usually defined as the decrease of this uncertainty
[1].
    I(Y | X) = H(Y) − H(Y | X)
             = − Σ_{j=1}^{M} p(y_j) log p(y_j) + Σ_{s=1}^{S} p(x_s) Σ_{j=1}^{M} p(y_j | x_s) log p(y_j | x_s)
             = Σ_{s=1}^{S} Σ_{j=1}^{M} p(x_s) p(y_j | x_s) log [ p(y_j | x_s) / p(y_j) ]
             = Σ_{s=1}^{S} p(x_s) I(Y | x_s),        (3)

where

    I(Y | x_s) = Σ_{j=1}^{M} p(y_j | x_s) log [ p(y_j | x_s) / p(y_j) ],
which is referred to as conditional information. Especially, when prior uncertainty
is maximum, that is, a prior probability is equi-probable (1/M), then information
is

    I(Y | X) = log M + Σ_{s=1}^{S} p(x_s) Σ_{j=1}^{M} p(y_j | x_s) log p(y_j | x_s),        (5)

where log M is the maximum uncertainty concerning Y.
3
Formulation of Information Controller
In this section, we apply a concept of information to actual network architectures
and define collective information and individual information. The notation in the
above section is changed into ordinary notation used in the neural network.
3.1
Unification by Information Controller
Two kinds of information, collective information and individual information, are
controlled by using an information controller. The information controller is devised
to interpret the mechanism of the information maximization and minimization more
explicitly. As shown in Figure 1, the information controller is composed of two
subcomponents, that is, an individual information minimizer and a collective information maximizer. A collective information maximizer is used to increase collective
information as much as possible. An individual information minimizer is used to
decrease individual information. By this minimization, the majority of connections
are pushed toward zero. Eventually, all the hidden units tend to be intermediately
activated. Thus, when the collective information maximizer and individual information maximizer are simultaneously applied, a hidden unit activity pattern is a
pattern of the maximum information in which only one hidden unit is on, while
all the other hidden units are off. However, the multiple strongly negative connections
needed to produce a maximum information state are replaced by extremely weak input-hidden connections. Strongly negative connections are inhibited by the individual
information minimization. This means that by the information controller, information can be maximized and at the same time one of the most important properties
of the information minimization, namely, weight decay or weight elimination, can
approximately be realized. Consequently, the information controller can generate
much simplified networks in terms of hidden units and in terms of input-hidden
connections.
3.2
Collective Information Maximizer
A neural network to be controlled is composed of input, hidden and output units
with bias, as shown in Figure 1. The jth hidden unit receives a net input from
input units and at the same time from a collective information maximizer:
    u_j^s = x_j + Σ_{k=0}^{L} w_jk ξ_k^s,        (6)
where Xj is an information maximizer from the jth collective information maximizer
to the jth hidden unit, L is the number of input units, Wjk is a connection from
the kth input unit to the jth hidden unit, and ξ_k^s is the kth element of the sth input
Figure 1: A network architecture, realizing the information controller.
pattern. The jth hidden unit produces an activity or an output by a sigmoidal
activation function:
    v_j^s = f(u_j^s) = 1 / (1 + exp(−u_j^s)).        (7)
The collective information maximizer is used to maximize the information contained
in hidden units. For this purpose, we should define collective information. Now,
suppose that, in the previous formulation of information, the symbols X and Y represent a set of input patterns and hidden units respectively. Then, let us approximate
a probability p(y_j | x_s) by the normalized output p_j^s of the jth hidden unit computed
by

    p_j^s = v_j^s / Σ_{m=1}^{M} v_m^s,        (8)

where the summation is over all the hidden units. Then, it is reasonable to suppose
that at an initial stage all the hidden units are activated randomly or uniformly
and all the input patterns are also randomly given to networks. Thus, a probability
p(y_j) of the activation of hidden units at the initial stage is equi-probable, that is,
1/M. A probability p(x_s) of input patterns is also supposed to be equi-probable,
namely, 1/S. Thus, information in the equation (3) is rewritten as
    I(Y | X) ≈ − Σ_{j=1}^{M} (1/M) log (1/M) + (1/S) Σ_{s=1}^{S} Σ_{j=1}^{M} p_j^s log p_j^s
             = log M + (1/S) Σ_{s=1}^{S} Σ_{j=1}^{M} p_j^s log p_j^s,        (9)

where log M is maximum uncertainty.

Figure 2: An interpretation of an input-hidden connection for defining the individual information.

This information is considered to be the
information acquired in a course of learning. Information maximizers are updated to
increase collective information. For obtaining update rules, we should differentiate
the information function with respect to information maximizers Xj:
    β S ∂I(Y | X)/∂x_j = β Σ_{s=1}^{S} ( log p_j^s − Σ_{m=1}^{M} p_m^s log p_m^s ) p_j^s (1 − v_j^s),        (10)
where β is a parameter.
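As an illustration, the sketch below evaluates the collective information of eq. (9) and the maximizer update of eq. (10) (as reconstructed above) from a matrix of hidden-unit activations; it assumes an S × M activation matrix and is not the author's implementation.

```python
import numpy as np

def collective_information(V):
    """Collective information of eq. (9) from hidden activations V (S x M).

    Rows are input patterns s, columns hidden units j; p_j^s is the
    normalized activation.  (Illustrative sketch, not the author's code.)
    """
    P = V / V.sum(axis=1, keepdims=True)
    M = V.shape[1]
    return np.log(M) + np.mean(np.sum(P * np.log(P + 1e-12), axis=1))

def maximizer_update(V, beta=0.015):
    """Gradient step for the information maximizers x_j, following eq. (10)."""
    P = V / V.sum(axis=1, keepdims=True)
    logP = np.log(P + 1e-12)
    inner = logP - np.sum(P * logP, axis=1, keepdims=True)  # log p_j - sum_m p_m log p_m
    return beta * np.sum(inner * P * (1.0 - V), axis=0)     # one increment per x_j
```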
3.3
Individual Information Minimization
For representing individual information by a concept of information discussed in
the previous section, we consider an output Pjk from the jth hidden unit only with
a connection from the kth input unit to the jth hidden unit:

    p_jk = 1 / (1 + exp(−w_jk)),        (11)
which is supposed to be a probability of the firing of the jth hidden unit, given the
firing of the kth input unit, as shown in Figure 2. Since this probability is considered
to be a probability, given the firing of the kth input unit, conditional information is
appropriate for measuring the information. In addition, it is reasonable to suppose
that a probability of the firing of the jth hidden unit is 1/2 at an initial stage
of learning, because we have no knowledge on hidden units. Thus, conditional
information for a pair of the kth unit and the jth hidden unit is formulated as
    I_jk(D | C fires) = − p_jk log (1/2) − (1 − p_jk) log (1 − 1/2)
                          + p_jk log p_jk + (1 − p_jk) log (1 − p_jk)
                      = log 2 + p_jk log p_jk + (1 − p_jk) log (1 − p_jk).        (12)
If connections are close to zero, this function is close to minimum information,
meaning that it is impossible to estimate the firing of the jth hidden unit. If
Table 1: An example of obtained input-hidden connections w_jk
by the information controller. The parameters β, μ and η were
0.015, 0.0008, and 0.01.
Hidden        Input Units ξ_k                       Bias     Information
Units v_j       1       2       3       4          w_j0     Maximizer x_j
 1            3.09   10.77   26.48   13.82        22.07       -60.88
 2           -3.35    0.11    0.33   -3.08        -0.95         1.63
 3           -0.01    0.00    0.00   -0.01         0.00       -10.93
 4            0.00    0.00    0.00    0.00         0.00       -10.94
 5            0.00    0.00   -0.01    0.00         0.00       -10.97
 6            0.02    0.01   -0.04    0.01         0.06       -12.01
 7            0.00    0.00   -0.01    0.00         0.00       -11.01
 8            0.00    0.00   -0.01    0.00         0.00       -11.00
 9            0.02    0.01   -0.03    0.01         0.03       -11.61
10            0.01    0.00   -0.02    0.00         0.07       -11.67
connections are larger, the information is larger and correlation between input and
hidden units is larger. Total individual information is the sum of all the individual
information, namely,
    I(D | C fires) = Σ_{j=1}^{M} Σ_{k=0}^{L} I_jk(D | C fires),        (13)
because each connection is treated separately or independently. The individual
information minimization directly controls the input-hidden connections. By differentiating the individual information function and a cross entropy cost function
with respect to the input-hidden connections w_jk, we have the following rules for updating the
input-hidden connections:

    −μ ∂I(D | C fires)/∂w_jk − η ∂G/∂w_jk = −μ w_jk p_jk (1 − p_jk) + η Σ_{s=1}^{S} δ_j^s ξ_k^s,        (14)
where δ_j^s is an ordinary delta for the cross entropy function G, and η and μ are parameters. Thus, the rules for updating the input-hidden connections are
closely related to the weight decay method. Clearly, the individual information
minimization corresponds to diminishing the strength of the input-hidden connections.
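For illustration, the individual information of eqs. (11)-(13) and the update of eq. (14) can be sketched as follows; the array shapes and parameter values are assumptions, not the author's code.

```python
import numpy as np

def individual_information(W):
    """Individual information of eqs. (11)-(13) for a weight matrix W (M x (L+1)).

    p_jk = sigmoid(w_jk); each connection contributes
    log 2 + p log p + (1 - p) log(1 - p).  (Illustrative sketch.)
    """
    P = 1.0 / (1.0 + np.exp(-W))
    eps = 1e-12
    I = np.log(2.0) + P * np.log(P + eps) + (1.0 - P) * np.log(1.0 - P + eps)
    return I.sum()

def weight_update(W, deltas, xi, mu=0.0008, eta=0.01):
    """Combined update of eq. (14): information minimization plus cross-entropy term.

    deltas : (S, M) ordinary error deltas for each pattern and hidden unit.
    xi     : (S, L+1) input patterns (including the bias input).
    """
    P = 1.0 / (1.0 + np.exp(-W))
    info_term = -mu * W * P * (1.0 - P)     # -mu * w_jk * p_jk (1 - p_jk)
    error_term = eta * deltas.T @ xi        # eta * sum_s delta_j^s xi_k^s
    return W + info_term + error_term
```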
4
Results and Discussion
The information controller was applied to the segmentation of strings of an artificial language into appropriate minimal elements, that is, syllables. Table 1 shows
input-hidden connections with the bias and the information maximizers. Hidden
units were ordered by the magnitude of the relevance of each hidden unit [6]. Collective information and individual information could sufficiently be maximized and
minimized. Relative collective and individual information were 0.94 and 0.13. In
this state, all the input-hidden connections except connections into the first two
hidden units are almost zero. Information maximizers Xj are all strongly negative
for these cases. These negative information maximizers make eight hidden units
(from the third to tenth hidden unit) inactive, that is, close to zero. By carefully
Table 2: Generalization performance comparison for 200 and 300 training patterns. Averages in the table are average generalization errors over
seven of the ten errors obtained with ten different initial values.
(a) 200 patterns

                                     Generalization Errors
                            RMS                      Error Rates
Methods                     Averages   Std. Dev.    Averages   Std. Dev.
Standard                    0.188      0.010        0.087      0.015
Weight Decay                0.183      0.004        0.082      0.009
Weight Elimination          0.172      0.014        0.064      0.015
Information Controller      0.167      0.011        0.052      0.008

(b) 300 patterns

                                     Generalization Errors
                            RMS                      Error Rates
Methods                     Averages   Std. Dev.    Averages   Std. Dev.
Standard                    0.108      0.009        0.024      0.009
Weight Decay                0.110      0.003        0.012      0.004
Weight Elimination          0.083      0.005        0.009      0.006
Information Controller      0.072      0.006        0.008      0.004
examining the first two hidden units, we could see that the first hidden unit and the
second hidden unit are concerned with rules for syllabification and an exceptional
case.
Then, networks were trained to infer the well-formedness of strings in addition to
the segmentation to examine generalization performance. Table 2 shows generalization errors for 200 and 300 training patterns. As clearly shown in the table,
the best generalization performance in terms of RMS and error rates is obtained by
the information controller. Thus, experimental results confirmed that in all cases
the generalization performance of the information controller is well over the other
methods. In addition, experimental results explicitly confirmed that better generalization performance is due to the suppression of over-training by the information
controller.
References
[1] R. Ash, Information Theory, John Wiley & Sons: New York, 1965.
[2] G. Deco, W. Finnof and H. G. Zimmermann, "Unsupervised mutual information criterion for elimination of overtraining in Supervised Multilayer Networks," Neural Computation, Vol. 7, pp.86- 107, 1995.
[3] R. Kamimura "Entropy minimization to increase the selectivity: selection and
competition in neural networks," Intelligent Engineering Systems through Artificial Neural Networks, ASME Press, pp.227-232, 1992.
[4] R. Kamimura, T. Takagi and S. Nakanishi, "Improving generalization performance by information minimization," IEICE Transactions on Information and
Systems, Vol. E78-D, No.2, pp.163-173, 1995.
[5] R. Kamimura and S. Nakanishi, "Hidden information maximization for feature
detection and rule discovery," Network: Computation in Neural Systems, Vo1.6,
pp.577-602, 1995.
[6] M. C. Mozer and P. Smolensky, "Using relevance to reduce network size automatically," Connection Science, Vol. 1, No. 1, pp. 3-16, 1989.
| 1282 |@word especially:2 uj:3 concept:5 y2:2 verify:1 normalized:1 majority:1 closely:1 laboratory:1 realized:1 elimination:4 kth:7 pjk:7 smolen:1 initial:6 criterion:1 generalization:14 exclusively:1 formedness:1 asme:1 proposition:1 probable:3 theoretic:1 summation:1 vo:1 mail:1 seven:1 toward:1 assuming:2 e:1 subcomponents:1 sufficiently:1 considered:2 activation:2 exp:1 meaning:1 minimizing:1 kamimura:7 john:1 negative:4 jp:1 purpose:1 update:1 discussed:1 interpretation:2 implementation:1 collective:21 interpret:4 observation:1 realizing:1 exceptional:1 finite:1 takagi:1 minimization:19 defining:1 equi:3 clearly:3 always:1 language:2 aim:1 sigmoidal:1 og:1 namely:5 pair:1 connection:25 manner:1 selectivity:1 acquired:1 expected:2 suppression:1 examine:1 inference:1 ol:1 minimum:1 usually:1 lj:2 pattern:11 automatically:1 diminishing:1 actual:2 hidden:53 xm:1 maximize:4 ii:1 notation:2 multiple:1 infer:1 treated:1 kind:2 string:2 cross:2 mutual:1 equal:1 devised:1 concerning:1 nakanishi:2 representing:1 improve:2 controlled:3 sky:1 unsupervised:1 controller:19 multilayer:1 york:1 x8:9 minimized:4 rm:1 intelligent:1 control:4 unit:49 inhibited:1 represent:1 randomly:2 prior:2 composed:2 simultaneously:4 addition:5 separately:1 engineering:1 individual:25 discovery:1 relative:1 replaced:1 fire:4 appropriately:1 attempt:1 firing:5 detection:1 approximately:1 tend:1 ash:1 principle:1 activated:2 range:1 concerned:1 course:1 changed:1 xj:6 yj:14 architecture:2 jth:11 unification:5 necessary:1 reduce:2 bias:6 lle:1 wide:1 lira:1 taking:1 differentiating:1 inactive:1 rms:2 significantly:1 theoretical:1 minimal:1 overcome:1 dev:4 simplified:1 close:3 selection:1 measuring:1 logp:2 maximization:12 ordinary:2 impossible:2 cost:1 introducing:1 oflearning:1 transaction:1 approximate:1 maximizing:1 ten:2 independently:1 y8:1 unify:2 generate:1 rule:5 ryo:1 delta:1 table:5 yl:2 off:1 kanagawa:1 ym:2 updated:1 vol:2 target:1 gm:1 suppose:3 obtaining:1 c5j:2 deco:1 improving:1 vj:3 intermediately:1 pj:7 element:2 tenth:1 located:1 updating:2 std:4 li:1 japan:1 sum:1 wjo:1 uncertainty:9 seeming:1 soi:1 referred:1 almost:1 reasonable:2 explicitly:4 wiley:1 onset:1 decrease:2 performed:1 view:1 mentioned:1 mozer:1 apparently:1 pushed:1 xl:1 third:1 syllable:1 ix:1 wjle:4 oxj:1 trained:1 activity:4 minimize:3 strength:1 xt:1 symbol:1 maximized:5 x2:2 ofthe:1 x:1 syllabification:1 decay:4 easily:1 maximizers:6 weak:1 speed:1 extremely:1 magnitude:1 confirmed:3 cc:1 theo1:1 artificial:3 overtraining:1 simultaneous:1 explain:1 easier:1 outside:1 entropy:3 son:1 suppressed:1 larger:3 lp:3 pp:5 contained:1 ordered:1 gp:3 zimmermann:1 corresponds:1 minimizer:3 differentiate:1 equation:1 ma:1 knowledge:1 net:1 conditional:6 eventually:1 propose:1 segmentation:2 mechanism:1 goal:1 carefully:1 formulated:1 consequently:1 fi1:1 supervised:2 rewritten:1 except:1 uniformly:1 improved:2 apply:1 eight:1 supposed:3 formulation:2 appropriate:2 strongly:3 vo1:1 competition:1 contradictory:2 wjk:2 stage:3 total:1 correlation:1 experimental:3 hand:1 receives:1 existence:1 wio:1 produce:2 internal:3 logpjk:2 maximizer:10 depending:1 log2:1 relevance:2 ij:1 ieice:1 |
310 | 1,283 | A variational principle for
model-based morphing
Lawrence K. Saul* and Michael I. Jordan
Center for Biological and Computational Learning
Massachusetts Institute of Technology
79 Amherst Street, EI0-034D
Cambridge, MA 02139
Abstract
Given a multidimensional data set and a model of its density,
we consider how to define the optimal interpolation between two
points. This is done by assigning a cost to each path through space,
based on two competing goals-one to interpolate through regions
of high density, the other to minimize arc length. From this path
functional, we derive the Euler-Lagrange equations for extremal
motion; given two points, the desired interpolation is found by solving a boundary value problem. We show that this interpolation can
be done efficiently, in high dimensions, for Gaussian, Dirichlet, and
mixture models.
1
Introduction
The problem of non-linear interpolation arises frequently in image, speech, and
signal processing. Consider the following two examples: (i) given two profiles of the
same face, connect them by a smooth animation of intermediate poses[1]; (ii) given a
telephone signal masked by intermittent noise, fill in the missing speech. Both these
examples may be viewed as instances of the same abstract problem. In qualitative
terms, we can state the problem as follows[2]: given a multidimensional data set,
and two points from this set, find a smooth adjoining path that is consistent with
available models of the data. We will refer to this as the problem of model-based
morphing.
In this paper, we examine this problem as it arises from statistical models of multidimensional data. Specifically, our focus is on models that have been derived from
*Current address: AT&T Labs, 600 Mountain Ave 2D-439, Murray Hill, NJ 07974
some form of density estimation. Though there exists a large body of work on
the use of statistical models for regression and classification, there has been comparatively little work on the other types of operations that these models support.
Non-linear morphing is an example of such an operation, one that has important
applications to video email[3], low-bandwidth teleconferencing[4], and audiovisual
speech recognition[2].
A common way to describe multidimensional data is some form of mixture modeling.
Mixture models represent the data as a collection of two or more clusters; thus, they
are well-suited to handling complicated (multimodal) data sets. Roughly speaking,
for these models the problem of interpolation can be divided into two tasks- how
to interpolate between points in the same cluster, and how to interpolate between
points in different clusters. Our paper will therefore be organized along these lines.
Previous studies of morphing have exploited the properties of radial basis function
networks[l] and locally linear models[2]. We have been influenced by both these
works, especially in the abstract formulation of the problem. New features of our
approach include: the fundamental role played by the density, the treatment of nonGaussian models, the use of a continuous variational principle, and the description
of the interpolant by a differential equation.
2
Intracluster interpolation
Let Q = {q(1), q(2), .. . , qlQI} denote a set of multidimensional data points, and let
P( q) denote a model of the distribution from which these points were generated.
Given two points, our problem is to find a smooth adjoining path that respects the
statistical model of the data. In particular, the desired interpolant should not pass
through regions of space that the modeled density P( q) assigns low probability.
2.1
Clusters and metrics
To develop these ideas further, we begin by considering a special class of modelsnamely, those that represent clusters. We say that P( q) models a data cluster
if P( q) has a unique (global) maximum; in turn, we identify the location of this
maximum, q"', as the prototype.
Let us now consider the geometry of the space inhabited by the data. To endow this
space with a geometric structure, we must define a metric, ga,B( q), that provides a
measure of the distance between two nearby points:
\mathcal{D}[q, q + dq] = \left[ \tfrac{1}{2}\, g_{\alpha\beta}(q)\, dq^\alpha dq^\beta \right]^{1/2} + O(|dq|^2). \qquad (1)
Intuitively speaking, the metric should reflect the fact that as one moves away from
the center of the cluster , the density of the data dies off more quickly in some
directions than in others. A natural choice for the metric, one that meets the above
criteria, is the negative Hessian of the log-likelihood:
g_{\alpha\beta}(q) = -\frac{\partial^2 \ln P(q)}{\partial q^\alpha\, \partial q^\beta}. \qquad (2)
269
A Variational Principle for Model-based Morphing
This metric is positive-definite if \ln P(q) is concave; this will be true for all the
examples we discuss.
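The Hessian prescription above can be checked numerically for any model. The sketch below is our own illustration, not code from the paper: it assumes a user-supplied log_density callable and estimates the metric by central finite differences; for a zero-mean Gaussian the result should recover its inverse covariance matrix.

```python
import numpy as np

def induced_metric(log_density, q, h=1e-4):
    """g(q) = -Hessian of ln P at q, estimated by central finite differences."""
    n = len(q)
    g = np.zeros((n, n))
    I = np.eye(n)
    for a in range(n):
        for b in range(n):
            d2 = (log_density(q + h*I[a] + h*I[b]) - log_density(q + h*I[a] - h*I[b])
                  - log_density(q - h*I[a] + h*I[b]) + log_density(q - h*I[a] - h*I[b])) / (4*h*h)
            g[a, b] = -d2
    return g

# Check on a zero-mean Gaussian with inverse covariance M: the metric is M itself.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
print(induced_metric(lambda x: -0.5 * x @ M @ x, np.array([0.3, -0.2])))  # approximately M
```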
2.2
From densities to paths
The problem of model-based interpolation is to balance two competing goalsone to interpolate through regions of high density, the other to avoid excessive
deformations. Using the metric in eq. (1), we can now assign a cost (or penalty) to
each path based on these competing goals.
Consider the path parameterized by q(t) . We begin by dividing the path into
segments, each of which is traversed in some small time interval, dt. We assign a
value to each segment by
\sigma(t) = \left\{ \frac{P(q(t))}{P(q^*)}\; e^{-f} \right\}^{\mathcal{D}[q(t),\, q(t+dt)]}, \qquad (3)
where f \ge 0. For reasons that will become clear shortly, we refer to f as the
line tension. The value assigned to each segment depends on two terms: a ratio
of probabilities, P(q(t))/P(q^*), which favors points near the prototype, and the
constant multiplier, e^{-f}. Both these terms are upper bounded by unity, and hence
so is their product. The value of the segment also decays with its length, as a result
of the exponent, \mathcal{D}[q(t), q(t + dt)].
We derive a path functional by piecing these segments together, multiplying their
individual contributions, and taking the continuum limit. A value for the entire
path is obtained from the product:
e^{-S} = \prod_t \sigma(t). \qquad (4)
Taking the logarithm of both sides, and considering the limit dt \to 0, we obtain the
path functional
S[q(t)] = \int \left\{ -\ln\!\left[ \frac{P(q(t))}{P(q^*)} \right] + f \right\} \left[ \tfrac{1}{2}\, g_{\alpha\beta}(q)\, \dot{q}^\alpha \dot{q}^\beta \right]^{1/2} dt, \qquad (5)
where \dot{q} \equiv \frac{d}{dt}[q] is the tangent vector to the path at time t. The terms in this
functional balance the two competing goals for non-linear interpolation. The first
favors paths that interpolate through regions of high density, while the second favors
paths with small arc lengths; both are computed under the metric induced by the
modeled density. The line tension f determines the cost per unit arc length and
modulates the competition between the two terms. Note that the value of the
functional does not depend on the rate at which the path is traversed.
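A discretized version of this functional makes the trade-off concrete: each segment is charged its density penalty times its arc-length element under the induced metric. The code below is a minimal sketch of ours, not the authors' implementation; it assumes callables for ln P(q), its value at the prototype, and the metric g(q).

```python
import numpy as np

def path_cost(path, log_density, log_density_at_prototype, metric, f):
    """Discretized analogue of eq. (5): sum over segments of
    (-ln[P(q)/P(q*)] + f) times the arc-length element of eq. (1)."""
    cost = 0.0
    for t in range(len(path) - 1):
        q, dq = path[t], path[t + 1] - path[t]
        ds = np.sqrt(0.5 * dq @ metric(q) @ dq)   # local arc length
        cost += (-(log_density(q) - log_density_at_prototype) + f) * ds
    return cost
```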
To minimize this functional, we use the following result from the calculus of
variations. Let \mathcal{L}(q, \dot{q}) denote the integrand of eq. (5), such that S[q(t)] =
\int dt\, \mathcal{L}(q, \dot{q}). Then the path which minimizes this functional obeys the Euler-Lagrange equations[5]:
\frac{d}{dt}\!\left[ \frac{\partial \mathcal{L}}{\partial \dot{q}^\alpha} \right] - \frac{\partial \mathcal{L}}{\partial q^\alpha} = 0. \qquad (6)
We define the model-based interpolant between two points as the path which minimizes this functional; it is found by solving the associated boundary value problem.
The function C( q, q) is known as the Lagrangian. In the next sections, we present
eq. (5) for two distributions of interest-the multivariate Gaussian and the Dirichlet.
2.3
Gaussian cloud
The simplest model of multidimensional data is the multivariate Gaussian. In this
case, the data is modeled by
P(x) = \frac{|M|^{1/2}}{(2\pi)^{N/2}} \exp\left\{ -\tfrac{1}{2}\, x^T M x \right\}, \qquad (7)
where M is the inverse covariance matrix and N is the dimensionality. Without
loss of generality, we have chosen the coordinate system so that the mean of the
data coincides with the origin. For the Gaussian, the mean also defines the location
of the prototype; moreover, from eq. (2), the metric induced by this model is just
the inverse covariance matrix. From eq. (5), we obtain the path functional:
S[x(t)] = \int \left\{ \tfrac{1}{2}\, x^T M x + f \right\} \left[ \tfrac{1}{2}\, \dot{x}^T M \dot{x} \right]^{1/2} dt. \qquad (8)
To find a model-based interpolant, we seek the path that minimizes this functional.
Because the functional is parameterization-invariant, it suffices to consider paths
that are traversed at a constant (unit) rate: \dot{x}^T M \dot{x} = 1. From eq. (6), we find that
the optimal path with this parameterization satisfies:
\left\{ \tfrac{1}{2}\, [x^T M x] + f \right\} \ddot{x} + [\dot{x}^T M x]\, \dot{x} = x. \qquad (9)
This is a set of coupled non-linear equations for the components of x(t). However,
note that at any moment in time, the acceleration, \ddot{x}, can be expressed as a linear
combination of the position, x, and the velocity, \dot{x}. It follows that the motion of x
lies in a plane; in particular, it lies in the plane spanned by the initial conditions,
x and \dot{x}, at time t = 0. This enables one to solve the boundary value problem
efficiently, even in very high dimensions.
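The planar structure can be verified numerically by integrating eq. (9), as reconstructed above, forward from given initial conditions and projecting the trajectory onto the span of x(0) and \dot{x}(0). The sketch below is ours and only treats this initial-value problem; solving the boundary value problem would additionally require a search, e.g. by shooting, over the initial velocity within that plane.

```python
import numpy as np

def integrate_eq9(M, x0, v0, f, dt=1e-3, steps=5000):
    """Euler integration of {0.5 x'Mx + f} xdd + (xd'Mx) xd = x."""
    x, v = x0.astype(float), v0.astype(float)
    traj = [x.copy()]
    for _ in range(steps):
        acc = (x - (v @ M @ x) * v) / (0.5 * x @ M @ x + f)
        v = v + dt * acc
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj)

M = np.eye(3)
x0 = np.array([1.0, 0.5, -0.2])
v0 = np.array([0.0, 1.0, 0.3])
traj = integrate_eq9(M, x0, v0, f=1.0)
# every point should lie (up to numerical error) in span{x0, v0}
basis, _ = np.linalg.qr(np.stack([x0, v0], axis=1))
print(np.abs(traj - traj @ basis @ basis.T).max())   # small residual
```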
Figure 1a shows some solutions to this boundary value problem for different values
of the line tension, f. Note how the paths bend toward the origin, with the degree
of curvature determined by the line tension, f.
2.4
Dirichlet simplex
For many types of data, the multivariate Gaussian distribution is not the most
appropriate model. Suppose that the data points are vectors of positive numbers
whose elements sum to one. In particular, we say that w is a probability vector
if w = (Wi, W2, . .. , WN) E 'R N , Wct > 0 for all a, and I:Ct Wct = 1. Clearly, the
multivariate Gaussian is not suited to data of this form, since no matter what the
mean and covariance matrix, it cannot assign zero probability to vectors outside
the simplex. Instead, a more natural model is the Dirichlet distribution:
P(w) = \frac{\Gamma(\theta)}{\prod_\alpha \Gamma(\theta_\alpha)}\; \prod_\alpha w_\alpha^{\theta_\alpha - 1}, \qquad (10)
where \theta_\alpha > 0 for all \alpha, and \theta \equiv \sum_\alpha \theta_\alpha. Here, \Gamma(\cdot) is the gamma function, and the
\theta_\alpha are parameters that determine the statistics of P(w). Note that P(w) = 0 for
vectors that are not probability vectors; in particular, the simplex constraints on w
are implicit assumptions of the model.
We can rewrite the Dirichlet distribution in a more revealing form as follows. First,
let w^* denote the probability vector with elements w^*_\alpha = \theta_\alpha/\theta. Then, making a
change of variables from w to \ln w, we have:
P(\ln w) = \frac{1}{Z_\theta} \exp\left\{ -\theta\, [\mathrm{KL}(w^*\|w)] \right\}, \qquad (11)
where Z_\theta is a normalization factor that depends on \theta_\alpha (but not w), and the quantity
in the exponent is \theta times the Kullback-Leibler (KL) divergence,
\mathrm{KL}(w^*\|w) = \sum_\alpha w^*_\alpha \ln\!\left[ \frac{w^*_\alpha}{w_\alpha} \right]. \qquad (12)
The KL divergence measures the mismatch between w and w^*, with \mathrm{KL}(w^*\|w) = 0
if and only if w = w^*. Since \mathrm{KL}(w^*\|w) has no other minima besides the one at w^*,
we shall say that P(\ln w) models a data cluster in the variable \ln w.
The metric induced by this modeled density is computed by following the prescription of eq. (2). For two nearby points inside the simplex, w and w + dw, the result
of this prescription is that the squared distance is given by
ds^2 = \theta \sum_\alpha \frac{dw_\alpha^2}{w_\alpha}. \qquad (13)
Up to a multiplicative factor of 2\theta, eq. (13) measures the infinitesimal KL divergence
between w and w + dw. This is a natural metric for vectors whose elements can
be interpreted as probabilities.
The functional for non-linear interpolation is found by substituting the modeled
density and the induced metric into eq. (5). For the Dirichlet distribution, this gives:
S[w(t)] = \int \left\{ \theta\, [\mathrm{KL}(w^*\|w)] + f \right\} \left[ \tfrac{\theta}{2} \sum_\alpha \frac{\dot{w}_\alpha^2}{w_\alpha} \right]^{1/2} dt. \qquad (14)
Our problem is to find the path that minimizes this functional. Because the functional is parameterization-invariant, it again suffices to consider paths that are
traversed at a constant rate, or \sum_\alpha \dot{w}_\alpha^2 / w_\alpha = 1. In addition to this, however, we
must also enforce the constraint that w remains inside the simplex; this is done
by introducing a Lagrange multiplier. Following this procedure, we find that the
optimal path is described by:
\left[ \theta\, \mathrm{KL}(w^*\|w) + f \right] \left\{ \ddot{w}_\alpha - \frac{\dot{w}_\alpha^2}{2 w_\alpha} + \frac{w_\alpha}{2} \right\} - \theta \left[ \sum_\beta \frac{w^*_\beta\, \dot{w}_\beta}{w_\beta} \right] \dot{w}_\alpha = \theta\, (w_\alpha - w^*_\alpha). \qquad (15)
Given two endpoints, this differential equation defines a boundary value problem
for the optimal path. Unlike before, however, in this case the motion of w is not
confined to a plane. Hence, the boundary value problem for eq. (15) does not
collapse to one dimension, as does its Gaussian counterpart, eq. (9).
To remedy this situation, we have developed an efficient approximation that finds
a near-optimal interpolant, in lieu of the optimal one. This is done in two steps:
first, by solving eq. (15) exactly in the limit f \to \infty; second, by using this limiting
solution, w^\infty(t), to find the lowest-cost path that can be expressed as the convex
combination:
w(t) = m(t)\, w^* + [1 - m(t)]\, w^\infty(t). \qquad (16)
The lowest-cost path of this form is found by substituting eq. (16) into the Dirichlet
functional, eq. (14), and solving the Euler-Lagrange equations for m(t). The motivation for eq. (16) is that for finite f, we expect the optimal interpolant to deviate
from w^\infty(t) and bend toward the prototype at w^*. In practice, this approximation
works very well, and by collapsing the boundary value problem to one dimension, it
allows cheap computation of the Dirichlet interpolants. Some paths from eq. (16),
as well as the f \to \infty paths on which they are based, are shown in Figure 1b. These
paths were computed for the twelve-dimensional simplex (N = 12), then projected
onto the w_1 w_2-plane.
3
Intercluster interpolation
The Gaussian and Dirichlet distributions of the previous section are clearly inadequate for modeling for multimodal data sets. In this section, we extend the
variational principle to mixture models, which describe the data as a collection of
k \ge 2 clusters. In particular, suppose the data is modeled by
P(q) = \sum_{z=1}^{k} \pi_z\, P(q|z). \qquad (17)
Here, we have assumed that the conditional densities P(q|z) model data clusters as
defined in section 2.1, and the coefficients \pi_z = P(z) define prior probabilities for
the latent variable, z \in \{1, 2, \ldots, k\}.
The crucial step for mixture models is to develop the appropriate generalization of
eq. (5). To this end, let \mathcal{L}_z(q, \dot{q}) denote the Lagrangian derived from the conditional
density, P(q|z), and f_z the line tension1 that appears in this Lagrangian. We now
combine these Lagrangians into a single functional:
S[q(t), z(t)] = \int dt\; \mathcal{L}_{z(t)}(q, \dot{q}). \qquad (18)
Note that eq. (18) is a functional of two arguments, not one. For mixture models,
which define a joint density P(q, z) = \pi_z\, P(q|z), our goal is to find the optimal path
in the joint space q \times z. Here, z(t) is a piecewise-constant function of time that
assigns a discrete label to each point along the path; in other words, it provides a
temporal segmentation of the path, q(t). The purpose of z(t) in eq. (18) is to select
which Lagrangian is used to compute the contribution from the interval [t, t + dt).
1 To respect the weighting of the mixture components in eq. (17), we set the line tensions
according to f_z = f - \ln \pi_z. Thus, components with higher weights have lower line tensions.
Figure 1: Model-based morphs for (a) Gaussian distribution; (b) Dirichlet distribution; (c) mixture of Gaussians. The prototypes are shown as asterisks; f denotes the
line tension. Figure 1c shows the convergence of the iterative algorithm; n denotes
the number of iterations.
As before, we define the model-based interpolant as the path q(t) that minimizes
eq. (18). In this case, however, both q(t) and z(t) must be simultaneously optimized
to recover this path. We have implemented an iterative scheme to perform this
optimization, one that alternately (i) estimates the segmentation z(t), (ii) computes
the model-based interpolant within each cluster based on this segmentation, and
(iii) reestimates the points (along the cluster boundaries) where z(t) changes value.
In short, the strategy is to optimize z(t) for fixed q(t), then optimize q(t) for fixed
z(t) .
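For the first half of each iteration, estimating z(t) with q(t) held fixed, a simple surrogate is to label every discretized path point with the component of highest weighted likelihood; minimizing the per-segment cost of eq. (18) directly would be slightly more expensive. The sketch below is ours, for a Gaussian mixture, and illustrates only this segmentation step.

```python
import numpy as np

def segment_path(path, means, inv_covs, log_weights):
    """Label each discretized path point with the mixture component of
    highest weighted log-likelihood (the 'optimize z(t) for fixed q(t)' step)."""
    labels = []
    for q in path:
        scores = [lw + 0.5 * np.linalg.slogdet(P)[1] - 0.5 * (q - m) @ P @ (q - m)
                  for m, P, lw in zip(means, inv_covs, log_weights)]
        labels.append(int(np.argmax(scores)))
    return np.array(labels)
```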
Figure 1c shows how this algorithm operates on a simple mixture of Gaussians. In
this example, the covariance matrices were set equal to the identity matrix, and
the means of the Gaussians were distributed along a circle in the x_1 x_2-plane. Note
that with each iteration, the interpolant converges more closely to the path that
traverses this circle. The effect is similar to the manifold-snake algorithm of Bregler
and Omohundro[2].
4
Discussion
In this paper we have proposed a variational principle for model-based interpolation.
Our framework handles Gaussian, Dirichlet, and mixture models, and the resulting
algorithms scale well to high dimensions. Future work will concentrate on the
application to real images.
References
[1] T. Poggio and F. Girosi. Networks for approximation and learning. Proc. of IEEE, vol
78:9 (1990).
[2] C. Bregler and S. Omohundro. Nonlinear image interpolation using manifold learning.
In G. Tesauro, D. Touretzky, and T. Leen (eds.). Advances in Neural Information
Processing Systems 7, 973-980. MIT Press, Cambridge, MA (1995).
[3] T . Ezzat. Example based analysis and synthesis for images of faces. MIT EECS M.S.
thesis (1996).
[4] D. Beymer, A. Shashua, and T. Poggio. Example based image analysis and synthesis.
AI Memo 1161, MIT (1993).
[5] H. Goldstein. Classical Mechanics. Addison-Wesley, London (1980).
311 | 1,284 | Analytical Mean Squared Error Curves
in Temporal Difference Learning
Satinder Singh
Department of Computer Science
University of Colorado
Boulder, CO 80309-0430
baveja@cs.colorado.edu
Peter Dayan
Brain and Cognitive Sciences
E25-210, MIT
Cambridge, MA 02139
bertsekas@lids.mit.edu
Abstract
We have calculated analytical expressions for how the bias and
variance of the estimators provided by various temporal difference
value estimation algorithms change with offline updates over trials
in absorbing Markov chains using lookup table representations. We
illustrate classes of learning curve behavior in various chains, and
show the manner in which TD is sensitive to the choice of its stepsize and eligibility trace parameters.
1
INTRODUCTION
A reassuring theory of asymptotic convergence is available for many reinforcement
learning (RL) algorithms. What is not available, however, is a theory that explains
the finite-term learning curve behavior of RL algorithms, e.g., what are the different
kinds of learning curves, what are their key determinants, and how do different
problem parameters effect rate of convergence. Answering these questions is crucial
not only for making useful comparisons between algorithms, but also for developing
hybrid and new RL methods. In this paper we provide preliminary answers to some
of the above questions for the case of absorbing Markov chains, where mean square
error between the estimated and true predictions is used as the quantity of interest
in learning curves.
Our main contribution is in deriving the analytical update equations for the two
components of MSE, bias and variance, for popular Monte Carlo (MC) and TD(A)
(Sutton, 1988) algorithms. These derivations are presented in a larger paper. Here
we apply our theoretical results to produce analytical learning curves for TD on
two specific Markov chains chosen to highlight the effect of various problem and
algorithm parameters, in particular the definite trade-offs between step-size, Q, and
eligibility-trace parameter, A. Although these results are for specific problems, we
believe that many ofthe conclusions are intuitive or have previous empirical support,
and may be more generally applicable.
2
ANALYTICAL RESULTS
A random walk , or trial, in an absorbing Markov chain with only terminal payoffs
produces a sequence of states terminated by a payoff. The prediction task is to
determine the expected payoff as a function of the start state i, called the optimal
value function, and denoted v^*. Accordingly, v^*_i = E\{r \mid s_1 = i\}, where s_t is the
state at step t, and r is the random terminal payoff. The algorithms analysed are
iterative and produce a sequence of estimates of v^* by repeatedly combining the
result from a new trial with the old estimate to produce a new estimate. They have
the form: v_i(t) = v_i(t-1) + a(t)\,\delta_i(t), where v(t) = \{v_i(t)\} is the estimate of the
optimal value function after t trials, \delta_i(t) is the result for state i based on random
trial t, and the step-size a(t) determines how the old estimate and the new result
are combined. The algorithms differ in the \delta's produced from a trial.
Monte Carlo algorithms use the final payoff that results from a trial to define the
\delta_i(t) (e.g., Barto & Duff, 1994). Therefore in MC algorithms the estimated value of a
state is unaffected by the estimated value of any other state. The main contribution
of TD algorithms (Sutton, 1988) over MC algorithms is that they update the value
of a state based not only on the terminal payoff but also on the the estimated
values of the intervening states. When a state is first visited, it initiates a shortterm memory process, an eligibility trace, which then decays exponentially over time
with parameter A. The amount by which the value of an intervening state combines
with the old estimate is determined in part by the magnitude of the eligibility trace
at that point.
In general, the initial estimate v(O) could be a random vector drawn from some
distribution, but often v(O) is fixed to some initial value such as zero. In either case,
subsequent estimates, v(t), t > 0, will be random vectors because of the random \delta's.
The random vector v(t) has a bias vector b(t) \equiv E\{v(t) - v^*\} and a covariance
matrix C(t) \equiv E\{(v(t) - E\{v(t)\})(v(t) - E\{v(t)\})^T\}. The scalar quantity of
interest for learning curves is the weighted MSE as a function of trial number t, and
is defined as follows:
\mathrm{MSE}(t) = \sum_i p_i\, E\{(v_i(t) - v^*_i)^2\} = \sum_i p_i\, \big(b_i^2(t) + C_{ii}(t)\big),
where p_i = (\mu^T [I - Q]^{-1})_i \,/\, \sum_j (\mu^T [I - Q]^{-1})_j is the weight for state i, which is the
expected number of visits to i in a trial divided by the expected length of a trial1
(\mu_i is the probability of starting in state i; Q is the transition matrix of the chain).
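Given the chain's transition matrix Q over non-terminal states, the start distribution \mu, and the bias and covariance at trial t, the weighted MSE above reduces to a few lines of linear algebra. The sketch below is our own restatement, not the software tool released with the paper.

```python
import numpy as np

def state_weights(mu, Q):
    """p_i proportional to the expected number of visits to state i per trial."""
    visits = mu @ np.linalg.inv(np.eye(len(mu)) - Q)
    return visits / visits.sum()

def weighted_mse(mu, Q, bias, cov):
    p = state_weights(mu, Q)
    return float(p @ (bias ** 2 + np.diag(cov)))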
In this paper we present results just for the standard TD(A) algorithm (Sutton,
1988), but we have analysed (Singh & Dayan, 1996) various other TD-like algorithms
(e.g., Singh & Sutton, 1996) and comment on their behavior in the conclusions. Our
analytical results are based on two non-trivial assumptions: first that lookup tables
are used, and second that the algorithm parameters a and A are functions of the
trial number alone rather than also depending on the state. We also make two
assumptions that we believe would not change the general nature of the results
obtained here: that the estimated values are updated offline (after the end of each
trial), and that the only non-zero payoffs are on the transitions to the terminal
states. With the above caveats, our analytical results allow rapid computation of
exact mean square error (MSE) learning curves as a function of trial number.
1 Other reasonable choices for the weights, p_i, would not change the nature of the results
presented here.
2.1
BIAS, VARIANCE, And MSE UPDATE EQUATIONS
The analytical update equations for the bias, variance and MSE are complex and
their details are in Singh & Dayan (1996) - they take the following form in outline:
b(t) = a^m + B^m\, b(t-1) \qquad (1)
C(t) = A^S + B^S\, C(t-1) + f^S(b(t-1)) \qquad (2)
where the matrix B^m depends linearly on a(t), and B^S and f^S depend at most quadratically on a(t). We coded this detail in the C programming language to develop a
software tool 2 whose rapid computation of exact MSE error curves allowed us to experiment with many different algorithm and problem parameters on many Markov
chains. Of course, one could have averaged together many empirical MSE curves
obtained via simulation of these Markov chains to get approximations to the analytical MSE error curves, but in many cases MSE curves that take minutes to
compute analytically take days to derive empirically on the same computer for five
significant digit accuracy. Empirical simulation is particularly slow in cases where
the variance converges to non-zero values (because of constant step-sizes) with long
tails in the asymptotic distribution of estimated values (we present an example in
Figure lc). Our analytical method, on the other hand, computes exact MSE curves
for L trials in O( Istate space 13 L) steps regardless of the behavior of the variance
and bias curves.
2.2
ANALYTICAL METHODS
Two consequences of having the analytical forms of the equations for the update
of the mean and variance are that it is possible to optimize schedules for setting a
and A and, for fixed A and a, work out terminal rates of convergence for band C.
Computing one-step optimal a's: Given a particular A, the effect on the MSE
of a single step for any of the algorithms is quadratic in a. It is therefore straightforward to calculate the value of a that minimises MSE(t) at the next time step.
This is called the greedy value of a. It is not clear that if one were interested
in minimising MSE(t + t'), one would choose successive a(u) that greedily minimise MSE(t); MSE(t + 1); .... In general, one could use our formulre and dynamic
programming to optimise a whole schedule for a( u), but this is computationally
challenging.
Note that this technique for setting greedy a assumes complete knowledge about
the Markov chain and the initial bias and covariance of v(O), and is therefore not
directly applicable to realistic applications of reinforcement learning. Nevertheless,
it is a good analysis tool to approximate omniscient optimal step-size schedules,
eliminating the effect of the choice of a when studying the effect of the A.
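Because MSE(t) is exactly quadratic in a for a fixed A, the greedy step-size follows from three evaluations of the analytical MSE: fit the quadratic and take its vertex. A minimal sketch of ours, assuming a callable mse_of_alpha supplied by the analytical update equations:

```python
import numpy as np

def greedy_alpha(mse_of_alpha, a_max=1.0):
    """MSE(t) is quadratic in alpha, so three samples determine it exactly."""
    xs = np.array([0.0, 0.5 * a_max, a_max])
    ys = np.array([mse_of_alpha(a) for a in xs])
    c2, c1, c0 = np.polyfit(xs, ys, 2)      # MSE = c2*a^2 + c1*a + c0
    if c2 <= 0:                              # degenerate case: best sampled endpoint
        return float(xs[np.argmin(ys)])
    return float(np.clip(-c1 / (2 * c2), 0.0, a_max))
```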
Computing one-step optimal A'S: Calculating analytically the A that would
minimize MSE(t) given the bias and variance at trial t - 1 is substantially harder
because terms such as [/ - A(t)Q]-l appear in the expressions. However, since it is
possible to compute MSE(t) for any choice of A, it is straightforward to find to any
desired accuracy the Ag(t) that gives the lowest resulting MSE(t). This is possible
only because MSE(t) can be computed very cheaply using our analytical equations.
The caveats about greediness in choosing ag{t) also apply to Ag(t). For one of the
Markov chains, we used a stochastic gradient ascent method to optimise A( u) and
2The analytical MSE error curve software is available via anonymous ftp from the
following address: ftp.cs.colorado.edu /users/baveja/ AMse. tar.Z
a( u) to minimise MSE(t + tf) and found that it was not optimal to choose Ag (t)
and ag(t) at the first step.
Computing terminal rates of convergence: In the update equations 1 and 2,
. b(t) depends linearly on b(t - 1) through a matrix Bm; and C(t) depends linearly
on C(t - 1) through a matrix B S . For the case of fixed a and A, the maximal and
minimal eigenvalues of B m and B S determine the fact and speed of convergence
of the algorithms to finite endpoints. If the modulus of the real part of any of
the eigenvalues is greater than 1, then the algorithms will not converge in general.
We observed that the mean update is more stable than the mean square update,
i.e., appropriate eigenvalues are obtained for larger values of a (we call the largest
feasible a the largest learning rate for which TD will converge). Further, we know
that the mean converges to v" if a is sufficiently small that it converges at all,
and so we can determine the terminal covariance. Just like the delta rule, these
algorithms converge at best to an (-ball for a constant finite step-size. This amounts
to the MSE converging to a fixed value, which our equations also predict. Further,
by calculating the eigenvalues of B m , we can calculate an estimate of the rate of
decrease of the bias.
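These stability checks amount to computing spectral radii of the linear parts of the mean and mean-square updates. A small sketch of the test (ours):

```python
import numpy as np

def spectral_radius(B):
    return float(np.abs(np.linalg.eigvals(B)).max())

def converges(B_mean, B_square):
    """For fixed a and lambda, the updates converge iff both radii are below 1;
    the gap to 1 also bounds the asymptotic per-trial contraction rate."""
    return spectral_radius(B_mean) < 1.0 and spectral_radius(B_square) < 1.0
```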
3
LEARNING CURVES ON SPECIFIC MARKOV
CHAINS
We applied our software to two problems: a symmetric random walk (SRW), and a
Markov chain for which we can control the frequency of returns to each state in a
single run (we call this the cyclicity of the chain).
Figure 1: Comparing Analytical and Empirical MSE curves. a) analytical and empirical
learning curves obtained on the 19 state SRW problem with parameters Ci = 0.01, ,\ = 0.9.
The empirical curve was obtained by averaging together more than three million simulation
runs, and the analytical and empirical MSE curves agree up to the fourth decimal place;
b) a case where the empirical method fails to match the analytical learning curve after
more than 15 million runs on a 5 state SRW problem. The empirical learning curve is
very spiky. c) Empirical distribution plot over 15.5 million runs for the MSE at trial 198.
The inset shows impulses at actual sample values greater than 100. The largest value is
greater than 200000.
Agreement: First, we present empirical confirmation of our analytical equations
on the 19 state SRW problem. We ran TD(A) for specific choices of a and A for more
than three million simulation runs and averaged the resulting empirical weighted
MSE error curves. Figure la shows the analytical and empirical learning curves,
which agree to within four decimal places.
Long-Tails of Empirical MSE distribution: There are cases in which the agreement is apparently much worse (see Figure Ib). This is because of the surprisingly
long tails for the empirical MSE distribution - Figure lc shows an example for a 5
state SRW. This points to interesting structure that our analysis is unable to reveal.
Effect of a and A: Extensive studies on the 19 state SRW that we do not have
space to describe fully show that: HI) for each algorithm, increasing a while holding
A fixed increases the asymptotic value of MSE, and similarly for increasing A whilst
holding a constant; H2) larger values of a or A (except A very close to 1) lead
to faster convergence to the asymptotic value of MSE if there exists one; H3) in
general, for each algorithm as one decreases A the reasonable range of a shrinks,
i.e., larger a can be used with larger A without causing excessive MSE. The effect
in H3 is counter-intuitive because larger A tends to amplify the effective step-size
and so one would expect the opposite effect. Indeed, this increase in the range of
feasible a is not strictly true, especially very near A = 1, but it does seem to hold
for a large range of A.
Me versus TD(A): Sutton (1988) and others have investigated the effect of A
on the empirical MSE at small trial numbers and consistently shown that TD is
better for some A < 1 than MC (A = 1). Figure 2a shows substantial changes as a
function of trial number in the value of A that leads to the lowest MSE. This effect
is consistent with hypotheses HI-H3. Figure 2b confirms that this remains true
even if greedy choices of a tailored for each value of A are used. Curves for different
values of A yield minimum MSE over different trial number segments. We observed
these effects on several Markov chains.
Figure 2: U-shaped Curves. a) Weighted MSE as a function of A and trial number for
fixed ex = 0.05 (minimum in A shown as a black line). This is a 3-d version of the U-shaped
curves in Sutton (1988), with trial number being the extra axis. b) Weighted MSE as a
function of trial number for various A using greedy ex. Curves for different values of A yield
minimum MSE over different trial number segments.
Initial bias: Watkins (1989) suggested that A trades off bias for variance, since A → 1 has low bias, but potentially high variance, and conversely for A → 0. Figure 3a
confirms this in a problem which is a little like a random walk, except that it is highly
cyclic so that it returns to each state many times in a single trial. If the initial bias
is high (low), then the initial greedy value of A is high (low). We had expected the
asymptotic greedy value of A to be 0, since once b(t) ..... 0, then A = leads to lower
variance updates. However, Figure 3a shows a non-zero asymptote - presumably
because larger learning rates can be used for A > 0, because of covariance. Figure 3b
shows, however, that there is little advantage in choosing A cleverly except in the
first few trials, at least if good values of a are available.
?
Eigenvalue stability analysis: We analysed the eigenvalues of the covariance
update matrix BS (c.f. Equation 2) to determine maximal fixed a as a function
of A. Note that larger a tends to lead to faster learning, provided that the values
converge. Figure 4a shows the largest eigenvalue of B S as a function of A for various
Analytical MSE Curvesfor TD Learning
1059
Accumulate
a)
1.0
,-----~-
b)
_ _ _ _-----,
02
--
0.0 ':-------,~---:-:':":__----:-::'.
0.0
SO.O
100.0
lSO.0
Figure 3: Greedy A for a highly cyclic problem. a) Greedy A for high and low initial bias
(using greedy a) . b) Ratio of MSE for given value of A to that for greedy A at each trial.
The greedy A is used for every step.
Figure 4: Eigenvalue analysis of covariance reduction. a) Maximal modulus of the eigenvalues of B S ? These determine the rate of convergence of the variance. Values greater than
1 lead to instability. b) Largest a such that the covariance is bounded. The inset shows a
blowup for 0.9 ;:; A ;:; 1. Note that A = 1 is not optimal. c) Maximal bias reduction rates
as a function of A, after controlling for asymptotic variance (to 0.1 and 0.01) by choosing
appropriate a's. Again, A < 1 is optimal.
a. If this eigenvalue is larger than 1, then the algorithm will diverge - a behavior
that we observed in our simulations. The effect of hypothesis H3 above is evident: for larger A, only smaller a can be used. Figure 4b shows this in more graphic form,
indicating the largest a that leads to stable eigenvalues for B S . Note the reversal
very dose to A = 1, which provides more evidence against the pure MC algorithm.
The choice of a and A control both rate of convergence and the asymptotic MSE.
In Figure 4c we control for the asymptotic variance by choosing appropriate as as
a function of .x and plot maximal eigenvalues of 8 m (c.f. Equation 1; it controls
the terminal rate of convergence of the bias to zero) as a function of A. Again , we
see evidence for T Dover Me.
4
CONCLUSIONS
We have provided analytical expressions for calculating how the bias and variance
of various TD and Monte Carlo algorithms change over iterations. The expressions
themselves seem not to be very revealing, but we have provided many illustrations
of their behavior in some example Markov chains. We have also used the analysis
to calculate one-step optimal values of the step-size a and eligibility trace A parameters. Further, we have calculated terminal mean square errors and maximal bias
reduction rates. Since all these results depend on the precise Markov chains chosen,
it is hard to make generalisations.
We have posited four general conjectures: HI) for constant A, the larger a, the
larger the terminal MSE; H2) the larger a or A (except for A very close to I), the
faster the convergence to the asymptotic MSE, provided that this is finite; H3) the
smaller A, the smaller the range of a for which the terminal MSE is not excessive;
H4) higher values of A are good for cases with high initial biases. The third of
these is somewhat surprising, because the effective value of the step-size is really
a/(1 - A). However, the lower A, the more the value of a state is based on the
value estimates for nearby states. We conjecture that with small A, large a can
quickly lead to high correlation in the value estimates of nearby states and result
in runaway variance updates.
Two main lines of evidence suggest that using values of A other than 1 (Le., using a
temporal difference rather than a Monte-Carlo algorithm) can be beneficial. First,
the greedy value of A chosen to minimise the MSE at the end of the step (whilst
using the associated greedy a) remains away from 1 (see Figure 3). Second, the
eigenvalue analysis of B S showed that the largest value of a that can be used is
higher for A < 1 (also the asymptotic speed with which the bias can be guaranteed
to decrease fastest is higher for A < 1).
Although in this paper we have only discussed results for the standard TD(A) algorithm (called Accumulate), we have also analysed Replace TD(A) of Singh &
Sutton (1996) and various others. This analysis clearly provides only an early step
to understanding the course of learning for TD algorithms, and has focused exclusively on prediction rather than control. The analytical expressions for MSE might
lend themselves to general conclusions over whole classes of Markov chains, and our
graphs also point to interesting unexplained phenomena, such as the apparent long
tails in Figure 1c and the convergence of greedy values of A in Figure 3. Stronger
analyses such as those providing large deviation rates would be desirable.
References
Barto, A.G. &. Duff, M. (1994). Monte Carlo matrix inversion and reinforcement learning.
NIPS 6, pp 687-694.
Singh, S.P. &. Dayan, P. (1996). Analytical mean squared error curves in temporal difference learning. Machine Learning, submitted.
Singh, S.P. &. Sutton, R.S. (1996). Reinforcement learning with replacing eligibility traces.
Machine Learning, to appear.
Sutton, R.S. (1988). Learning to predict by the methods of temporal difference. Machine
Learning, 3, pp 9-44.
Watkins, C.J.C.H. (1989). Learning from Delayed Rewards. PhD Thesis. University of
Cambridge, England.
312 | 1,285 | Spectroscopic Detection of Cervical
Pre-Cancer through Radial Basis
Function Networks
Kagan Tumer
kagan@pine.ece.utexas.edu
Dept. of Electrical and Computer Engr.
The University of Texas at Austin,
Rebecca Richards-Kortum
kortum@mail.utexas.edu
Biomedical Engineering Program
The University of Texas at Austin
Nirmala Ramanujam
nimmi@ccwf.cc.utexas.edu
Biomedical Engineering Program
The University of Texas at Austin
Joydeep Ghosh
ghosh@ece.utexas.edu
Dept. of Electrical and Computer Engr.
The University of Texas at Austin
Abstract
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy,
fail to achieve a concurrently high sensitivity and specificity. In
vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in pre-cancerous tissue. RBF ensemble
algorithms based on such spectra provide automated, and near realtime implementation of pre-cancer detection in the hands of nonexperts. The results are more reliable, direct and accurate than
those achieved by either human experts or multivariate statistical
algorithms.
1
Introduction
Cervical carcinoma is the second most common cancer in women worldwide, exceeded only by breast cancer (Ramanujam et al., 1996). The mortality related to
cervical cancer can be reduced if this disease is detected at the pre-cancerous state,
known as squamous intraepitheliallesion (SIL). Currently, a Pap smear is used to
screen for cervical cancer (Kurman et al., 1994). In a Pap test, a large number of
cells obtained by scraping the cervical epithelium are smeared onto a slide which
is then fixed and stained for cytologic examination. The Pap smear is unable to
achieve a concurrently high sensitivityl and high specificity2 due to both sampling
and reading errors (Fahey et al., 1995). Furthermore, reading Pap smears is extremely labor intensive and requires highly trained professionals. A patient with
a Pap smear interpreted as indicating the presence of SIL is followed up by a diagnostic procedure called colposcopy. Since this procedure involves biopsy, which
requires histologic evaluation, diagnosis is not immediate.
In vivo fluorescence spectroscopy is a technique which has the capability to quickly,
non-invasively and quantitatively probe the biochemical and morphological changes
that occur as tissue becomes neoplastic. The measured spectral information can be
correlated to tissue histo-pathology, the current "gold standard" to develop clinically
effective screening and diagnostic algorithms. These mathematical algorithms can
be implemented in software thereby, enabling automated, fast, non-invasive and
accurate pre-cancer screening and diagnosis in hands of non-experts.
A screening and diagnostic technique for human cervical pre-cancer based on laser
induced fluorescence spectroscopy has been developed recently (Ramanujam et al.,
1996). Screening and diagnosis was achieved using a multivariate statistical algorithm (MSA) based on principal component analysis and logistic discrimination of
tissue spectra acquired in vivo. Furthermore, we designed Radial Basis Function
(RBF) network ensembles to improve the accuracy of the multivariate statistical
algorithm, and to simplify the decision making process. Section 2 presents the data
collection/processing techniques. In Section 3, we discuss the MSA, and describe
the neural network based methods. Section 4 contains the experimental results and
compares the neural network results to both the results of the MSA and to current
clinical detection methods. A discussion of the results is given in Section 5.
2
Data Collection and Processing
A portable fluorimeter consisting of two nitrogen pumped-dye lasers, a fiber-optic
probe and a polychromator coupled to an intensified diode array controlled by an
optical multi-channel analyzer was utilized to measure fluorescence spectra from the
cervix in vivo at three excitation wavelengths: 337, 380 and 460 nm (Ramanujam
et al., 1996). Tissue biopsies were obtained only from abnormal sites identified by
colposcopy and subsequently analyzed by the probe to comply with routine patient
care procedure. Hemotoxylin and eosin stained sections of each biopsy specimen
were evaluated by a panel of four board certified pathologists and a consensus
diagnosis was established using the Bethesda classification system. Samples were
classified as normal squamous (NS), normal columnar (NC), low grade (LG) SIL
and high grade (HG) SIL. Table 1 provides the number of samples in the training
(calibration) and test sets. Based on this data set, a clinically useful algorithm
needs to discriminate SILs from the normal tissue types.
Figure 1 illustrates average fluorescence spectra per site acquired from cervical sites
at 337 nm excitation from a typical patient. Evaluation of the spectra at 337 nm ex-
1 Sensitivity is the correct classification percentage on the pre-cancerous tissue samples.
2 Specificity is the correct classification percentage on normal tissue samples.
Table 1: Histo-pathologic classification of samples.
Histo-pathology | Training Set          | Test Set
Normal          | 107 (SN: 94; SC: 13)  | 108 (SN: 94; SC: 14)
SIL             | 58 (LG: 23; HG: 35)   | 59 (LG: 24; HG: 35)
citation highlights one of the classification difficulties, namely that the fluorescence
intensity of SILs (LG and HG) is less than that of the corresponding normal squamous tissue and greater than that of the corresponding normal columnar tissue over
the entire emission spectrum 3 . Fluorescence spectra at all three excitation wavelengths comprise of a total of 161 excitation-emission wavelengths pairs. However,
there is a significant cost penalty for using all 161 values. To alleviate this concern,
a more cost-effective fluorescence imaging system was developed, using component
loadings calculated from principal component analysis. Thus, the number of required fluorescence excitation-emission wavelength pairs were reduced from 161 to
13 with a minimal drop in classification accuracy (Ramanujam et al., 1996).
Figure 1: Fluorescence spectra from a typical patient at 337 nm excitation.
3
3.1
Algorithm Development
Multivariate Statistical Algorithms
The multivariate statistical algorithm development described in (Ramanujam et al.,
1996) consists of the following five steps: (1) pre-processing to reduce inter-patient
and intra-patient variation of spectra from a tissue type, (2) dimension reduction
of the pre-processed tissue spectra using Principal Component Analysis (PCA),
(3) selection of diagnostically relevant principal components, (4) development of a
classification algorithm based on logistic discrimination, and finally (5) retrospective
and prospective evaluation of the algorithm's accuracy on a training (calibration)
and test (prediction) set, respectively. Discrimination between SILs and the two
normal tissue types could not be achieved effectively using MSA. Therefore two
3 Spectral features observed in Figure 1 are representative of those measured at 380 nm
and 460 nm excitation (not shown here).
constituent algorithms were developed: algorithm (1), to discriminate between SILs
and normal squamous tissues, and algorithm (2), to discriminate between SILs and
normal columnar tissues (Ramanujam et al., 1996).
3.2
Algorithms based on Neural Networks
The second stage of algorithm development consists of evaluating the applicability of
neural networks to this problem. Initially, both Multi-Layered Perceptrons (MLPs)
and Radial Basis function (RBF) networks were considered. However, MLPs failed
to improve upon the MSA results for both algorithms (1) and (2), and frequently
converged to spurious solutions. Therefore, our study focuses on RBF networks and
RBF network ensembles.
Radial Basis Function Networks: The first step in applying RBF networks to
this problem consisted of retracing the two-step process outlined for the multivariate
statistical algorithm. For constituent algorithm (1) the kernels were initialized using
a k-means clustering algorithm on the training set containing NS tissue samples
and SILs. The RBF networks had 10 kernels, whose locations and spreads were
adjusted during training. For constituent algorithm (2), we selected 10 kernels, half
of which were fixed to patterns from the columnar normal class, while the other half
were initialized using a k-means algorithm. Neither the kernel locations nor their
spreads were adjusted during training. This process was adopted to rectify the large
discrepancy between the samples from each category (13 for columnar normal vs.
58 for SILs). For each algorithm, the training time was estimated by maximizing
the performance on one validation set. Once the stopping time was established, 20
cases were run for each algorithm4.
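A minimal sketch of the initialization and hidden-layer computation described above — our own numpy illustration, not the code used in the study. The kernel widths here are an assumption on our part: they are set from inter-center distances, which is one common heuristic.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Very small k-means for placing RBF kernel centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(0)
    return centers

def rbf_activations(X, centers, widths):
    """Gaussian hidden-unit responses for each input spectrum."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

# e.g. 10 kernels initialized from the 13-dimensional training spectra:
#   centers = kmeans(X_train, 10)
#   widths  = set from distances between centers (assumed heuristic)
```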
Linear and Order statistics Combiners: There were significant variations
among different runs of the RBF networks for all three algorithms. Therefore,
selecting the "best" classifier was not the ideal choice. First, the definition of
"best" depends on the selection of the validation set, making it difficult to ascertain
whether one network will outperform all others given a different test set, as the validation sets are small. Second, selecting only one classifier discards a large amount
of potentially relevant information. In order to use all the available data, and to
increase both the performance and the reliability of the methods, the outputs of
RBF networks were pooled before a classification decision was made.
The concept of combining classifier outputs5 has been explored in a multitude of
articles (Hansen and Salamon, 1990; Wolpert, 1992). In this article we use the
median combiner, which belongs to the class of order statistics combiners introduced
in (Tumer and Ghosh, 1995), and the averaging combiner, which performs an arithmetic average of the corresponding outputs.
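Both combiners act element-wise on the stacked outputs of the individual networks before the class decision is taken. A small sketch (ours):

```python
import numpy as np

def combine(outputs, rule="median"):
    """outputs: array of shape (n_networks, n_samples, n_classes)."""
    pooled = np.median(outputs, axis=0) if rule == "median" else np.mean(outputs, axis=0)
    return pooled.argmax(axis=1)   # combined class decision per sample
```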
4
Results
Two-step algorithm: The ensemble results reported are based on pooling 20 different runs of RBF networks, initialized and trained as described in the previous
section. This procedure was repeated 10 times to ascertain the reliability of the
4 Each run has a different initialization set of kernels/spreads/weights.
5 An extensive bibliography is available in (Tumer and Ghosh, 1996).
result and to obtain the standard deviations. For an application such as pre-cancer
detection, the cost of a misclassification varies greatly from one class to another.
Erroneously labeling a healthy tissue as pre-cancerous can be corrected when further tests are performed. Labeling a pre-cancerous tissue as healthy however, can
lead to disastrous consequences. Therefore, for algorithm (1), we have increased the
cost of a misclassified SIL until the sensitivity6 reached a satisfactory level. The
sensitivity and specificity values for constituent algorithm (1) based on both MSA
and RBF ensembles are provided in Table 2. Table 3 presents sensitivity and specificity values for constituent algorithm (2) obtained from MSA and RBF ensembles7 .
For both algorithms (1) and (2), the RBF based combiners provide higher specificity than the MSA. The median combiner provides results similar to those of the
average combiner, except for algorithm (2) where it provides better specificity. In
order to obtain the final discrimination between normal tissue and SILs, constituent
algorithms (1) and (2) are used sequentially, and the results are reported in Table 4.
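The sketch below shows how sensitivity and specificity are computed from predictions, and one plausible way to fold a higher misclassification cost for SILs into a squared-error training loss; the loss form and function names are illustrative assumptions (the factor of three echoes footnote 6, but is otherwise arbitrary).

import numpy as np

def sensitivity_specificity(y_true, y_pred, positive=1):
    """Sensitivity = TP/(TP+FN) on the SIL class; specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == positive) & (y_pred == positive))
    fn = np.sum((y_true == positive) & (y_pred != positive))
    tn = np.sum((y_true != positive) & (y_pred != positive))
    fp = np.sum((y_true != positive) & (y_pred == positive))
    return tp / (tp + fn), tn / (tn + fp)

def weighted_squared_error(targets, outputs, is_sil, sil_cost=3.0):
    """Squared error in which errors on SIL patterns are weighted
    sil_cost times more heavily than errors on normal patterns."""
    w = np.where(is_sil, sil_cost, 1.0)
    return np.sum(w * (targets - outputs) ** 2)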
Table 2: Accuracy of constituent algorithm (1) for differentiating SILs and normal
squamous tissues, using MSA and RBF ensembles.
Algorithm | Specificity | Sensitivity
MSA       | 63%         | 90%
RBF-ave   | 66% ±1%     | 90% ±0%
RBF-med   | 66% ±1%     | 90% ±1%
Table 3: Accuracy of constituent algorithm (2) for differentiating SILs and normal
columnar tissues, using MSA and RBF ensembles.
Algorithm | Specificity | Sensitivity
MSA       | 36%         | 97%
RBF-ave   | 37% ±5%     | 97% ±0%
RBF-med   | 44% ±7%     | 97% ±0%
One-step algorithm: The results presented above are based on the multi-step
algorithm specifically developed for the MSA, which could not consolidate algorithms (1) and (2) into one step. Since the ultimate goal of these two algorithms is
to separate SILs from normal tissue samples, a given pattern has to be processed
through both algorithms. In order to simplify this decision process, we designed a
one step RBF network to perform this separation. Because the pre-processing for
algorithms (1) and (2) is different8 , the input space is now 26-dimensional. We initialized 10 kernels using a k-means algorithm on a trimmed 9 version of the training
set. The kernel locations and spreads were not adjusted during training. The cost
of a misclassified SIL was set at 2.5 times the cost of a misclassified normal tissue
6 In this case, the cost of misclassifying a SIL was three times the cost of misclassifying a normal tissue sample.
7 In this case, there was no need to increase the cost of a misclassified SIL, because of the high prominence of SILs in the training set.
8 Normalization vs. normalization followed by mean scaling.
9 The trimmed set has the same number of patterns from each class. Thus, it forces each class to have a similar number of kernels. This set is used only for initializing the kernels.
sample, in order to provide the best sensitivity/specificity pair. The average and
median combiner results are obtained by pooling 20 RBF networks 10 .
Table 4: One step RBF algorithm compared to multi-step MSA and clinical methods
for differentiating SILs and normal tissue samples.
Algorithm                  | Specificity | Sensitivity
2-step MSA                 | 63%         | 83%
2-step RBF-ave             | 65% ±2%     | 87% ±1%
2-step RBF-med             | 67% ±2%     | 87% ±1%
RBF-ave                    | 67% ±.75%   | 91% ±1.5%
RBF-med                    | 65.5% ±.5%  | 91% ±1%
Pap smear (human expert)   | 68% ±21%    | 62% ±23%
Colposcopy (human expert)  | 48% ±23%    | 94% ±6%
The results of both the two-step and one-step RBF algorithms and the results of
the two-step MSA are compared to the accuracy of Pap smear screening and colposcopy in expert hands in Table 4. A comparison of one-step RBF algorithms
to the two-step RBF algorithms indicates that the one-step algorithms have similar specificities, but a moderate improvement in sensitivity relative to the two-step
algorithms. Compared to the MSA, the one-step RBF algorithms have a slightly
decreased specificity, but a substantially improved sensitivity. In addition to the
improved sensitivity, the one step RBF algorithms simplify the decision making process. A comparison between the one step RBF algorithms and Pap smear screening
indicates that the RBF algorithms have a nearly 30% improvement in sensitivity
with no compromise in specificity; when compared to colposcopy in expert hands,
the RBF ensemble algorithms maintain the sensitivity of expert colposcopists, while
improving the specificity by almost 20%. Figure 2 shows the trade-off between specificity and sensitivity for clinical methods, MSA and RBF ensembles, obtained by
changing the misclassification cost. The RBF ensembles provide better sensitivity
and higher reliability than any other method for a given specificity value.
[Figure 2 plot: sensitivity vs. (1 - specificity) for RBF-ave, RBF-med, MSA, Pap smear, and colposcopy.]
Figure 2: Trade-off between sensitivity and specificity for MSA and RBF ensembles.
For reference, Pap smear and colposcopy results from the literature are included (Fahey et al., 1995).
10 This procedure is repeated 10 times, in order to determine the standard deviation.
5
Discussion
The classification results of both the multivariate statistical algorithms and the
radial basis function network ensembles demonstrate that significant improvement
in classification accuracy can be achieved over current clinical detection modalities
using cervical tissue spectral data obtained from in vivo fluorescence spectroscopy.
The one-step RBF algorithm has the potential to significantly reduce the number
of pre-cancerous cases missed by Pap smear screening and the number of normal
tissues misdiagnosed by expert colposcopists.
The qualitative nature of current clinical detection modalities leads to a significant
variability in classification accuracy. For example, estimates of the sensitivity and
specificity of Pap smear screening have been shown to range from 11-99% and 14-97%, respectively (Fahey et al., 1995). This limitation can be addressed by the
RBF network ensembles which demonstrate a significantly smaller variability in
classification accuracy therefore enabling more reliable classification. In addition
to demonstrating a superior sensitivity, the RBF ensembles simplify the decision
making process of the two-step algorithms based on MSA into a single step that
discriminates between SILs and normal tissues. We note that for the given data
set, both MSA and MLP were unable to provide satisfactory solutions in one step.
The one-step algorithm development process can be readily implemented in software, enabling automated detection of cervical pre-cancer. It provides near real
time implementation of pre-cancer detection in the hands of non-experts, and can
lead to wide-scale implementation of screening and diagnosis and more effective
patient management in the prevention of cervical cancer. The success of this application will represent an important step forward in both medical laser spectroscopy
and gynecologic oncology.
Acknowledgements: This research was supported in part by NSF grant ECS 9307632,
AFOSR contract F49620-93-1-0307, and Lifespex, Inc.
References
Fahey, M. T., Irwig, L., and Macaskill, P. (1995). Meta-analysis of pap test accuracy.
American Journal of Epidemiology, 141(7):680-689.
Hansen, L. K. and Salamon, P. (1990). Neural network ensembles. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 12(10):993-1000.
Kurman, R. J ., Henson, D. E., Herbst, A. L., Noller, K. L., and Schiffman, M. H. (1994).
Interim guidelines of management of abnormal cervical cytology. Journal of American
Medical Association, 271:1866-1869.
Ramanujam, N., Mitchell, M. F., Mahadevan, A., Thomsen, S., Malpica, A., Wright,
T., Atkinson, N., and Richards-Kortum, R. R. (1996). Cervical pre-cancer detection
using a multivariate statistical algorithm based on fluorescence spectra at multiple
excitation wavelengths. Photochemistry and Photobiology, 64(4):720-735.
Turner, K. and Ghosh, J. (1995). Order statistics combiners for neural classifiers. In
Proceedings of the World Congress on Neural Networks, pages 1:31-34, Washington
D.C. INNS Press.
Turner, K. and Ghosh, J. (1996). Error correlation and error reduction in ensemble classifiers. Connection Science. (to appear).
Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5:241-259.
313 | 1,286 | Analog VLSI Circuits for
Attention-Based, Visual Tracking
Timothy K. Horiuchi
Computation and Neural Systems
California Institute of Technology
Pasadena, CA 91125
timmer@klab.caltech.edu
Tonia G. Morris
Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA, 30332-0250
tmorris@eecom.gatech.edu
Christof Koch
Computation and Neural Systems
California Institute of Technology
Pasadena, CA 91125
Stephen P. DeWeerth
Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA, 30332-0250
Abstract
A one-dimensional visual tracking chip has been implemented using neuromorphic, analog VLSI techniques to model selective visual
attention in the control of saccadic and smooth pursuit eye movements. The chip incorporates focal-plane processing to compute
image saliency and a winner-take-all circuit to select a feature for
tracking. The target position and direction of motion are reported
as the target moves across the array. We demonstrate its functionality in a closed-loop system which performs saccadic and smooth
pursuit tracking movements using a one-dimensional mechanical
eye.
1
Introduction
Tracking a moving object on a cluttered background is a difficult task. When more
than one target is in the field of view, a decision must be made to determine which
target to track and what its movement characteristics are. If motion information is
being computed in parallel across the visual field, as is believed to occur in the middle temporal area (MT) of primates, some mechanism must exist to preferentially
extract the activity of the neurons associated with the target at the appropriate
[Figure 1 block diagram: photoreceptors feed temporal- and spatial-derivative stages, which drive the direction-of-motion circuit and a hysteretic winner-take-all; the selected pixel drives the target-position output and the saccade trigger.]
Figure 1: System Block Diagram: P = adaptive photoreceptor circuit, TD = temporal derivative circuit, SD = spatial derivative, DM = direction of motion, HYS
WTA = hysteretic winner-take-all, P2V = position to voltage, ST = saccade trigger. The TD and SD are summed to form the saliency map from which the WTA
finds the maximum. The output of the WTA steers the direction-of-motion information onto a common output line. Saccades are triggered when the selected pixel
is outside a specified window located at the center of the array.
time. Selective visual attention is believed to be this mechanism.
In recent years, many studies have indicated that selective visual attention is involved in the generation of saccadic [10] [7] [12] [15] and smooth pursuit eye movements [9] [6] [16]. These studies have shown that attentional enhancement occurs
at the target location just before a saccade as well as at the target location during
smooth pursuit. In the case of saccades, attempts to dissociate attention from the
target location has been shown to disrupt the accuracy or latency.
Koch and Ullman [11] have proposed a model for attentional selection based on the
formation of a saliency map by combining the activity of elementary feature maps
in a topographic manner. The most salient locations are where activity from many
different feature maps coincide or at locations where activity from a preferentiallyweighted feature map, such as temporal change, occurs. A winner-take-all (WTA)
mechanism, acting as the center of the attentional "spotlight," selects the location
with the highest saliency.
Previous work on analog VLSI-based, neuromorphic, hardware simulation of visual
tracking include a one-dimensional, saccadic eye movement system triggered by
temporal change [8] and a two-dimensional, smooth pursuit system driven by visual
motion detectors [5]. Neither system has a mechanism for figure-ground discrimination of the target. In addition to this overt form of attentional shifting, covert
[Figure 2 traces: photoreceptor voltage (Vphoto), spatial derivative, temporal derivative, and direction of motion, plotted against pixel position (0-22).]
Figure 2: Example stimulus - Traces from top to bottom: Photoreceptor voltage,
absolute value of the spatial derivative, absolute value of the temporal derivative,
and direction-of-motion. The stimulus is a high-contrast, expanding bar, which
provides two edges moving in opposite directions. The signed, temporal and spatial
derivative signals are used to compute the direction-of-motion shown in the bottom
trace.
attentional shifts have been modeled using analog VLSI circuits [4] [14], based on
the Koch and Ullman model. These circuits demonstrate the use of delayed, transient inhibition at the selected location to model covert attentional scanning. In
this paper we describe an analog VLSI implementation of an attention-based , visual
tracking architecture which combines much of this previous work. Using a hardware
model of the primate oculomotor system [8], we then demonstrate the use of the
tracking chip for both saccadic and smooth pursuit eye movements.
2
System Description
The computational goal of this chip is the selection of a target, based on a given
measure of saliency, and the extraction of its retinal position and direction of motion. Figure 1 shows a block diagram of the computation. The first few stages of
processing compute simple feature maps which drive the WTA-based selection of
a target to track. The circuits at the selected location signal their position and
the computed direction-of-motion. This information is used by an external saccadic
and smooth pursuit eye movement system to drive the eye. The saccadic system
uses the position information to foveate the target and the smooth pursuit system
uses the motion information to match the speed of the target .
Adaptive photoreceptors [2] (at the top of Figure 1) transduce the incoming pattern
of light into an array of voltages. The temporal (TD) and spatial (SD) derivatives
are computed from these voltages and are used to generate the saliency map and
direction of motion. Figure 2 shows an example stimulus and the computed features.
The saliency map is formed by summing the absolute-value of each derivative (ITDI
+ ISD I) and the direction-of-motion (DM) signal is a normalized product of the two
[Figure 3 traces: direction-of-motion output (volts, zero-motion level at 2.9 V) and target position vs. time (seconds) for an oscillating target.]
Figure 3: Extracting the target's direction of motion: The WTA output voltage is
used to switch the DM current onto a common current sensing line. The output
of this signal is seen in the top trace. The zero-motion level is indicated by the
flat line shown at 2.9 volts. The lower trace shows the target's position from the
position-to-voltage encoding circuits. The target's position and direction of motion
are used to drive saccades and smooth pursuit eye movements during tracking .
derivatives, DM = (TD · SD) / (|TD| + |SD|).
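A discrete-time software analogue of this feature stage (the chip itself operates in continuous-time analog) might look like the following; the variable names and the small epsilon added to the denominator are assumptions.

import numpy as np

def motion_features(v_now, v_prev, w_t=1.0, w_s=1.0):
    """v_now, v_prev: photoreceptor voltages at consecutive time steps (1-D arrays)."""
    td = v_now - v_prev                                 # temporal derivative per pixel
    sd = np.gradient(v_now)                             # spatial derivative per pixel
    saliency = w_t * np.abs(td) + w_s * np.abs(sd)      # input to the WTA
    dm = (td * sd) / (np.abs(td) + np.abs(sd) + 1e-9)   # direction-of-motion signal
    return saliency, dm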
In the saliency map , the temporal and spatial derivatives can be differentially
weighted to emphasize moving targets over stationary targets. The saliency map
provides the input to a winner-take-all (WTA) computation which finds the maximum in this map. Spatially-distributed hysteresis is incorporated in this winnertake-all computation [4] by adding a fixed current to the winner's input node and
its neighbors. This distributed hysteresis is motivated by the following two ideas:
1) once a target has been selected it should continue to be tracked even if another
equally interesting target comes along, and 2) targets will typically move continuously across the array. Hysteresis reduces oscillation of the winning status in the
case where two or more inputs are very close to the winning input level and the local distribution of hysteresis allows the winning status to freely shift to neighboring
pixels rather than to another location further away.
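A software sketch of the winner-take-all with spatially distributed hysteresis could be written as below, with the bonus current and neighbourhood size as assumed parameters:

import numpy as np

def wta_with_hysteresis(saliency, prev_winner=None, bonus=0.2, spread=1):
    """Winner-take-all with distributed hysteresis: a fixed bonus is added
    to the previous winner and its neighbours, so the selection tends to
    stick to, or slide along with, the tracked target."""
    s = saliency.astype(float).copy()
    if prev_winner is not None:
        lo = max(0, prev_winner - spread)
        hi = min(len(s), prev_winner + spread + 1)
        s[lo:hi] += bonus
    return int(np.argmax(s))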
The WTA output signal is used to drive three different circuits: the position-tovoltage (P2V) circuit [3], the DM-current-steering circuit (see Figure 3), and the
saccadic triggering (ST) circuit. The only circuits that are active are those at the
winning pixel locations. The P2V circuit drives the common position output line
to a voltage representing its position in the array, the DM-steering circuit puts
the local DM circuit's current onto the common motion output line, and the ST
circuit drives a position-specific current onto a common line to be compared against
an externally-set threshold value. By creating a "V" shaped profile of ST currents
centered on the array, winning pixels away from the center will exceed the threshold
"2
o 2.7
E
(/)
o
Q.
a; 2.6
.~
Q.
-; 2.5
N
'0
>
2.4
2.1+----+----+---~----~--_4----~--~~--~
o
5
10
15
20
25
30
35
40
Time (msec)
Figure 4: Position vs. time traces for the passage of a strong edge across the array
at five different speeds. The speeds shown correspond to 327, 548, 1042, 1578,2294
pixels/sec.
and send saccade requests off-chip. Figure 3 shows the DM and P2V outputs for
an oscillating target.
To test the speed of the tracking circuit, a single edge was passed in front of the array
at varying speeds. Figure 4 shows some of these results. The power consumption
of the chip (23 pixels and support circuits, not including the pads) varies between
0.35 mW and 0.60 mW at a supply voltage of 5 volts. This measurement was taken
with no clock signal driving the scanners since this is not essential to the operation
of the circuit.
3
System Integration
The tracking chip has been mounted on a neuromorphic, hardware model of the
primate oculomotor system [8] and is being used to track moving visual targets.
The visual target is mounted to a swinging apparatus to generate an oscillating
motion. Figure 5 shows the behavior of the system when the retinal target position
is used to drive re-centering saccades and the target direction of motion is used
to drive smooth pursuit. Saccades are triggered when the selected pixel is outside a
specified window centered on the array and the input to the smooth pursuit system is
suppressed during saccades. The smooth pursuit system mathematically integrates
retinal motion to match the eye velocity to the target velocity.
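As a rough software analogue of this closed loop (not the actual oculomotor hardware), one control step might be written as follows; the window size, gain and time step are invented values.

def update_eye(eye_pos, eye_vel, target_pix, dm, center=11, window=3,
               pursuit_gain=0.05, dt=0.001):
    """One control step for the 1-D mechanical eye.
    target_pix: winning pixel reported by the chip (0..22)
    dm: direction-of-motion signal at the winning pixel"""
    if abs(target_pix - center) > window:      # saccade trigger
        eye_pos += (target_pix - center)       # re-center in one jump
        eye_vel = 0.0                          # pursuit input suppressed
    else:
        eye_vel += pursuit_gain * dm           # integrate retinal motion
        eye_pos += eye_vel * dt
    return eye_pos, eye_vel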
4
Acknowledgements
T. H. is supported by an Office of Naval Research AASERT grant and by the NSF
Center for Neuromorphic Systems Engineering at Caltech. T . M. is supported by
the Georgia Tech Research Institute.
[Figure 5 traces: eye and target position vs. time (seconds) for the analog VLSI system (left) and a human subject (right, from Collewijn and Tamminga, 1984).]
Figure 5: Saccades and Smooth Pursuit: In this example, a swinging target is
tracked over a few cycles. Re-centering saccades are triggered when the target
leaves a specified window centered on the array. For comparison, on the right, we
show human data for the same task [1].
References
[1] H. Collewijn and E. Tamminga, "Human smooth and saccadic eye movements
during voluntary pursuit of different target motions on different backgrounds"
J. Physiol., Vol. 351, pp. 217-250. (1984)
[2] T. Delbriick, Ph.D. Thesis, Computation and Neural Systems Program California Institute of Technology (1993)
[3] S. P. DeWeerth, "Analog VLSI Circuits for Stimulus Localization and Centroid
Computation" Inti. 1. Compo Vis. 8(3), pp. 191-202. (1992)
[4] S. P. DeWeerth and T. G. Morris, "CMOS Current Mode Winner-Take-All
with Distributed Hysteresis" Electronics Letters, Vol. 31, No. 13, pp. 1051-1053.
(1995)
[5] R. Etienne-Cummings, J. Van der Spiegel, and P. Mueller "A Visual Smooth
Pursuit Tracking Chip" Advances in Neural Information Processing Systems 8
(1996)
[6] V. Ferrara and S. Lisberger, "Attention and Target Selection for Smooth Pursuit
Eye Movements" J. Neurosci., 15(11), pp. 7472-7484, (1995)
[7] J. Hoffman and B. Subramaniam, "The Role of Visual Attention in Saccadic
Eye Movements" Perception and Psychophysics, 57(6), pp. 787-795, (1995)
[8] T . Horiuchi, B. Bishofberger, and C. Koch, "An Analog VLSI Saccadic System"
Advances in Neural Information Processing Systems 6, Morgan Kaufmann, pp.
582-589, (1994)
[9] B. Khurana, and E . Kowler, "Shared Attentional Control of Smooth Eye Movement and Perception" Vision Research, 27(9), pp. 1603-1618, (1987)
[Figure 6 traces: target and eye position (degrees) vs. time (seconds) for the analog VLSI system (left, with a 100 msec saccadic delay added) and a monkey (right, from Lisberger et al., 1987).]
Figure 6: Step-Ramp Experiment: In this experiment, the target jumps from the
fixation point to a new location and begins moving with constant velocity. On the
left, the analog VLSI system tracks the target. For comparison, on the right, we
show data from a monkey performing the same task [13].
[10] E. Kowler, E. Anderson, B. Dosher, E. Blaser, "The Role of Attention in the
Programming of Saccades" Vision Research, 35(13), pp. 1897-1916, (1995)
[11] C. Koch and S. Ullman, "Shifts in selective visual attention: towards the underlying neural circuitry" Human Neurobiology, 4:219-227, (1985)
[12] R. Rafal, P. Calabresi, C. Brennan, and T. Scioltio, "Saccade Preparation
Inhibits Reorienting to Recently Attended Locations" 1. Exp. Psych: Hum.
Percep. and Perf., 15, pp. 673-685, (1989)
[13] S. G. Lisberger, E. J. Morris, and L. Tychsen, "Visual motion processing and
sensory-motor integration for smooth pursuit eye movements." In Ann. Rev.
Neurosci., Cowan et al., editors. Vol. 10, pp. 97-129, (1987)
[14] T. G. Morris and S. P. DeWeerth, "Analog VLSI Circuits for Covert Attentional Shifts" Proc. 5th Inti. Conf. on Microelectronics for Neural Networks and
Fuzzy Systems - MicroNeur096, Feb 12-14, 1996. Lausanne, Switzerland, IEEE
Computer Society Press, Los Alamitos, CA, pp. 30-37, (1996)
[15] S. Shimojo, Y. Tanaka, O. Hikosaka, and S. Miyauchi, "Vision, Attention,
and Action - inhibition and facilitation in sensory motor links revealed by the
reaction time and the line-motion." In Attention and Performance XVI, T. Inui
& J. L. McClelland, editors. MIT Press, (1995)
[16] W. J. Tam and H. Ono, "Fixation Disengagement and Eye-Movement Latency"
Perception and Psychophysics, 56(3) pp. 251-260, (1994)
314 | 1,287 | VLSI Implementation of Cortical Visual Motion
Detection Using an Analog Neural Computer
Ralph Etienne-Cummings
Electrical Engineering,
Southern Illinois University,
Carbondale, IL 62901
Naomi Takahashi
The Moore School,
University of Pennsylvania,
Philadelphia, PA 19104
Jan Van der Spiegel
The Moore School,
University of Pennsylvania,
Philadelphia, PA 19104
Alyssa Apsel
Electrical Engineering,
California Inst. Technology,
Pasadena, CA 91125
Paul Mueller
Corticon Inc.,
3624 Market Str,
Philadelphia, PA 19104
Abstract
Two dimensional image motion detection neural networks have been
implemented using a general purpose analog neural computer. The
neural circuits perform spatiotemporal feature extraction based on the
cortical motion detection model of Adelson and Bergen. The neural
computer provides the neurons, synapses and synaptic time-constants
required to realize the model in VLSI hardware. Results show that
visual motion estimation can be implemented with simple sum-and-threshold neural hardware with temporal computational capabilities.
The neural circuits compute general 2D visual motion in real-time.
1 INTRODUCTION
Visual motion estimation is an area where spatiotemporal computation is of fundamental
importance. Each distinct motion vector traces a unique locus in the space-time domain.
Hence, the problem of visual motion estimation reduces to a feature extraction task, with
each feature extractor tuned to a particular motion vector. Since neural networks are
particularly efficient feature extractors, they can be used to implement these visual motion
estimators. Such neural circuits have been recorded in area MT of macaque monkeys,
where cells are sensitive and selective to 2D velocity (Maunsell and Van Essen, 1983).
In this paper, a hardware implementation of 2D visual motion estimation with
spatiotemporal feature extractors is presented. A silicon retina with parallel, continuous
time edge detection capabilities is the front-end of the system. Motion detection neural
networks are implemented on a general purpose analog neural computer which is
composed of programmable analog neurons, synapses, axon/dendrites and synaptic time-
constants (Van der Spiegel et al., 1994). The additional computational freedom introduced
by the synaptic time-constants, which are unique to this neural computer, is required to
realize the spatiotemporal motion estimators.
The motion detection neural circuits are
based on the early 1D model of Adelson and Bergen and recent 2D models of David Heeger
(Adelson and Bergen, 1985; Heeger et al., 1996). However, since the neurons only
computed delayed weighted sum-and-threshold functions, the models must be modified.
The original models require division for intensity normalization and a quadratic nonlinearity to extract spatiotemporal energy. In our model, normalization is performed by
the silicon retina with a large contrast sensitivity (all edges are normalized to the same
output), and rectification replaces the quadratic non-linearity. Despite these modifications,
we show that the model works correctly. The visual motion vector is implicitly coded as
a distribution of neural activity.
Due to its computational complexity, this method of image motion estimation has not
been attempted in discrete or VLSI hardware. The general purpose analog neural computer
offers a unique avenue for implementing and investigating this method of visual motion
estimation. The analysis, implementation and performance of spatiotemporal visual
motion estimators are discussed.
2 SPATIOTEMPORAL FEATURE EXTRACTION
The technique of estimating motion with spatiotemporal feature extraction was proposed
by Adelson and Bergen in 1985 (Adelson and Bergen, 1985). It emerged out of the
observation that a point moving with constant velocity traces a line in the space-time
domain, shown in figure la. The slope of the line is proportional to the velocity of the
point. Hence, the velocity is represented as the orientation of the line. Spatiotemporal
orientation detection units, similar to those proposed by Hubel and Wiesel for spatial
orientation detection, can be used for detecting motion (Hubel and Wiesel, 1962). In the
frequency domain, the motion of the point is also a line where the slope of the line is the
velocity of the point. Hence orientation detection filters, shown as circles in figure lb,
can be used to measure the motion of the point relative to their tuned velocity. A
population of these tuned filters, figure ic, can be used to measure general image motion.
[Figure 1 panels (a)-(c): space-time and frequency-domain diagrams with x-direction and ±velocity axes.]
Figure 1: (a) 1D Motion as Orientation in the Space-Time Domain.
(b) and (c) Motion detection with Oriented Spatiotemporal Filters.
If the point exhibits 2D motion, the problem is substantially more complicated, as
observed by David Heeger (1987). A point executing 2D motion spans a plane in the
frequency domain. The spatiotemporal orientation filter tuned to this motion must also
span a plane (Heeger et al., 1987, 1996). Figure 2a shows a filter tuned to 2D motion.
Unfortunately, this torus shaped filter is difficult to realize without special mathematical
tools. Furthermore, to create a general set of filters for measuring general 2D motion, the
filters must cover all the spatiotemporal frequencies and all the possible velocities of the
stimuli. The latter requirement is particularly difficult to obtain since there are two
degrees of freedom (vx, vy) to cover.
Figure 2: (a) 2D Motion Detection with 2D Oriented Spatiotemporal
Filters. (b) General 2D Motion Detection with 2 Sets of 1D Filters.
To circumvent these problems, our model decomposes the image into two orthogonal
images, where the perpendicular spatial variation within the receptive field of the filters
are eliminated using spatial smoothing. Subsequently, 1D spatiotemporal motion
detection is used on each image to measure the velocity of the stimuli. This technique
places the motion detection filters, shown as the circles in figure 2b, only in the ωx-ωt
and ωy-ωt planes to extract 2D motion, thereby drastically reducing the complexity of the
2D motion detection model from O(n²) to O(2n).
2.1
CONSTRUCTING THE SPATIOTEMPORAL MOTION FILTERS
The filter tuned to a velocity vox (voy) is centered at ωox (ωoy) and ωot, where vox = ωot/ωox
(voy = ωot/ωoy). To create the filters, quadrature pairs (i.e. odd and even pairs) of spatial
and temporal band-pass filters centered at the appropriate spatiotemporal frequencies are
summed and differenced (Adelson and Bergen, 1985). The π/2 phase relationship between
the filters allows them to be combined such that they cancel in opposite quadrants,
leaving the desired oriented filter, as shown in figure 3a. Equation 1 shows examples of
quadrature pairs of spatial and temporal filters implemented. The coefficients of the filters
balance the area under their positive and negative lobes. The spatial filters in equation 1
have a 5 x 5 receptive field, where the sampling interval is determined by the silicon
retina. Figure 3b shows a contour plot of an oriented filter (α = 11 rad/s, δ2 = 2δ1 = 4α).
S(even) = 0.5 - 0.32 cos(ωx) - 0.18 cos(2ωx)                                  (1a)
S(odd)  = -0.66 j sin(ωx) - 0.32 j sin(2ωx)                                   (1b)
T(even) = -ωt² δ1 δ2 / [(jωt + α)(jωt + δ1)(jωt + δ2)],   α ≫ δ1, δ2          (1c)
T(odd)  =  jωt δ1 δ2 / [(jωt + α)(jωt + δ1)(jωt + δ2)],   α ≫ δ1, δ2          (1d)
Left Motion  = S(e)T(e) - S(o)T(o)   or   S(e)T(o) - S(o)T(e)                 (1e)
Right Motion = S(e)T(e) + S(o)T(o)   or   S(e)T(o) + S(o)T(e)                 (1f)
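A frequency-domain sketch of how the quadrature pairs of Equation 1 combine into left- and right-selective responses is given below; the δ1, δ2 values and the software evaluation itself are assumptions made for illustration, not the analog hardware.

import numpy as np

def spatial_pair(wx):
    s_even = 0.5 - 0.32 * np.cos(wx) - 0.18 * np.cos(2 * wx)
    s_odd = -0.66j * np.sin(wx) - 0.32j * np.sin(2 * wx)
    return s_even, s_odd

def temporal_pair(wt, a=11.0, d1=22.0, d2=44.0):
    den = (1j * wt + a) * (1j * wt + d1) * (1j * wt + d2)
    t_even = -(wt ** 2) * d1 * d2 / den
    t_odd = 1j * wt * d1 * d2 / den
    return t_even, t_odd

def oriented_responses(wx, wt):
    """Magnitudes of the left- and right-tuned oriented filters at (wx, wt)."""
    se, so = spatial_pair(wx)
    te, to = temporal_pair(wt)
    left = se * te - so * to
    right = se * te + so * to
    return np.abs(left), np.abs(right)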
To cover a wide range of velocity and stimuli, multiple filters are constructed with
various velocity, spatial and temporal frequency selectivity. Nine filters are chosen per
dimension to mosaic the ωx-ωt and ωy-ωt planes as in figure 2b. The velocity of a
stimulus is given by the weighted average of the tuned velocity of the filters, where the
weights are the magnitudes of each filter's response. All computations for 2D motion
detection based on cortical models have been realized in hardware using a large scale
general purpose analog neural computer.
[Figure 3: (a) schematic showing how the S(e)T(e)+S(o)T(o) and S(e)T(e)-S(o)T(o) combinations yield right (+vx) and left (-vx) motion selectivity; (b) contour plot in the ωx (1/mm) vs. ωt (Hz) plane for a filter with tuned velocity vx = 6.3 mm/s.]
Figure 3: (a) Constructing Oriented Spatiotemporal Filters. (b) Contour Plot of One of the Filters Implemented.
3 HARDWARE IMPLEMENTATION
3.1
GENERAL PURPOSE ANALOG NEURAL COMPUTER
The computer is intended for fast prototyping of neural network based applications. It
offers the flexibility of programming combined with the real-time performance of a
hardware system (Mueller, 1995). It is modeled after the biological nervous system, i.e.
the cerebral cortex, and consists of electronic analogs of neurons, synapses, synaptic time
constants and axon/dendrites. The hardware modules capture the functional and
computational aspects of the biological counterparts. The main features of the system are:
configurable interconnection architecture, programmable neural elements, modular and
expandable architecture, and spatiotemporal processing. These features make the network
ideal to implement a wide range of network architectures and applications.
The system, shown in part in figure 4, is constructed from three types of modules (chips):
(1) neurons, (2) synapses and (3) synaptic time constants and axon/dendrites. The neurons
have a piece-wise linear transfer function with programmable (8bit) threshold and
minimum output at threshold. The synapses are implemented as a programmable
resistance whose values are variable (8 bit) over a logarithmic range between 5 KOhm and
10 MOhm. The time constant, realized with a load-compensated transconductance
amplifier, is selectable between 0.5 ms and 1 s with a 5 bit resolution. The axon/dendrites
are implemented with an analog cross-point switch matrix. The neural computer has a
total of 1024 neurons, distributed over 64 neuron modules, with 96 synaptic inputs per
neuron, a total of 98,304 synapses, 6,656 time constants and 196,608 cross point
switches. Up to 3,072 parallel buffered analog inputs/outputs and a neuron output analog
multiplexer are available. A graphical interface software, which runs on the host
computer, allows the user to symbolically and physically configure the network and
display its behavior (Donham, 1994). Once a particular network has been loaded, the
neural network runs independently of the digital host and operates in a fully analog,
parallel and continuous time fashion.
3.2
NEURAL IMPLEMENTATION OF SPATIOTEMPORAL FILTERS
The output of the silicon retina, which transforms a gray scale image into a binary image
of edges, is presented to the neural computer to implement the oriented spatiotemporal
filters. The first and second derivatives of Gaussian functions are chosen to implement
the odd and even spatial filters, respectively. They are realized by feeding the outputs of
[Figure 4 schematic: neuron, synapse (wij), time-constant, and switch modules connected to the analog inputs and outputs.]
Figure 4: Block Diagram of the Overall Neural Network Architecture.
the retina, with appropriate weights, into a layer of neurons. Three parallel channels
with varying spatial scales are implemented for each dimension. The output of the even
(odd) spatial filter is subsequently fed to three parallel even (odd) temporal filters, which
also have varying temporal tuning. Hence, three non-oriented pairs of spatiotemporal
filters are realized for each channel. Six oriented filters are realized by summing and
differencing the non-oriented pairs. The oriented filters are rectified, and lateral inhibition
is used to accentuate the higher response. Figure 4 shows a schematic of the neural
circuitry used to implement the orientation selective filters.
The image layer of the network in figure 5 is the direct, parallel output of the silicon
retina. A 7 x 7 pixel array from the retina is decomposed into 2, 1 x 7 orthogonal linear
images, and the nine motion detection filters are implemented per image. The total
number of neurons used to implement this network is 152, the number of synapse is 548
and the number of time-constants is 108. The time-constant values ranges from 0.75 ms
to 375 ms. After the networks have been programmed into the VLSI chips of the neural
computer, the system operates in full parallel and continuous time analog mode.
Consequently, this system realizes a silicon model for biological visual image motion
measurement, starting from the retina to the visual cortex.
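A small sketch of the retina-patch decomposition described above (collapsing the 7 x 7 edge image into two orthogonal 1 x 7 images by smoothing) might look like this; the use of a simple mean is an assumption standing in for the analog smoothing.

import numpy as np

def orthogonal_projections(patch):
    """patch: 7x7 binary edge image from the silicon retina.
    Averaging across rows/columns removes the perpendicular spatial
    variation, leaving two 1x7 images for the x and y channels."""
    x_image = patch.mean(axis=0)   # collapse rows -> horizontal profile
    y_image = patch.mean(axis=1)   # collapse columns -> vertical profile
    return x_image, y_image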
Odd Spatial Filter: S(o) ∝ dG(x)/dx ∝ 2x exp(-x²)
Even Spatial Filter: S(e) ∝ d²G(x)/dx² ∝ (4x² - 2) exp(-x²)
[Figure 5 schematic: image layer feeding the velocity-selective spatiotemporal filters.]
Figure 5: Neural Network Implementation of the Oriented Spatiotemporal Filters.
4 RESULTS
The response of the spatiotemporal filters implemented with the neural computer are
shown in figure 6. The figure is obtained by sampling the output of the neurons at
1 MHz using the on-chip analog multiplexers. In figure 6a, the impulse response of the
spatial filters are shown as a point moves across their receptive field. Figure 6b shows
the outputs of the even and odd temporal filters for the moving point. At the output of
the filters, the even and odd signals from the spatial filters are no longer out of phase.
This transformation leads to constructive or destructive interference when they are
summed and differenced. When the point moves in the opposite direction, the odd filter
changes such that the outputs of the temporal filters become 180° out of phase.
Subsequent summing and differencing will have the opposite result. Figure 6c shows the
output for all nine x-velocity selective filters as a point moves with positive velocity .
[Figure 6 panels (a)-(c): sampled neuron outputs for a point crossing the array.]
Figure 6: Output of the Neural Circuits for a Moving Point: (a) Spatial Filters, (b) Temporal Filters and (c) Motion Filters.
Figure 7 shows the tuning curves for the filters tuned to x-motion. The variations in the
responses are due to variations in the analog components of the neural computer. Some
aliasing is noticeable in the tuning curves when there is a minor peak in the opposite
direction. This results from the discrete properties of the spatial filters, as seen in
[Figure 7 plot: normalized responses of the nine x-motion filters vs. stimulus speed (cm/s); the legend lists the nine tuned velocities in mm/s.]
Figure 7: Tuning Curves for the Nine X-Motion Filters.
figure 3b. Due to the lateral inhibition employed, the aliasing effects are minimal.
Similar curves are obtained for the y-motion tuned filters.
For a point moving with vx = 8.66 mm/s and vy = 5 mm/s, the outputs of the motion
filters are shown in Table 1. Computing a weighted average using equation 2 yields vxm
= 8.4 mm/s and vym = 5.14 mm/s. This result agrees with the actual motion of the point.
vm = Σi (vtuned,i · Oi) / Σi Oi                                   (2)

where Oi is the magnitude of the response of filter i.
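In software, the population estimate of equation 2 is simply a response-weighted average; the small constant in the denominator is an assumption to avoid division by zero.

import numpy as np

def estimated_velocity(tuned_velocities, responses):
    """Response-weighted average of the filters' tuned velocities
    (responses are the rectified filter outputs)."""
    tuned_velocities = np.asarray(tuned_velocities, dtype=float)
    responses = np.asarray(responses, dtype=float)
    return np.sum(tuned_velocities * responses) / (np.sum(responses) + 1e-12)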
Table 1: Filter Responses for a Point Moving at 10 mmls at 30?.
X Filters [Speed in mmls]
Tuned Speed
Response
Tuned Speed
Response
9
Y Filters [Speed in mm/s]
3.5 14 .
3.5
4. 1
3.5
0.52 0 .95 0.57 0.53 0.9
0.3 0 .7
0 .9 0.31 0.35 0.67 0.92 0 .3 0.85 0 .9 0 .54 0 .9
0.9
-27
-3.2 -13
-6
-2
25
-7 .7
4
-5
0 .0 0 .05 0.1
22
-18
7
-6
3
26
-2 .1 -25
0. 1 0.05 0.05 0.0 0 .05 0.1
0.1
9.5
5
20
7.8
3.7
15
-8
-4.1
-21
O.O~
0.1
0.3 0.05 0 .01 0 .23 0 .05 0 . 1
-7
-4
-14
-5
5 CONCLUSION
2D image motion estimation based on spatiotemporal feature extraction has been
implemented in VLSI hardware using a general purpose analog neural computer. The
neural circuits capitalize on the temporal processing capabilities of the neural computer.
The spatiotemporal feature extraction approach is based on the 1D cortical motion
detection model proposed by Adelson and Bergen, which was extended to 2D by Heeger et
al. To reduce the complexity of the model and to allow realization with simple sum-and-threshold neurons, we further modify the 2D model by placing filters only in the ωx-ωt
and ωy-ωt planes, and by replacing quadratic non-linearities with rectifiers. The
modifications do not affect the performance of the model. While this technique of image
motion detection requires too much hardware for focal plane implementation, our results
show that it is realizable when a silicon "brain," with large numbers of neurons and
synaptic time constant, is available. This is very reminiscent of the biological master.
References
E. Adelson and J. Bergen, "Spatiotemporal Energy Models for the Perception of Motion,"
J. Optical Society of America, Vol. A2, pp. 284-99, 1985
C. Donham, "Real Time Speech Recognition using a General Purpose Analog
Neurocomputer," Ph.D. Thesis, Univ. of Pennsylvania, Dept. of Electrical Engineering,
Philadelphia, PA, 1995.
D. Heeger, E. Simoncelli and J. Movshon, "Computational Models of Cortical Visual
Processing," Proc. National Academy of Science, Vol. 92, no. 2, pp. 623, 1996
D. Heeger, "Model for the Extraction of Image Flow," 1. Optical Society of America,
Vol. 4, no. 8, pp. 1455-71 , 1987
D. Hubel and T. Wiesel, "Receptive Fields, Binocular Interaction and Functional
Architecture in the Cat's Visual Cortex," J. Physiology, Vol. 160, pp. 106-154, 1962
J. Maunsell and D. Van Essen, "Functional Properties of Neurons in Middle Temporal
Visual Area of the Macaque Monkey. I. Selectivity for Stimulus Direction, Speed and
Orientation," 1. Neurophysiology , Vol. 49, no. 5, pp. 1127-47, 1983
P. Mueller, J. Van der Spiegel, D. Blackman, C. Donham and R. Etienne-Cummings, "A
Programmable Analog Neural Computer with Applications to Speech Recognition,"
Proc. Compo & Info. Sci. Symp. (CISS), J. Hopkins, May 1995.
315 | 1,288 | Minimizing Statistical Bias with Queries
David A. Cohn
Adaptive Systems Group
Harlequin, Inc.
One Cambridge Center
Cambridge, MA 02142
cohn@harlequin.com
Abstract
I describe a querying criterion that attempts to minimize the error
of a learner by minimizing its estimated squared bias. I describe
experiments with locally-weighted regression on two simple problems, and observe that this "bias-only" approach outperforms the
more common "variance-only" exploration approach, even in the
presence of noise.
1
INTRODUCTION
In recent years, there has been an explosion of interest in "active" machine learning
systems. These are learning systems that make queries, or perform experiments
to gather data that are expected to maximize performance. When compared with
"passive" learning systems, which accept given, or randomly drawn data, active
learners have demonstrated significant decreases in the amount of data required to
achieve equivalent performance. In industrial applications, where each experiment
may take days to perform and cost thousands of dollars, a method for optimally
selecting these points would offer enormous savings in time and money.
An active learning system will typically attempt to select data that will minimize
its predictive error. This error can be decomposed into bias and variance terms.
Most research in selecting optimal actions or queries has assumed that the learner
is approximately unbiased, and that to minimize learner error, variance is the only
thing to minimize (e.g. Fedorov [1972], MacKay [1992], Cohn [1996], Cohn et al.,
[1996], Paass [1995]). In practice, however, there are very few problems for which
we have unbiased learners. Frequently, bias constitutes a large portion of a learner's
error; if the learner is deterministic and the data are noise-free, then bias is the only
source of error. Note that the bias term here is a statistical bias, distinct from the
inductive bias discussed in some machine learning research [Dietterich and Kong,
1995].
In this paper I describe an algorithm which selects actions/queries designed to minimize the bias of a locally weighted regression-based learner. Empirically, "variance-minimizing" strategies which ignore bias seem to perform well, even in cases where,
strictly speaking, there is no variance to minimize. In the tasks considered in this
paper, the bias-minimizing strategy consistently outperforms variance minimization, even in the presence of noise.
1.1
BIAS AND VARIANCE
Let us begin by defining P(x, y) to be the unknown joint distribution over x and y,
and P( x) to be the known marginal distribution of x (commonly called the input
distribution). We denote the learner's output on input x, given training set D as
y(x; D). We can then write the expected error of the learner as
∫ E[(ŷ(x;D) − y(x))² | x] P(x) dx,      (1)
where E[?] denotes the expectation over P and over training sets D. The expectation
inside the integral may be decomposed as follows (Geman et al. , 1992):
E[(ŷ(x;D) − y(x))² | x] = E[(y(x) − E[y|x])²]
    + (E_D[ŷ(x;D)] − E[y|x])²
    + E_D[(ŷ(x;D) − E_D[ŷ(x;D)])²]      (2)
where Ev [.] denotes the expectation over training sets. The first term in Equation 2
is the variance of y given x - it is the noise in the distribution, and does not depend
on our learner or how the training data are chosen. The second term is the learner's
squared bias, and the third is its variance; these last two terms comprise the expected
squared error of the learner with respect to the regression function E[Ylx].
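A minimal sketch (not from the paper) of how the bias and variance terms of Equation 2 can be estimated numerically by retraining a learner on many independently drawn training sets; the target function, learner, data sizes and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):                 # underlying regression function E[y|x]
    return np.sin(3 * x)

def fit_poly(X, Y, deg=2):     # a deliberately biased learner: low-degree polynomial
    return np.polyfit(X, Y, deg)

n_sets, n_train, noise = 200, 20, 0.1
x_ref = np.linspace(-1, 1, 50)                 # reference points x
preds = np.empty((n_sets, x_ref.size))

for t in range(n_sets):                        # expectation over training sets D
    X = rng.uniform(-1, 1, n_train)
    Y = true_f(X) + noise * rng.normal(size=n_train)
    preds[t] = np.polyval(fit_poly(X, Y), x_ref)

mean_pred = preds.mean(axis=0)                 # E_D[yhat(x; D)]
bias_sq = (mean_pred - true_f(x_ref)) ** 2     # squared-bias term of Eq. 2
variance = preds.var(axis=0)                   # variance term of Eq. 2
print(bias_sq.mean(), variance.mean())
```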
Most research in active learning assumes that the second term of Equation 2 is
approximately zero, that is, that the learner is unbiased. If this is the case, then
one may concentrate on selecting data so as to minimize the variance of the learner.
Although this "all-variance" approach is optimal when the learner is unbiased , truly
unbiased learners are rare . Even when the learner's representation class is able
to match the target function exactly, bias is generally introduced by the learning
algorithm and learning parameters. From the Bayesian perspective, a learner is
only unbiased if its priors are exactly correct.
The optimal choice of query would, of course, minimize both bias and variance, but
I leave that for future work . For the purposes of this paper, I will only be concerned
with selecting queries that are expected to minimize learner bias. This approach
is justified in cases where noise is believed to be only a small component of the
learner's error. If the learner is deterministic and there is no noise, then strictly
speaking, there is no error due to variance - all the error must be due to learner
bias. In cases with non-determinism or noise, all-bias minimization, like all-variance
minimization, becomes an approximation of the optimal approach.
The learning model discussed in this paper is a form of locally weighted regression
(LWR) [Cleveland et al., 1988], which has been used in difficult machine learning
tasks, notably the "robot juggler" of Schaal and Atkeson [1994]. Previous work
[Cohn et al., 1996] discussed all-variance query selection for LWR; in the remainder
of this paper, I describe a method for performing all-bias query selection. Section 2
describes the criterion that must be optimized for all-bias query selection. Section 3
describes the locally weighted regression learner used in this paper and describes
419
Minimizing Statistical Bias with Queries
how the all-bias criterion may be computed for it . Section 4 describes the results
of experiments using this criterion on several simple domains. Directions for future
work are discussed in Section 5.
2
ALL-BIAS QUERY SELECTION
Let us assume for the moment that we have a source of noise-free examples (Xi, Yi)
and a deterministic learner which, given input x, outputs estimate ŷ(x).¹ Let us
also assume that we have an accurate estimate of the bias of ŷ, which can be used
to estimate the true function y(x) = ŷ(x) − bias(x). We will break these rather
strong assumptions of noise-free examples and accurate bias estimates in Section 4,
but they are useful for deriving the theoretical approach described below.
Given an accurate bias estimate, we must force the biased estimator into the best
approximation of y(x) with the fewest number of examples. This, in effect, transforms the query selection problem into an example filter problem similar to that
studied by Plutowski and White [1993] for neural networks. Below, I derive this
criterion for estimating the change in error at X given a new queried example at x.
Since we have (temporarily) assumed a deterministic learner and noise-free data,
the expected error in Equation 2 simplifies to:
E[(ŷ(x;D) − y(x))² | x, D] = (ŷ(x;D) − y(x))².      (3)
We want to select a new x̃ such that when we add (x̃, ỹ), the resulting squared bias is minimized:
(ŷ′ − y)² ≡ (ŷ(x; D ∪ (x̃, ỹ)) − y(x))².      (4)
I will, for the remainder of the paper, use the prime (′) to indicate estimates based on the
initial training set plus the additional example (x̃, ỹ). To minimize Expression 4,
we need to compute how a query at x̃ will change the learner's bias at x. If we
assume that we know the input distribution,² then we can integrate this change
over the entire domain (using Monte Carlo procedures) to estimate the resulting
average change, and select an x̃ such that the expected squared bias is minimized.
Defining bias ≡ ŷ − y and Δŷ ≡ ŷ′ − ŷ, we can write the new squared bias as:
bias′² = (ŷ′ − y)² = (ŷ + Δŷ − y)² = Δŷ² + 2Δŷ·bias + bias².      (5)
Note that since bias as defined here is independent of x̃, minimizing the new squared bias is
equivalent to minimizing Δŷ² + 2Δŷ·bias.
The estimate of bias′ tells us how much our bias will change for a given x̃. We may
optimize this value over x̃ in one of a number of ways. In low-dimensional spaces, it
is often sufficient to consider a set of "candidate" x̃ and select the one promising the
smallest resulting error. In higher-dimensional spaces, it is often more efficient to
search for an optimal x̃ with a response surface technique [Box and Draper, 1987],
or hill climb on ∂bias′²/∂x̃.
Estimates of bias and Δŷ depend on the specific learning model being used. In
Section 3, I describe a locally weighted regression model, and show how differentiable
estimates of bias and Δŷ may be computed for it.
¹For clarity, I will drop the argument x except where required for disambiguation. I
will also denote only the univariate case; the results apply in higher dimensions as well.
²This assumption is contrary to the assumption normally made in some forms of learning, e.g. PAC-learning, but it is appropriate in many domains.
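A sketch of the selection rule implied by Equation 5: score each candidate query by the estimated change in squared bias, averaged over reference points, and keep the best. The callables delta_yhat and bias_est stand in for the learner-specific estimators developed in Section 3; they are assumptions here, not the paper's code.

```python
import numpy as np

def select_query(candidates, references, delta_yhat, bias_est):
    """Return the candidate query x~ minimizing the expected new squared bias.

    delta_yhat(x_cand, x_ref) -> estimated change in yhat at x_ref if we query x_cand
    bias_est(x_ref)           -> estimated bias of yhat at x_ref
    (both are hypothetical callables supplied by the learner)
    """
    best, best_score = None, np.inf
    for x_cand in candidates:
        dy = np.array([delta_yhat(x_cand, x_ref) for x_ref in references])
        b = np.array([bias_est(x_ref) for x_ref in references])
        score = np.mean(dy ** 2 + 2.0 * dy * b)   # change in squared bias, Eq. 5
        if score < best_score:
            best, best_score = x_cand, score
    return best
```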
2.1
AN ASIDE: WHY NOT JUST USE ŷ − bias?
If we have an accurate bias estimate, it is reasonable to ask why we do not simply
use the corrected ŷ − bias as our predictor. The answer has two parts, the first
of which is that for most learners, there are no perfect bias estimators - they
introduce their own bias and variance, which must be addressed in data selection.
Second, we can define a composite learner ŷ_c ≡ ŷ − bias. Given a random training
sample then, we would expect ŷ_c to outperform ŷ. However, there is no obvious
way to select data for this composite learner other than selecting to maximize the
performance of its two components. In our case, the second component (the bias
estimate) is non-analytic, which leaves us selecting data so as to maximize the
performance of the first component (the uncorrected estimator). We are now back
to our original problem: we can select data so as to minimize either the bias or
variance of the uncorrected LWR-based learner. Since the purpose of the correction
is to give an unbiased estimator, intuition suggests that variance minimization would
be the more sensible route in this case. Empirically, this approach does not appear
to yield any benefit over uncorrected variance minimization (see Figure 1).
3
LOCALLY WEIGHTED REGRESSION
The type of learner I consider here is a form of locally weighted regression (LWR)
that is a slight variation on the LOESS model of Cleveland et al. [1988] (see Cohn
et al., [1996] for details). The LOESS model performs a linear regression on points
in the data set, weighted by a kernel centered at x. The kernel shape is a design
parameter: the original LOESS model uses a "tricubic" kernel; in my experiments
I use the more common Gaussian
where k is a smoothing parameter. For brevity, I will drop the argument x for h_i(x),
and define n = Σ_i h_i. We can then write the weighted means and covariances as:
μ_x = Σ_i h_i x_i / n,    μ_y = Σ_i h_i y_i / n,
σ²_x = Σ_i h_i (x_i − μ_x)² / n,    σ_xy = Σ_i h_i (x_i − μ_x)(y_i − μ_y) / n.
We use these means and covariances to produce an estimate ŷ at the x around which
the kernel is centered, with a confidence term in the form of a variance estimate.
In all the experiments discussed in this paper, the smoothing parameter k was set
so as to minimize σ².
The low cost of incorporating new training examples makes this form of locally
weighted regression appealing for learning systems which must operate in real time,
or with time-varying target functions (e.g. [Schaal and Atkeson 1994]).
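A standard-form sketch of the LOESS-style prediction just described, assuming a Gaussian kernel; the kernel width and the omission of the confidence term are simplifications, not the paper's exact implementation.

```python
import numpy as np

def lwr_predict(x, X, Y, k=0.3):
    """Locally weighted (LOESS-style) estimate yhat(x) with a Gaussian kernel.

    A minimal sketch: the kernel width k is an illustrative assumption, and the
    variance/confidence term used in the paper is omitted here.
    """
    h = np.exp(-((X - x) ** 2) / k ** 2)          # Gaussian weights h_i
    n = h.sum()
    mu_x, mu_y = (h * X).sum() / n, (h * Y).sum() / n
    var_x = (h * (X - mu_x) ** 2).sum() / n
    cov_xy = (h * (X - mu_x) * (Y - mu_y)).sum() / n
    return mu_y + cov_xy / var_x * (x - mu_x)     # local linear fit through (mu_x, mu_y)
```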
3.1
COMPUTING Δŷ FOR LWR
If we know what new point (x̃, ỹ) we're going to add, computing Δŷ for LWR is
straightforward. Defining h̃ as the weight given to x̃, and n′ = n + h̃, we can write

Δŷ ≡ ŷ′ − ŷ = μ′_y + (σ′_xy / σ′²_x)(x − μ′_x) − μ_y − (σ_xy / σ²_x)(x − μ_x),

where the primed quantities are the weighted means and covariances recomputed with
(x̃, ỹ) included, e.g. μ′_x = μ_x + h̃(x̃ − μ_x)/n′ and μ′_y = μ_y + h̃(ỹ − μ_y)/n′.
Note that computing Δŷ requires us to know both the x̃ and ỹ of the new point. In
practice, we only know x̃. If we assume, however, that we can estimate the learner's
bias at any x, then we can also estimate the unknown value ỹ ≈ ŷ(x̃) − bias(x̃).
Below, I consider how to compute the bias estimate.
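A brute-force sketch of the same quantity: Δŷ can be obtained by simply refitting the LWR estimate with the extra point included, with ỹ supplied by the bias-corrected guess described above. It reuses the hypothetical lwr_predict sketched earlier; bias_est is again an assumed callable.

```python
import numpy as np

def delta_yhat(x_ref, x_new, X, Y, bias_est, k=0.3):
    """Estimated change in yhat(x_ref) if we query x_new (a refit-based sketch)."""
    # y_new is unknown in practice; approximate it by the bias-corrected prediction
    y_new = lwr_predict(x_new, X, Y, k) - bias_est(x_new)
    X2 = np.append(X, x_new)
    Y2 = np.append(Y, y_new)
    return lwr_predict(x_ref, X2, Y2, k) - lwr_predict(x_ref, X, Y, k)
```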
3.2
ESTIMATING BIAS FOR LWR
The most common technique for estimating bias is cross-validation . Standard crossvalidation however , only gives estimates of the bias at our specific training points ,
which are usually combined to form an average bias estimate. This is sufficient if
one assumes that the training distribution is representative of the test distribution
(which it isn't in query learning) and if one is content to just estimate the bias
where one already has training data (which we can't be).
In the query selection problem, we must be able to estimate the bias at all possible
x. Box and Draper [1987] suggest fitting a higher order model and measuring the
difference. For the experiments described in this paper, this method yielded poor
results; two other bias-estimation techniques, however , performed very well.
One method of estimating bias is by bootstrapping the residuals of the training
points. One produces a "bootstrap sample" of the learner's residuals on the training
data, and adds them to the original predictions to create a synthetic training set . By
averaging predictions over a number of bootstrapped training sets and comparing
the average prediction with that of the original predictor, one arrives at a first-order
bootstrap estimate of the predictor's bias [Connor 1993; Efron and Tibshirani, 1993] .
It is known that this estimate is itself biased towards zero; a standard heuristic is
to divide the estimate by 0.632 [Efron, 1983].
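A sketch of the first-order bootstrap bias estimate just described: resample the residuals, build synthetic training sets, and compare the average retrained prediction with the original one. The learner is passed in as a generic fit/predict pair, and the 0.632 division is the heuristic correction mentioned in the text; all names are illustrative.

```python
import numpy as np

def bootstrap_bias(X, Y, fit, predict, x_eval, n_boot=20, rng=None):
    """First-order bootstrap estimate of the learner's bias at x_eval (a sketch)."""
    rng = rng or np.random.default_rng(0)
    model = fit(X, Y)
    yhat_train = predict(model, X)
    resid = Y - yhat_train
    boot_preds = []
    for _ in range(n_boot):
        # synthetic training set: original predictions plus resampled residuals
        Y_star = yhat_train + rng.choice(resid, size=resid.size, replace=True)
        boot_preds.append(predict(fit(X, Y_star), x_eval))
    raw = np.mean(boot_preds, axis=0) - predict(model, x_eval)
    return raw / 0.632           # heuristic correction for the estimate's own bias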
Another method of estimating bias of a learner is by fitting its own cross-validated
residuals . We first compute the cross-validated residuals on the training examples.
These produce estimates of the learner 's bias at each of the training points. We
can then use these residuals as training examples for another learner (again LWR)
to produce estimates of what the cross-validated error would be in places where we
don't have training data.
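A sketch of the second estimator: compute leave-one-out residuals and fit a second regressor (here the same generic fit/predict pair, standing in for another LWR model) to them, so the bias can be queried away from the training points. The helper names are hypothetical.

```python
import numpy as np

def cv_bias_model(X, Y, fit, predict):
    """Fit a model of the learner's bias from its leave-one-out residuals (a sketch)."""
    loo_resid = np.zeros(len(Y), dtype=float)
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        model_i = fit(X[keep], Y[keep])
        loo_resid[i] = predict(model_i, X[i:i + 1])[0] - Y[i]   # yhat - y at held-out point
    bias_model = fit(X, loo_resid)          # second learner trained on the residuals
    return lambda x: predict(bias_model, np.atleast_1d(x))
```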
4
EMPIRICAL RESULTS
In the previous two sections, I have explained how having an estimate of D..y and
bias for a learner allows one to compute the learner's change in bias given a new
query, and have shown how these estimates may be computed for a learner that
uses locally weighted regression . Here, I apply these results to two simple problems
and demonstrate that they may actually be used to select queries that minimize
the statistical bias (and the error) of the learner. The problems involve learning the
kinematics of a planar two-jointed robot arm : given the shoulder and elbow joint
angles , the learner must predict the tip position.
4.1
BIAS ESTIMATES
I tested the accuracy of the two bias estimators by observing their correlations on 64
reference inputs, given 100 random training examples from the planar arm problem.
The bias estimates had a correlation with actual biases of 0.852 for the bootstrap
method , and 0.871 for the cross-validation method.
4.2
BIAS MINIMIZATION
I ran two sets of experiments using the bias-minimizing criterion in conjunction with
the bias estimation technique of the previous section on the planar arm problem.
The bias minimization criterion was used as follows: At each time step, the learner
was given a set of 64 randomly chosen candidate queries and 64 uniformly chosen
reference points. It evaluated E' (x) for each reference point given each candidate
point and selected for its next query the candidate point with the smallest average
E' (x) over the reference points. I compared the bias-minimizing strategy (using the
cross-validation and bootstrap estimation techniques) against random sampling and
the variance-minimizing strategy discussed in Cohn et al. [1996]. On a Sparc 10,
with m training examples , the average evaluation times per candidate per reference
point were 58 + 0.16m µs for the variance criterion, 65 + 0.53m µs for
the cross-validation-based bias criterion, and 83 + 3.7m µs for the bootstrap-based bias criterion (with 20× resampling).
To test whether the bias-only assumption was robust against the presence of noise,
1% Gaussian noise was added to the input values of the training data in all experiments. This simulates noisy position effectors on the arm , and results in nonGaussian noise in the output coordinate system.
In the first series of experiments, the candidate shoulder and elbow joint angles
were drawn uniformly over (U[0, 2π], U[0, π)). In unconstrained domains like this,
random sampling is a fairly good default strategy. The bias minimization strategies
still significantly outperform both random sampling and the variance minimizing
strategy in these experiments (see Figure 1).
[Figure 1 plots: MSE (log scale) vs. number of training examples, with curves for random sampling, variance-min, cross-val-min and bootstrap-min; right panel: an exploration trajectory in joint space (theta 1 vs. theta 2).]
Figure 1: (left) MSE as a function of number of noisy training examples for the
unconstrained arm problem. Errors are averaged over 10 runs for the bootstrap
method and 15 runs for all others. One run with the cross-validation-based method
was excluded when k failed to converge to a reasonable value . (center) MSE as
a function of number of noisy training examples for the constrained arm problem .
The bias correction strategy discussed in Section 2.1 does no better than the uncorrected variance-minimizing strategy, and much worse than the bias-minimization
strategy. (right) Sample exploration trajectory in joint-space for the constrained
arm problem , explored according to the bias minimizing criterion.
In the second series of experiments, candidates were drawn uniformly from a region
local to the previously selected query: (θ1 ± 0.2π, θ2 ± 0.1π). This corresponds to
restricting the arm to local motions. In a constrained problem such as this, random sampling is a poor strategy; both the bias and variance-reducing strategies
outperform it at least an order of magnitude . Further, the bias-minimization strategy outperforms variance minimization by a large margin (Figure 1). Figure 1 also
shows an exploration trajectory produced by pursuing the bias-minimizing criterion. It is noteworthy that, although the implementation in this case was a greedy
(one-step) minimization, the trajectory results in globally good exploration.
5
DISCUSSION
I have argued in this paper that, in many situations, selecting queries to minimize
learner bias is an appropriate and effective strategy for active learning. I have given
empirical evidence that, with a LWR-based learner and the examples considered
here, the strategy is effective even in the presence of noise.
Beyond minimizing either bias or variance, an important next step is to explicitly
minimize them together . The bootstrap-based estimate should facilitate this, as
it produces a complementary variance estimate with little additional computation.
By optimizing over both criteria simultaneously, we expect to derive a criterion that
that, in terms of statistics, is truly optimal for selecting queries.
REFERENCES
Box, G., & Draper, N. (1987). Empirical model-building and response surfaces,
Wiley, New York.
Cleveland, W., Devlin, S., & Grosse, E. (1988) . Regression by local fitting.
Journal of Econometrics, 37, 87-114.
Cohn, D. (1996) Neural network exploration using optimal experiment design .
Neural Networks, 9(6):1071-1083.
Cohn, D., Ghahramani, Z., & Jordan, M. (1996) . Active learning with statistical models. Journal of Artificial Inteligence Research 4:129-145 .
Connor, J. (1993). Bootstrap Methods in Neural Network Time Series Prediction.
In J . Alspector et al., eds., Proc. of the Int. Workshop on Applications of Neural
Networks to Telecommunications, Lawrence Erlbaum, Hillsdale, N.J.
Dietterich, T., & Kong, E. (1995) . Error-correcting output coding corrects
bias and variance . In S. Prieditis and S. Russell, eds., Proceedings of the 12th
International Conference on Machine Learning.
Efron, B. (1983) Estimating the error rate of a prediction rule: some improvements
on cross-validation. J. Amer. Statist. Assoc. 78:316-331.
Efron, B. & Tibshirani, R. (1993). An introduction to the bootstrap. Chapman
& Hall, New York .
Fedorov, V. (1972). Theory of Optimal Experiments. Academic Press, New York.
Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the
bias/variance dilemma. Neural Computation, 4, 1-58.
MacKay, D. (1992). Information-based objective functions for active data selection, Neural Computation, 4, 590-604.
Paass, G., and Kindermann, J. (1994). Bayesian Query Construction for Neural Network Models. In G. Tesauro et al., eds., Advances in Neural Information
Processing Systems 7, MIT Press.
Plutowski, M., & White, H. (1993). Selecting concise training sets from clean
data. IEEE Transactions on Neural Networks, 4, 305-318.
Schaal, S. & Atkeson, C. (1994). Robot Juggling: An Implementation of
Memory-based Learning. Control Systems 14, 57-71.
| 1288 |@word [bag-of-words feature vector omitted] |
316 | 1,289 | The CONDENSATION algorithm conditional density propagation and
applications to visual tracking
A. Blake and M. Isard, Department of Engineering Science,
University of Oxford,
Oxford OXI 3PJ, UK.
Abstract
The power of sampling methods in Bayesian reconstruction of noisy
signals is well known. The extension of sampling to temporal problems is discussed . Efficacy of sampling over time is demonstrated
with visual tracking.
1
INTRODUCTION
The problem of tracking curves in dense visual clutter is a challenging one. Trackers
based on Kalman filters are of limited power; because they are based on Gaussian
densities which are unimodal they cannot represent simultaneous alternative hypotheses. Extensions to the Kalman filter to handle multiple data associations
(Bar-Shalom and Fortmann, 1988) work satisfactorily in the simple case of point
targets but do not extend naturally to continuous curves.
Tracking is the propagation of shape and motion estimates over time, driven by
a temporal stream of observations. The noisy observations that arise in realistic
problems demand a robust approach involving propagation of probability distributions over time. Modest levels of noise may be treated satisfactorily using Gaussian
densities, and this is achieved effectively by Kalman filtering (Gelb, 1974). More
pervasive noise distributions, as commonly arise in visual background clutter, demand a more powerful, non-Gaussian approach.
One very effective approach is to use random sampling. The CONDENSATION algorithm , described here, combines random sampling with learned dynamical models
to propagate an entire probability distribution for object position and shape, over
time . The result is accurate tracking of agile motion in clutter, decidedly more
? Web: http://www.robots.ox.ac.uk/ ...ab/
robust than what has previously been attainable by Kalman filtering. Despite the
use of random sampling, the algorithm is efficient, running in near real-time when
applied to visual tracking.
2
SAMPLING METHODS
A standard problem in statistical pattern recognition is to find an object parameterised as x with prior p(x), using data z from a single image. The posterior density
p(x|z) represents all the knowledge about x that is deducible from the data. It can
be evaluated in principle by applying Bayes' rule (Papoulis, 1990) to obtain
p(x|z) = k p(z|x) p(x)      (1)
where k is a normalisation constant that is independent of x. However p(z|x) may
become sufficiently complex that p(x|z) cannot be evaluated simply in closed form.
Such complexity arises typically in visual clutter, when the superfluity of observable
features tends to suggest multiple, competing hypotheses for x. A one-dimensional
illustration of the problem is illustrated in figure 1 in which multiple features give
[Figure 1 sketch: measured features z1, ..., zM marked on the x-axis, with the resulting multimodal observation density p(z|x) plotted above them.]
Figure 1: One-dimensional observation model. A probabilistic observation
model allowing for clutter and the possibility of missing the target altogether is
specified here as a conditional density p(z|x).
rise to a multimodal observation density function p(z|x).
When direct evaluation of p(xlz) is infeasible, iterative sampling techniques can be
used (Geman and Geman, 1984; Ripley and Sutherland, 1990; Grenander et al.,
1991; Storvik, 1994). The factored sampling algorithm (Grenander et al., 1991).
generates a random variate x from a distribution p̃(x) that approximates the posterior p(x|z). First a sample-set {s^(1), ..., s^(N)} is generated from the prior density
p(x) and then a sample x = s^(i), i ∈ {1, ..., N}, is chosen with probability
π_i = p(z | x = s^(i)) / Σ_{j=1}^{N} p(z | x = s^(j)).
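A sketch of factored sampling as just described: draw a sample set from the prior, weight each element by the observation likelihood, and select among them with those probabilities. The prior sampler and likelihood are placeholder callables, not the paper's shape model.

```python
import numpy as np

def factored_sample(sample_prior, likelihood, z, N=1000, rng=None):
    """Draw one approximate sample from p(x|z) by factored sampling (a sketch).

    sample_prior(N)  -> array of N samples from p(x)   (assumed callable)
    likelihood(z, s) -> p(z|x=s), evaluated elementwise (assumed callable)
    """
    rng = rng or np.random.default_rng(0)
    s = sample_prior(N)              # {s^(1), ..., s^(N)} drawn from the prior
    w = likelihood(z, s)
    pi = w / w.sum()                 # weights pi_i as in the formula above
    return s[rng.choice(N, p=pi)]
```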
Sampling methods have proved remarkably effective for recovering static objects
from cluttered images. For such problems x is multi-dimensional, a set of parameters
for curve position and shape. In that case the sample-set {s(1), ... , s(N)} represents
363
The CONDENSATION Algorithm
a distribution of x-values which can be seen as a distribution of curves in the image
plane, as in figure 2.
Figure 2: Sample-set representation of shape distributions for a curve with
parameters x, modelling the outline (a) of the head of a dancing girl. Each sample
s(n) is shown as a curve (of varying position and shape) with a thickness proportional
to the weight 1r(n). The weighted mean of the sample set (b) serves as an estimator
of mean shape
3
THE CONDENSATION ALGORITHM
The CONDENSATION algorithm is based on factored sampling but extended to apply iteratively to successive images in a sequence. Similar sampling strategies have
appeared elsewhere (Gordon et al., 1993; Kitigawa, 1996), presented as developments of Monte-Carlo methods. The methods outlined here are described in detail
elsewhere. Fuller descriptions and derivation of the CONDENSATION algorithm are
in (Isard and Blake, 1996; Blake and Isard, 1997) and details of the learning of
dynamical models, which is crucial to the effective operation of the algorithm are
in (Blake et al., 1995).
Given that the estimation process at each time-step is a self-contained iteration
of factored sampling, the output of an iteration will be a weighted, time-stamped
sample-set, denoted s_t^(n), n = 1, ..., N, with weights π_t^(n), representing approximately the conditional state-density p(x_t | Z_t) at time t, where Z_t = (z_1, ..., z_t). How
is this sample-set obtained? Clearly the process must begin with a prior density
and the effective prior for time-step t should be p(x_t | Z_{t−1}). This prior is of course
multi-modal in general and no functional representation of it is available. It is derived from the sample-set representation {(s_{t−1}^(n), π_{t−1}^(n))}, n = 1, ..., N, of p(x_{t−1} | Z_{t−1}),
the output from the previous time-step, to which prediction must then be applied.
The iterative process applied to the sample-sets is depicted in figure 3. At the
top of the diagram, the output from time-step t−1 is the weighted sample-set
{(s_{t−1}^(n), π_{t−1}^(n)), n = 1, ..., N}. The aim is to maintain, at successive time-steps,
sample sets of fixed size N, so that the algorithm can be guaranteed to run within
a given computational resource. The first operation therefore is to sample (with
[Figure 3 diagram: sample sets propagated from p(x_{t−1} | Z_{t−1}) to p(x_t | Z_t).]
Figure 3: One time-step in the CONDENSATION algorithm. Blob centres represent sample values and sizes depict sample weights.
replacement) N times from the set {s_{t−1}^(n)}, choosing a given element with probability
π_{t−1}^(n). Some elements, especially those with high weights, may be chosen several
times, leading to identical copies of elements in the new set. Others with relatively
low weights may not be chosen at all.
Each element chosen from the new set is now subjected to a predictive step. {The
dynamical model we generally use for prediction is a linear stochastic differential
equation (s.d.e.) learned from training sets of sample object motion (Blake et al.,
1995).) The predictive step includes a random component, so identical elements
may now split as each undergoes its own independent random motion step. At this
stage, the sample set {s_t^(n)} for the new time-step has been generated but, as yet,
without its weights; it is approximately a fair random sample from the effective
prior density p(x_t | Z_{t−1}) for time-step t. Finally, the observation step from factored
sampling is applied, generating weights from the observation density p(z_t | x_t) to
obtain the sample-set representation {(s_t^(n), π_t^(n))} of state-density for time t.
The algorithm is specified in detail in figure 4. The process for a single time-step
consists of N iterations to generate the N elements of the new sample set. Each
iteration has three steps, detailed in the figure, and we comment below on each.
1. Select the n-th new sample s′_t^(n) to be some s_{t−1}^(j) from the old sample set,
sampled with replacement with probability π_{t−1}^(j). This is achieved efficiently
by using cumulative weights c_{t−1}^(j) (constructed in step 3).
2. Predict by sampling randomly from the conditional density for the dynamical model to generate a sample for the new sample-set.
3. Measure in order to generate weights π_t^(n) for the new sample. Each weight
is evaluated from the observation density function which, being multimodal
in general, "infuses" multi modality into the state density.
Iterate
From the "old" sample-set {s_{t−1}^(n), π_{t−1}^(n), c_{t−1}^(n)}, n = 1, ..., N, at time-step t construct a "new" sample-set {s_t^(n), π_t^(n), c_t^(n)}, n = 1, ..., N, for time t.
Construct the n-th of N new samples as follows:
1. Select a sample s′_t^(n) as follows:
(a) generate a random number r ∈ [0,1], uniformly distributed.
(b) find, by binary subdivision, the smallest j for which c_{t−1}^(j) ≥ r.
(c) set s′_t^(n) = s_{t−1}^(j).
2. Predict by sampling from p(x_t | x_{t−1} = s′_t^(n)) to choose each s_t^(n).
3. Measure and weight the new position in terms of the measured features z_t: π_t^(n) = p(z_t | x_t = s_t^(n)), then normalise so that Σ_n π_t^(n) = 1 and store together with cumulative probability as (s_t^(n), π_t^(n), c_t^(n)), where
c_t^(0) = 0,    c_t^(n) = c_t^(n−1) + π_t^(n)    (n = 1 ... N).
Figure 4: The CONDENSATION algorithm.
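A compact sketch of one CONDENSATION time-step following the three steps of Figure 4. The dynamical model and the observation likelihood are passed in as generic callables standing in for the learned stochastic dynamics and the clutter model; the names and array shapes are illustrative assumptions.

```python
import numpy as np

def condensation_step(samples, weights, dynamics, observe_lik, z_t, rng=None):
    """One time-step: select (resample), predict, measure (a sketch of Figure 4).

    samples : (N, d) state samples s_{t-1}^(n)
    weights : (N,)   normalised weights pi_{t-1}^(n)
    dynamics(s, rng)   -> stochastically propagated samples   (assumed callable)
    observe_lik(z, s)  -> p(z_t | x_t = s) for each sample    (assumed callable)
    """
    rng = rng or np.random.default_rng(0)
    N = len(samples)
    c = np.cumsum(weights)                          # cumulative weights c^(n)
    r = rng.uniform(size=N)
    idx = np.minimum(np.searchsorted(c, r), N - 1)  # binary-subdivision selection
    predicted = dynamics(samples[idx], rng)         # sample from p(x_t | x_{t-1})
    w = observe_lik(z_t, predicted)                 # weight by observation density
    return predicted, w / w.sum()
```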
At any time-step, it is possible to "report" on the current state, for example by
evaluating some moment of the state density as
E[f(x_t)] = Σ_{n=1}^{N} π_t^(n) f(s_t^(n)).      (2)
4
RESULTS
A good deal of experimentation has been performed in applying the CONDENSATION
algorithm to the tracking of visual motion, including moving hands and dancing
figures. Perhaps one of the most stringent tests was the tracking of a leaf on a bush,
in which the foreground leaf is effectively camouflaged against the background.
A 12 second (600 field) sequence shows a bush blowing in the wind, the task being
to track one particular leaf. A template was drawn by hand around a still of one
chosen leaf and allowed to undergo affine deformations during tracking. Given that
a clutter-free training sequence is not available, the motion model was learned by
means of a bootstrap procedure (Blake et al., 1995). A tracker with default dynamics proved capable of tracking the first 150 fields of a training sequence before losing
| 1289 |@word [bag-of-words feature vector omitted] |
317 | 129 | 160
SCALING AND GENERALIZATION IN
NEURAL NETWORKS: A CASE STUDY
Subutai Ahmad
Center for Complex Systems Research
University of Illinois at Urbana-Champaign
508 S. 6th St., Champaign, IL 61820
Gerald Tesauro
IBM Watson Research Center
PO Box 704
Yorktown Heights, NY 10598
ABSTRACT
The issues of scaling and generalization have emerged as key issues in
current studies of supervised learning from examples in neural networks.
Questions such as how many training patterns and training cycles are
needed for a problem of a given size and difficulty, how to represent the
inputs, and how to choose useful training exemplars, are of considerable
theoretical and practical importance. Several intuitive rules of thumb
have been obtained from empirical studies, but as yet there are few rigorous results. In this paper we summarize a study of generalization in
the simplest possible case-perceptron networks learning linearly separable functions. The task chosen was the majority function (i.e. return
a 1 if a majority of the input units are on), a predicate with a number of useful properties. We find that many aspects of generalization
in multilayer networks learning large, difficult tasks are reproduced in
this simple domain, in which concrete numerical results and even some
analytic understanding can be achieved.
1
INTRODUCTION
In recent years there has been a tremendous growth in the study of machines which
learn. One class of learning systems which has been fairly popular is neural networks. Originally motivated by the study of the nervous system in biological organisms and as an abstract model of computation, they have since been applied to a
wide variety of real-world problems (for examples see [Sejnowski and Rosenberg, 87]
and [Tesauro and Sejnowski, 88]). Although the results have been encouraging,
there is actually little understanding of the extensibility of the formalism. In particular, little is known of the resources required when dealing with large problems
(i.e. scaling), and the abilities of networks to respond to novel situations (i.e. generalization).
The objective of this paper is to gain some insight into the relationships between
three fundamental quantities under a variety of situations. In particular we are interested in the relationships between the size of the network, the number of training
Scaling and Generalization in Neural Networks
instances, and the generalization that the network performs, with an emphasis on
the effects of the input representation and the particular patterns present in the
training set.
As a first step to a detailed understanding, we summarize a study of scaling and
generalization in the simplest possible case. Using feed forward networks, the type of
networks most common in the literature, we examine the majority function (return
a 1 if a majority of the inputs are on), a boolean predicate with a number of useful
features. By using a combination of computer simulations and analysis in the limited
domain of the majority function, we obtain some concrete numerical results which
provide insight into the process of generalization and which will hopefully lead to a
better understanding of learning in neural networks in general.?
2
THE MAJORITY FUNCTION
The function we have chosen to study is the majority function, a simple predicate
whose output is a 1 if and only if more than half of the input units are on. This
function has a number of useful properties which facilitate a study of this type.
The function has a natural appeal and can occur in several different contexts in the
real-world. The problem is linearly separable (i.e. of predicate order 1 [Minsky and
Papert, 69]). A version of the perceptron convergence theorem applies, so we are
guaranteed that a network with one layer of weights can learn the function. Finally,
when there are an odd number of input units, exactly half of the possible inputs
results in an output of 1. This property tends to minimize any negative effects that
may result from having too many positive or negative training examples.
3
METHODOLOGY
The class of networks used are feed forward networks [Rumelhart and McClelland, 86],
a general category of networks that include perceptrons and the multi-layered networks most often used in current research. Since majority is a boolean function
of predicate order 1, we use a network with no hidden units. The output function
used was a sigmoid with a bias. The basic procedure consisted of three steps. First
the network was initialized to some random starting weights. Next it was trained
using back propagation on a set of training patterns. Finally, the performance of
the network was tested on a set of random test patterns. This performance figure
was used as the estimate of the network's generalization. Since there is a large
amount of randomness in the procedure, most of our data are averages over several
simulations.
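A sketch of the basic experiment as described: random d-bit patterns labelled by the majority rule, a single sigmoid output unit trained by gradient descent on squared error, and generalization estimated on held-out random patterns. The learning rate, cycle count and test-set size are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, S, cycles, lr = 25, 150, 200, 0.1

def majority(X):                       # target: 1 if more than half the inputs are on
    return (X.sum(axis=1) > d / 2).astype(float)

X_train = rng.integers(0, 2, size=(S, d)).astype(float)
y_train = majority(X_train)
w, b = np.zeros(d), 0.0

for _ in range(cycles):                # plain gradient descent on squared error
    out = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    grad = (out - y_train) * out * (1 - out)
    w -= lr * X_train.T @ grad / S
    b -= lr * grad.mean()

X_test = rng.integers(0, 2, size=(2000, d)).astype(float)
pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b))) > 0.5).astype(float)
print("failure rate:", np.mean(pred != majority(X_test)))
```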
The material contained in this paper is a condensation of portions of the first author's
M.S. thesis [Ahmad, 88].
[Figure 1 plot: failure rate f (0.00–0.50) vs. training set size S (0–420).]
Figure 1: The average failure rate as a function of S. d = 25
Notation. In the following discussion, we denote 5 to be the number of training
patterns, d the number of input units, and c the number of cycles through the training set. Let f be the failure rate (the fraction of misclassified training instances),
and rr be the set of training patterns.
4
RANDOM TRAINING PATTERNS
We first examine the failure rate as a function of S and d. Figure 1 shows the
graph of the average failure rate as a function of S, for a fixed input size d = 25.
Not surprisingly we find that the failure rate decreases fairly monotonically with S.
Our simulations show that in fact, for majority there is a well defined relationship
between the failure rate and S. Figure 2 shows this for a network with 25 input
units. The figure indicates that ln f is proportional to S, implying that the failure
rate decreases exponentially with S, i.e., f = α·e^(−βS). 1/β can be thought of as a
characteristic training set size, corresponding to a failure rate of α/e.
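A sketch of extracting α and β from measured failure rates by a straight-line fit to ln f versus S; the (S, f) values below are placeholders, not data from the paper.

```python
import numpy as np

S_vals = np.array([35, 70, 140, 210, 280, 350])             # hypothetical training-set sizes
f_vals = np.array([0.35, 0.25, 0.12, 0.06, 0.03, 0.015])    # hypothetical failure rates

slope, intercept = np.polyfit(S_vals, np.log(f_vals), 1)    # ln f = ln(alpha) - beta*S
beta, alpha = -slope, np.exp(intercept)
print(alpha, beta, 1.0 / beta)      # 1/beta: characteristic training-set size
```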
Obtaining the exact scaling relationship of 1/β was somewhat tricky. Plotting β on
a log-log plot against d showed it to be close to a straight line, indicating that 1/β
increases as d^σ for some constant σ. Extracting the exponent by measuring the slope
of the log-log graph turned out to be very error prone, since the data only ranged
over one order of magnitude. An alternate method for obtaining the exponent is
to look for a particular exponent σ by setting S = a·d^σ. Since a linear scaling
Scaling and Generalization in Neural Networks
G."'"
In!
-1.000
-Z.OOO
-3.000
-4.000
-5.000
-6.000
0.0
70.0
140.0
Z10.0
Z80.0
4Z0.0
350.0
s
Figure 2: In f as a function of S. d = 25. The slope was == -0.01
=
at S = a·d for various values of a. As Figure 3 shows, the failure rate remains more
or less constant for fixed values of a, indicating a linear scaling relationship with d.
Thus O(d) training patterns should be required to learn majority to a fixed level of
performance. Note that if we require perfect learning, then the failure rate has to
be < 1/(2^d − S) ≈ 1/2^d. By substituting this for f in the above formula and solving
for S, we get that (1/β)(d ln 2 + ln α) patterns are required. The extra factor of d
suggests that O(d²) would be required to learn majority perfectly. We will show in
Section 6.1 that this is actually an overestimate.
5
THE INPUT REPRESENTATION
So far in our simulations we have used the representation commonly used for boolean
predicates. Whenever an input feature has been true, we clamped the corresponding
input unit to a 1, and when it has been off we have clamped it to a O. There is no
reason, however, why some other representation couldn't have been used. Notice
that in back propagation the weight change is proportional to the incoming input
signal, hence the weight from a particular input unit to the output unit is changed
only when the pattern is misclassified and the input unit is non-zero. The weight
remains unchanged when the input unit is O. If the 0,1 representation were changed
to a-l,+1 representation each weight will be changed more often, hence the network
should learn the training set quicker (simulations in [Stornetta and Huberman, 81]
reported such a decrease in training time using a -i, +i representation.)
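A sketch illustrating the point about the encoding: with 0/1 inputs a delta-rule weight change touches only the weights whose inputs are on, while a symmetric encoding moves every weight on every misclassified pattern. The encoding values and learning rate here are illustrative.

```python
import numpy as np

def delta_rule_update(w, x, target, out, lr=0.1):
    # the weight change is proportional to the input value, as noted in the text
    return w + lr * (target - out) * x

x01 = np.array([1.0, 0.0, 1.0, 0.0])   # 0/1 encoding: zero inputs freeze their weights
xpm = 2 * x01 - 1                       # symmetric encoding: every weight moves
w = np.zeros(4)
print(delta_rule_update(w, x01, 1.0, 0.2))   # only positions 0 and 2 change
print(delta_rule_update(w, xpm, 1.0, 0.2))   # all four positions change
```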
[Figure 3 plot: failure rate f (0.00–0.50) vs. input size d (20–60), one curve each for S = 3d, 5d, 7d.]
Figure 3: Failure rate vs. d with S = 3d, 5d, 7d.
We found that not only did the training time decrease with the new representation,
the generalization of the network improved significantly. The scaling of the failure
rate with respect to S is unchanged, but for any fixed value of S, the generalization
is about 5 - 10% better. Also, the scaling with respect to dis still linear, but the
constant for a fixed performance level is smaller. Although the exact reason for
the improved generalization is unclear, the following might be a plausible reason.
A weight is changed only if the corresponding input is non-zero. By the definition
of the majority function, the average number of units that are on for the positive
instances is higher than for the negative instances. Hence, using the 0,1 representation, the weight changes are more pronounced for the positive instances than for
the negative instances. Since the weights are changed whenever a pattern is misclassified, the net result is that the weight change is greater when a positive event
is misclassified than when a negative event is misclassified. Thus, there seems to be
a bias in the 0,1 representation for correcting the hyperplane more when a positive
event is misclassified. In the new representation, both positive and negative events
are treated equally hence it is unbiased.
The basic lesson here seems to be that one should carefully examine every choice
that has been made during the design process. The representation of the input,
even down to such low level details as deciding whether "off" should be represented
as 0 or -1, could make a significant difference in the generalization.
6
BORDER PATTERNS
We now consider a method for improving the generalization by intelligently selecting
the patterns in the training set. Normally, for a given training set, when the inputs
are spread evenly around the input space, there can be several generalizations which
are consistent with the patterns. The performance of the network on the test
set becomes a random event, depending on the initial state of the network. If
practical, it makes sense to choose training patterns whic~ can limit the possible
generalizations. In particular, if we can find those examples which are closest to
the separating surface, we can maximally constrain the number of generalizations.
The solution that the network converges to using these "border" patterns should
have a higher probability of being a good separator. In general finding a perfect
set of border patterns can be computationally expensive, however there might exist
simple heuristics which can help select good training examples.
We explored one heuristic for choosing such points: selecting only those patterns
in which the number of 1's is either one less or one more than half the number
of input units. Intuitively, these inputs should be close to the desired separating
surface, thereby constraining' the network more than random patterns would. Our
results show that using only border patterns in the training set, there is a large
increase in the expected performance of the network for a given S. In addition, the
scaling behavior as a function of S seems to be very different and is faster than an
exponential decrease. (Figure 4 shows typical failure rate vs S curves comparing
border patterns, the -1,+1 representation, and the 0,1 representation.)
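A sketch of the heuristic just described: keep only the training patterns whose number of 1's is one below or one above half the number of input units (for odd d, the two counts adjacent to the majority threshold). The candidate pool size is an illustrative assumption.

```python
import numpy as np

def border_patterns(X):
    """Select patterns with floor(d/2) or floor(d/2)+1 ones (a sketch of the heuristic)."""
    d = X.shape[1]
    ones = X.sum(axis=1)
    mask = (ones == d // 2) | (ones == d // 2 + 1)
    return X[mask]

rng = np.random.default_rng(0)
candidates = rng.integers(0, 2, size=(5000, 25))
train = border_patterns(candidates)
print(len(train), "border patterns out of", len(candidates))
```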
6.1
BORDER PATTERNS AND PERFECT LEARNING
We say the network has perfectly learned a function when the test patterns are never
misclassified. For the majority function, one can argue that at least some border
patterns must be present in order to guarantee perfect performance. If no border
patterns were in the training set, then the network could have learned the
(⌈d/2⌉ − 1)-of-d or the (⌈d/2⌉ + 1)-of-d function. Furthermore, if we know that a certain number
of border patterns is guaranteed to give perfect performance, say bed), then given
the probability that a random pattern is a border pattern, we can calculate the
expected number of random patterns sufficient to learn majority.
For odd d, there are 2·C(d, ⌊d/2⌋) border patterns, so the probability of choosing a
border pattern randomly is
C(d, ⌊d/2⌋) / 2^(d−1).
1
As d gets larger this probability decreases as 1/.fd.* The expected number of randomly chosen patterns required before b( d) border patterns are chosen is therefore:
0*
This can be shown using Stirling's approximation to d!.
165
166
Ahmad and Tesauro
0.50
f
0.42
0.33
0.25
0.17
0.08
0.001
0
58
117
175
233
292
S
350
Figure 4: Graph showing the average failure rate vs. S using the 0,1 representation
(right), the -1,+1 representation (middle), and using border patterns (left). The network
had 23 inputs units and was tested on a test set consisting of 1024 patterns.
b( cl)Vd. From our data we find that 3d border patterns are always sufficient to learn
the test set perfectly. From this, and from the theoretical results in [Cover, 65], we
can be confident that b( cI) is linear in d. Thus, O( fi3/2) random patterns should be
sufficient to learn majority perfectly.
It should be mentioned that border patterns are not the only patterns which contribute to the generalization of the network. Figure 5 shows that the failure rate of
the network when trained with random training patterns which happen to contain
b border patterns is substantially better than a training set consisting of only b
border patterns. Note that perfect performance is achieved at the same point in
both cases.
7
CONCLUSION
In this paper we have described a systematic study of some of the various factors
affecting scaling and generalization in neural networks. Using empirical studies in
a simple test domain, we were able to obtain precise scaling relationships between
the performance of the network, the number of training patterns, and the size of
the network. It was shown that for a fixed network size, the failure rate decreases
exponentially with the size of the training set. The number of patterns required to
[Figure 5 plot: failure rate f vs. number of border patterns b, two curves as described in the caption below.]
Figure 5: This figure compares the failure rate on a random training set which happens
to contain b border patterns (bottom plot) with a training set composed of only b border
patterns (top plot).
achieve a fixed performance level was shown to increase linearly with the network
SIZe.
A general finding was that the performance of the network was very sensitive to a
number of factors. A slight change in the input representation caused a jump in the
performance of the network. The specific patterns in the training set had a large
influence on the final weights and on the generalization. By selecting the training
patterns intelligently, the performance of the network was increased significantly.
The notion of border patterns were introduced as the most interesting patterns in
the training set. As far as the number of patterns required to teach a function
to the network, these patterns are near optimal. It was shown that a network
trained only on border patterns generalizes substantially better than one trained
on the same number of random patterns. Border patterns were also used to derive
an expected bound on the number of random patterns sufficient to learn majority
perfectly. It was shown that, on average, O(d^(3/2)) random patterns are sufficient to
learn majority perfectly.
In conclusion, this paper advocates a careful study of the process of generalization
in neural networks. There are a large number of different factors which can affect
the performance. Any assumptions made when applying neural networks to a realworld problem should be made with care. Although much more work needs to be
done, it was shown that many of the issues can be effectively studied in a simple
test domain.
Acknowledgements
We thank T. Sejnowski, R. Rivest and A. Barron for helpful discussions. We also
thank T. Sejnowski and B. Bogstad for assistance in development of the simulator
code. This work was partially supported by the National Center for Supercomputing
Applications and by National Science Foundation grant Phy 86-58062.
References
[Ahmad,88] S. Ahmad. A Study of Scaling and Generalization in Neural Networks.
Technical Report UIUCDCS-R-88-1454, Department of Computer Science, University of Illinois, Urbana-Champaign, IL, 1988.
[Cover, 65] T. Cover. Geometric and satistical properties of systems oflinear equations. IEEE Trans. Elect. Comp., 14:326-334, 1965.
[Minsky and Papert, 69] Marvin Minsky and Seymour Papert. Perceptrons. MIT
Press, Cambridge, Mass., 1969.
[Muroga, 71] S Muroga. Threshold Logic and its Applications. Wiley, New York,
1971.
[Rumelhart and McClelland, 86] D. E. Rumelhart and J. L. McClelland, editors.
Parallel Distributed Processing: Explorations in the Microstructure of Cognition:
Foundations. Volume I, MIT Press, Cambridge, Mass., 1986.
[Stornetta and Huberman, 87] W.S. Stornetta and B.A. Huberman. An improved
three-layer, back propagation algorithm. In Proceedings of the IEEE First International Conference on Neural Networks, San Diego, CA, 1987.
[Sejnowski and Rosenberg, 87] T.J. Sejnowski and C.R. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168, 1987.
[Tesauro and Sejnowski, 88] G. Tesauro and T.J. Sejnowski. A Parallel Network
that Learns to Play Backgammon. Technical Report CCSR-88-2, Center for
Complex Systems Research, University of Illinois, Urbana-Champaign, IL, 1988.
| 129 |@word [bag-of-words feature vector omitted] |
318 | 1,290 | Separating Style and Content
Joshua B. Tenenbaum
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
jbt@psyche.mit.edu
William T. Freeman
MERL, Mitsubishi Electric Res. Lab.
201 Broadway
Cambridge, MA 02139
freeman@merl.com
Abstract
We seek to analyze and manipulate two factors, which we call style
and content, underlying a set of observations. We fit training data
with bilinear models which explicitly represent the two-factor structure. These models can adapt easily during testing to new styles or
content, allowing us to solve three general tasks: extrapolation of a
new style to unobserved content; classification of content observed
in a new style; and translation of new content observed in a new
style. For classification, we embed bilinear models in a probabilistic
framework, Separable Mixture Models (SMMs), which generalize
earlier work on factorial mixture models [7, 3]. Significant performance improvement on a benchmark speech dataset shows the
benefits of our approach.
1 Introduction
In many pattern analysis or synthesis tasks, the observed data are generated from
the interaction of two underlying factors which we will generically call "style" and
"content." For example, in a character recognition task, we might observe different
letters in different fonts (see Fig. 1); with handwriting, different words in different
writing styles; with speech, different phonemes in different accents; with visual
images, the faces of different people under different lighting conditions.
Such data raises a number of learning problems. Extracting a hidden two-factor
structure given only the raw observations has received significant attention [7, 3],
but unsupervised factorial learning of this kind has yet to prove tractable for our
focus: real-world data with subtly interacting factors. We work in a more supervised
setting, where labels for style or content may be available during training or testing.
Figure 1 shows three problems we want to solve. Given a labelled training set of
observations in multiple styles, we want to extrapolate a new style to unobserved
content classes (Fig. 1a), classify content observed in a new style (Fig. 1b), and
translate new content observed in a new style (Fig. 1c).
This paper treats these problems in a common framework, by fitting the training
data with a separable model that can easily adapt during testing to new styles or
content classes. We write an observation vector in style s and content class c as
y^{sc}. We seek to fit these observations with some model
y^{sc} = f(a^s, b^c; W),    (1)
where a particular functional form of f is assumed. We must estimate parameter
vectors a^s and b^c describing style s and content c, respectively, and W, parameters
[Figure 1 here: three letter-grid panels labelled (a) Extrapolation, (b) Classification, (c) Translation.]
Figure 1: Given observations of content (letters) in different styles (fonts), we want to
extrapolate, classify, and translate observations from a new style or content class.
for f that are independent of style and content but govern their interaction. In terms
of Fig. 1 (and in the spirit of [8]), the model represents what the elements of each
row have in common independent of column (a^s), what the elements of each column
have in common independent of row (b^c), and what all elements have in common
independent of row and column (W). With these three modular components, we
can solve problems like those illustrated in Fig. 1. For example, we can extrapolate
a new style to unobserved content classes (Fig. 1a) by combining content and
interaction parameters learned during training with style parameters estimated from
available data in the new style.
2 Bilinear models
We propose to separate style and content using bilinear models - two-factor models
that are linear in either factor when the other is held constant. These simple
models are still complex enough to model subtle interactions of style and content.
The empirical success of linear models in many pattern recognition applications
with single-factor data (e.g. "eigenface" models of faces under varying identity
but constant illumination and pose [15], or under varying illumination but constant
identity and pose [5] ), makes bilinear models a natural choice when two such factors
vary independently across the data set. Also, many of the computationally desirable
properties of linear models extend to bilinear models. Model fitting (discussed
in Section 3 below) is easy, based on efficient and well-known techniques such as
the singular value decomposition (SVD) and the expectation-maximization (EM)
algorithm. Model complexity can be controlled by varying model dimensionality to
achieve a compromise between reproduction of the training data and generalization
during testing. Finally, the approach extends to multilinear models [10], for data
generated by three or more interacting factors.
We have explored two bilinear models for Eq. 1. In the symmetric model (so called
because it treats the two factors symmetrically), we assume f is a bilinear mapping
given by
y_k^{sc} = a^{sT} W_k b^c = Σ_{ij} a_i^s b_j^c w_{ijk}.    (2)
The w_{ijk} parameters represent a set of basis functions independent of style and content, which characterize the interaction between these two factors. Observations in
style s and content c are generated by mixing these basis functions with coefficients
given by the tensor product of the a^s and b^c vectors. The model exactly reproduces the
observations when the dimensionalities of a^s and b^c equal the number of styles N_s
and content classes N_c observed. It finds coarser but more compact representations
as these dimensionalities are decreased.
Sometimes it may not be practical to represent both style and content with low-dimensional vectors. For example, a linear combination of a few basis styles learned
during training may not describe new styles well. We can obtain more flexible,
asymmetric bilinear models by letting the basis functions w_{ijk} themselves depend
on style or content. For example, if the basis functions are allowed to depend on
style, the bilinear model from Eq. 2 becomes y_k^{sc} = Σ_{ij} b_j^c w_{ijk}^s. This simplifies
to y_k^{sc} = Σ_j a_{jk}^s b_j^c, by summing out the i index and identifying a_{jk}^s ≡ Σ_i w_{ijk}^s.
In vector notation, we have
y^{sc} = A^s b^c,    (3)
where A^s is a matrix of basis functions specific to style s (independent of content), and b^c is a vector of coefficients specific to content c (independent of style).
Alternatively, the basis functions may depend on content, which gives
y^{sc} = B^c a^s.    (4)
Asymmetric models do not parameterize the rendering function f independently of
style and content, and so cannot translate across both factors simultaneously (Fig.
lc). Further, a matrix representation of style or content may be too flexible and
overfit the training data. But if overfitting is not a problem or can be controlled
by some additional constraint, asymmetric models may solve extrapolation and
classification tasks using less training data than symmetric models.
Figure 2 illustrates an example of an asymmetric model used to separate style and
content. We have collected a small database of face images, with 11 different people
(content classes) in 15 different head poses (styles). The images are 22 x 32 pixels,
which we treat as 704-dimensional vectors. A subset of the data is shown in Fig. 2a.
Fig. 2b schematically depicts an asymmetric bilinear model of the data, with each
pose represented by a set of basis vectors A^pose (shown as images) and each person
represented by a set of coefficients b^person. To render an image of a particular
person in a particular pose, the pose-specific basis vectors are mixed according
to the person-specific coefficients. Note that the basis vectors for each pose look
like eigenfaces [15] in the appropriate style of each pose. However, the bilinear
structure of the model ensures that corresponding basis vectors play corresponding
roles across poses (e .g. the first vector holds (roughly) the mean face for that pose,
the second may modulate overall head size, the third may modulate head thickness,
etc.), which is crucial for adapting to new styles or content classes.
3 Model fitting
All the tasks shown in Fig. 1 break down into a training phase and a testing phase;
both involve some model fitting. In the training phase (corresponding to the first
5 rows and columns of Figs. 1a-c), we learn all the parameters of a bilinear model
from a complete matrix of observations of N_c content classes in N_s styles. In the
testing phase (corresponding to the final rows of Figs. 1a, b and the final row and
last 3 columns of Fig. 1c), we adapt the same model to data in a new style or content
class (or both), estimating new parameters for the new style or content, clamping
the other parameters. Then new and old parameters are combined to accomplish
the desired classification, extrapolation, or translation task . This section focuses on
the asymmetric model and its use in extrapolation and classification. Training and
[Figure 2 here: (a) example face images; (b) faces rendered from y = A^pose b^person, with pose-specific basis images A^{Up-Right}, A^{Up-Straight}, A^{Up-Left}.]
Figure 2: An illustration of the asymmetric model, with faces varying in identity and head
pose.
adaptation procedures for the symmetric model are similar and based on algorithms
in [10, 11]. In [2], we describe these procedures and their application to extrapolation
and translation tasks.
3.1 Training
Let n^{sc} be the number of observations in style s and content c, and let m^{sc} = Σ y^{sc}
be the sum of these observations. Then estimates of A^s and b^c that minimize the
sum-of-squared-errors for the asymmetric model in Eq. 3 can be found by iterating
the fixed point equations (Eq. 5)
obtained by setting derivatives of the error equal to 0. To ensure stability, we update
the parameters according to A^s ← (1-η)A^s + η Â^s and b^c ← (1-η)b^c + η b̂^c, typically
using a stepsize 0.2 < η < 0.5. Replacing A^s with B^c and b^c with a^s yields the
analogous procedure for training the model in Eq. 4.
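Because Eq. 5 itself is not reproduced in this excerpt, the sketch below shows one standard normal-equation reading of the damped fixed-point update just described; the dictionary-based data layout and the function name are our own assumptions.

import numpy as np

def fixed_point_step(A, b, m, n, eta=0.3):
    # A: dict style -> (K, J) matrix; b: dict content -> (J,) vector
    # m: dict (style, content) -> (K,) sum of observations; n: dict (style, content) -> count
    A_hat, b_hat = {}, {}
    for s in A:                                   # least-squares estimate of each style matrix
        num = sum(np.outer(m[s, c], b[c]) for c in b)
        den = sum(n[s, c] * np.outer(b[c], b[c]) for c in b)
        A_hat[s] = num @ np.linalg.pinv(den)
    for c in b:                                   # least-squares estimate of each content vector
        num = sum(A[s].T @ m[s, c] for s in A)
        den = sum(n[s, c] * A[s].T @ A[s] for s in A)
        b_hat[c] = np.linalg.pinv(den) @ num
    # damped update, as in the text: X := (1 - eta) X + eta X_hat
    A = {s: (1 - eta) * A[s] + eta * A_hat[s] for s in A}
    b = {c: (1 - eta) * b[c] + eta * b_hat[c] for c in b}
    return A, b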
If the same number of observations are available for all style-content pairs, there
exists a closed-form procedure to fit the asymmetric model using the SVD. Let the
K-dimensional vector ȳ^{sc} denote the mean of the observed data generated by style
s and content c, and stack these vectors into a single (K × N_s) × N_c matrix
Y = [ ȳ^{11} ... ȳ^{1 N_c} ; ... ; ȳ^{N_s 1} ... ȳ^{N_s N_c} ].    (6)
We compute the SVD of Y = USV^T, and define the (K × N_s) × J matrix A to be
the first J columns of U, and the J × N_c matrix B to be the first J rows of SV^T.
Finally, we identify A and B as the desired parameter estimates in stacked form
(see also [9, 14]):
A = [A^1; ... ; A^{N_s}],    B = [b^1, ..., b^{N_c}].    (7)
The model dimensionality J can be chosen in various standard ways: by a priori
considerations, by requiring a sufficiently good approximation to the data (as measured by mean squared error or some more subjective metric), or by looking for a
gap in the singular value spectrum.
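A minimal sketch of the SVD-based fit of Eqs. 6-7, assuming the class means have already been stacked into the matrix Y described above; the variable and function names are ours.

import numpy as np

def fit_asymmetric_svd(Y, K, Ns, J):
    # Y: (K * Ns, Nc) matrix of per-(style, content) mean vectors, styles stacked vertically
    U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
    A = U[:, :J]                      # first J columns of U (stacked style matrices)
    B = (np.diag(sv) @ Vt)[:J, :]     # first J rows of S V^T (one content vector per column)
    A_styles = [A[s * K:(s + 1) * K, :] for s in range(Ns)]   # unstack into per-style (K, J) blocks
    return A_styles, B, sv            # sv can be inspected for a spectral gap when choosing J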
3.2 Testing
It is straightforward to adapt the asymmetric model to an incomplete new style s*,
in order to extrapolate that style to unseen content. We simply estimate A^{s*} from
Eq. 5, using the b^c values learned during training and restricting the sums over c to
those content classes observed in the new style. Then data in content c and style s*
can be synthesized from A^{s*} b^c. Extrapolating incomplete new content to unseen
styles is done similarly.
Adapting the asymmetric model for classification in new styles is more involved,
because the content class of the new data (and possibly its style as well) is unlabeled. To deal with this uncertainty, we embed the bilinear model within a gaussian
mixture model to yield a separable mixture model (SMM), which can then be fit efficiently to data in new styles using the EM algorithm. Specifically, we assume that
the probability of a new, unlabeled observation y being generated by style s and
content c is given by a spherical gaussian centered at the prediction of the asymmetric bilinear model: p(y|s, c) ∝ exp{ -||y - A^s b^c||² / (2σ²) }. The total probability
of y is then p(y) = Σ_{s,c} p(y|s, c) p(s, c); we use equal priors p(s, c). We assume
that the content vectors b^c are known from training, but that new style matrices
A must be found to explain the test data. The EM algorithm alternates between
computing soft style and content-class assignments p(s, c|y) = p(y|s, c) p(s, c) / p(y)
for each test vector y given the current style matrix estimates (E-step), and estimating new style matrices by setting A^s to maximize Σ_y log p(y) (M-step). The
M-step is solved in closed form using the update rule for A^s from Eq. 5, with
m^{sc} = Σ_y p(s, c|y) y and n^{sc} = Σ_y p(s, c|y). Test vectors in new styles can now
be classified by grouping each vector y with the content class c that maximizes
p(c|y) = Σ_s p(s, c|y).
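The adaptation loop just described can be sketched as follows; this is our hedged paraphrase of the SMM procedure, with σ² treated as a fixed control parameter and a small random initialisation of the new style matrices (both our own choices).

import numpy as np

def smm_adapt(Y, b, Ns_new, sigma2=0.5, n_iter=20, seed=0):
    # Y: (N, K) unlabeled test vectors; b: (Nc, J) content vectors learned during training
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.01, size=(Ns_new, Y.shape[1], b.shape[1]))   # new style matrices
    for _ in range(n_iter):
        # E-step: p(s, c | y) under the spherical-Gaussian mixture
        pred = np.einsum('skj,cj->sck', A, b)                    # (Ns, Nc, K) predictions A^s b^c
        d2 = ((Y[None, None] - pred[:, :, None]) ** 2).sum(-1)   # squared errors, (Ns, Nc, N)
        logp = -d2 / (2 * sigma2)
        logp -= logp.max(axis=(0, 1), keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=(0, 1), keepdims=True)
        # M-step: weighted least squares for each style matrix, using m and n as in the text
        for s in range(Ns_new):
            m = np.einsum('cn,nk->ck', r[s], Y)                  # sum_y p(s,c|y) y
            n = r[s].sum(axis=1)                                 # sum_y p(s,c|y)
            num = np.einsum('ck,cj->kj', m, b)
            den = np.einsum('c,cj,cl->jl', n, b, b)
            A[s] = num @ np.linalg.pinv(den)
    content = r.sum(axis=0).argmax(axis=0)                       # class c maximising p(c|y)
    return A, r, content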
4 Application: speaker-adaptive speech recognition
This example illustrates our approach to style-adaptive classification on a real-world
data set that is a benchmark for many connectionist learning algorithms. The data
consist of 6 samples of each of 11 vowels uttered by 15 speakers of British English
(originally collected by David Deterding, from the CMU neural-bench ftp archive).
Each data vector consists of 10 parameters computed from a linear predictive analysis of the digitized speech. Robinson [13] compared many learning algorithms
trained to categorize vowels from the first 8 speakers (4 male and 4 female) and
tested on samples from the remaining 7 speakers (4 male and 3 female).
Using the SVD-based procedure described above, we fit an asymmetric bilinear
model to the training data, labeled by style (speaker) and content (vowel). We
then used the learned vowel parameters b^c in an SMM and tested classification
performance with varying degrees of style information for the 7 new speakers' data:
Separating Style and Content
667
both style and content labels missing for each test vector (SMM1), style labels
present (indicating a change of speaker) but content labels missing (SMM2), and
both labels missing but with the test data loglikelihood 2: y logp(y) augmented by
a prior favoring temporal continuity of style assignments (SMM3).
The few training styles makes this problem difficult and a good showcase for our approach. Robinson [13] obtained 51% correct vowel classification on the test set with
a multi-layer perceptron and 56% with a I-nearest neighbor (l-NN) classifier, the
best performance of the many standard techniques he tried. Hastie and Tibshirani
[6] recently obtained 62% correct using their discriminant adaptive nearest neighbor
algorithm, the best result we know of for an approach that does not model speaker
style. We obtained 69% correct for SMMl, 77% for SMM2, and 76% for SMM3,
using a model dimensionality of J = 4, model variance of (j2 = .5, and using the
vowel class assignments of I-NN to initialize the E-step of EM. While good initial
conditions were important for the EM algorithm, a range of model dimensionality
and variance settings gave reasonable performance.
We also applied these methods to the head pose data of Fig. 2a. We trained
on 10 subjects in the 15 poses, and used SMM2 to learn a style model for a new
person while simultaneously classifying the head poses. We obtained 81% correct
pose categorization (averaged over all 11 test subjects), compared with 53% correct
performance for 1-NN matching.
These results demonstrate that modeling style and content can substantially improve content classification in new styles even when no style information is available
during testing (SMM1), and dramatically so when some style demarcation is available explicitly (SMM2) or implicitly (SMM3). Bilinear models offer an easy way
to improve performance using style labels which are frequently available for many
classification tasks.
5 Pointers to other work and conclusions
We discuss the extrapolation and translation problems in [2]. Here we summarize
results. Figure 3 shows extrapolation of a partially observed font (Monaco) to the
unseen letters (see also the gridfont work of [8, 4]). During training, we presented
all letters of the five fonts shown at the left. To accommodate many shape topologies,
we described letters by the warps of black particles from a reference shape into
the letter shape. During testing, we fit an asymmetric model style matrix to all
the letters of the Monaco font except those shown in the figure. We used the best
fitting linear combination of training fonts as a prior for the style matrix, in order
to control model complexity. Using the fit style, we then synthesized the unseen
letters of the Monaco font . These compare well with the actual letters in that style.
Because the W weights of the symmetric model are independent of any particular
style and content class, they allow translation of observations from unknown styles
and content-classes to known ones. During training, we fit the symmetric model to
the observations. For a test observation under a new style and content class, we
find a^s and b^c values using the known W parameters, iterating least squares fits of
the two parameters. Typically, the resulting a^s and b^c vectors are unique up to
an uncertainty in scale. We have used this approach to translate across shape or
lighting conditions for images of faces, and to translate across illumination color for
color measurements (assuming small specular reflections).
Our work naturally combines two current themes in the connectionist learning literature: factorial learning [7, 3] and learning a family of many related tasks [1, 12]
[Figure 3 here: letter grids for the five training fonts and the synthesized Monaco letters.]
Figure 3: Style extrapolation in typography. The training data were all letters of the 5
fonts at left. The test data were all the Monaco letters except those shown at right. The
synthesized Monaco letters compare well with the missing ones.
to facilitate task transfer. Separable bilinear models provide a powerful framework
for separating style and content by combining explicit representation of each factor
with the computational efficiency of linear models.
Acknowledgements
We thank W. Richards, Y. Weiss, and M. Bernstein for helpful discussions. Joshua Tenenbaum is a Howard Hughes Medical Institute Predoctoral Fellow.
References
[1] R. Caruana. Learning many related tasks at the same time with backpropagation. In
Adv. in Neural Info . Proc. Systems, volume 7, pages 657-674, 1995.
[2] W. T. Freeman and J. B. Tenenbaum. Learning bilinear models for two-factor problems in vision. TR 96-37, MERL, 201 Broadway, Cambridge, MA 02139, 1996.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In Adv. in Neural Info.
Proc. Systems, volume 7, pages 617- 624, 1995.
[4] I. Grebert, D. G. Stork, R. Keesing, and S. Mims. Connectionist generalization for
production: An example from gridfont. Neural Networks, 5:699-710, 1992.
[5] P. W. Hallinan. A low-dimensional representation of human faces for arbitrary lighting
conditions. In Proc. IEEE CVPR, pages 995- 999, 1994.
[6] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification.
IEEE Pat. Anal. Mach. Intell., (18):607-616, 1996.
[7] G. E. Hinton and R. Zemel. Autoencoders, minimum description length, and
Helmholtz free energy. In Adv. in Neural Info . Proc. Systems, volume 6, 1994.
[8] D. Hofstadter. Fluid Concepts and Creative Analogies. Basic Books, 1995.
[9] J. J. Koenderink and A. J. van Doorn. The generic bilinear calibration-estimation
problem. Inti. J. Compo Vis., 1997. in press.
[10] J. R. Magnus and H. Neudecker. Matrix differential calculus with applications in
statistics and econometrics. Wiley, 1988.
[11] D. H. Marimont and B. A. Wandell. Linear models of surface and illuminant spectra.
J. Opt. Soc. Am. A, 9(1l):1905-1913, 1992.
[12] S. M. Omohundro. Family discovery. In Adv. in Neural Info. Proc. Sys., vol. 8, 1995.
[13] A. Robinson. Dynamic error propagation networks. PhD thesis, Cambridge University
Engineering Dept., 1989.
[14] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography:
a factorization method. Inti. J. Compo Vis., 9(2):137-154, 1992.
[15] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cog. Neurosci., 3(1), 1991.
319 | 1,291 | Contour Organisation with the EM
Algorithm
J. A. F. Leite and E. R. Hancock
Department of Computer Science
University of York, York, Y01 5DD, UK.
Abstract
This paper describes how the early visual process of contour organisation can be realised using the EM algorithm. The underlying
computational representation is based on fine spline coverings. According to our EM approach the adjustment of spline parameters
draws on an iterative weighted least-squares fitting process. The
expectation step of our EM procedure computes the likelihood of
the data using a mixture model defined over the set of spline coverings. These splines are limited in their spatial extent using Gaussian windowing functions. The maximisation of the likelihood leads
to a set of linear equations in the spline parameters which solve the
weighted least squares problem. We evaluate the technique on the
localisation of road structures in aerial infra-red images.
1 Introduction
Dempster, Laird and Rubin's EM (expectation and maximisation) [1] algorithm was
originally introduced as a means of finding maximum likelihood solutions to problems posed in terms of incomplete data. The basic idea underlying the algorithm
is to iterate between the expectation and maximisation modes until convergence is
reached. Expectation involves computing a posteriori model probabilities using a
mixture density specified in terms of a series of model parameters. In the maximisation phase, the model parameters are recomputed to maximise the expected
value of the incomplete data likelihood. In fact, when viewed from this perspective,
the updating of a posteriori probabilities in the expectation phase would appear
to have much in common with the probabilistic relaxation process extensively exploited in low and intermediate level vision [9, 2] . Maximisation of the incomplete
data likelihood is reminiscent of robust estimation where outlier reject is employed
in the iterative re-computation of model parameters [7].
It is these observations that motivate the study reported in this paper. We are
interested in the organisation of the output of local feature enhancement operators
into meaningful global contour structures [13, 2]. Despite providing one of the classical applications of relaxation labelling in low-level vision [9], successful solutions
to the iterative curve reinforcement problem have proved to be surprisingly elusive
[8, 12, 2]. Recently, two contrasting ideas have offered practical relaxation operators. Zucker et al [13] have sought biologically plausible operators which draw on
the idea of computing a global curve organisation potential and locating consistent
structure using a form of local snake dynamics [11]. In essence this biologically
inspired model delivers a fine arrangement of local splines that minimise the curve
organisation potential. Hancock and Kittler [2], on the other hand, appealed to a
more information theoretic motivation [4]. In an attempt to overcome some of the
well documented limitations of the original Rosenfeld, Hummel and Zucker relaxation operator [9] they have developed a Bayesian framework for relaxation labelling
[4]. Of particular significance for the low-level curve enhancement problem is the underlying statistical framework which makes a clear-cut distinction between the roles
of uncertain image data and prior knowledge of contour structure. This framework
has allowed the output of local image operators to be represented in terms of Gaussian measurement densities, while curve structure is represented by a dictionary of
consistent contour structures [2].
While both the fine-spline coverings of Zucker [13] and the dictionary-based relaxation operator of Hancock and Kittler [2] have delivered practical solutions to
the curve reinforcement problem, they each suffer a number of shortcomings. For
instance, although the fine spline operator can achieve quasi-global curve organisation, it is based on an essentially ad hoc local compatibility model. While being
more information theoretic, the dictionary-based relaxation operator is limited by
virtue of the fact that in most practical applications the dictionary can only realistically be evaluated over at most a 3x3 pixel neighbourhood. Our aim in this paper
is to bridge the methodological divide between the biologically inspired fine-spline
operator and the statistical framework of dictionary-based relaxation. We develop
an iterative spline fitting process using the EM algorithm of Dempster et al [1] .
In doing this we retain the statistical framework for representing filter responses
that has been used to great effect in the initialisation of dictionary-based relaxation. However, we overcome the limited contour representation of the dictionary by
drawing on local cubic splines.
2 Prerequisites
The practical goal in this paper is the detection of line-features which manifest
themselves as intensity ridges of variable width in raw image data. Each pixel
is characterised by a vector of measurements, z_i, where i is the pixel-index. This
measurement vector is computed by applying a battery of line-detection filters of
various widths and orientations to the raw image. Suppose that the image data
is indexed by the pixel-set I. Associated with each image pixel is a cubic spline
parameterisation which represents the best-fit contour that couples it to adjacent
feature pixels. The spline is represented by a vector of parameters denoted by
q_i = (q_i^0, q_i^1, q_i^2, q_i^3)^T. Let (x_i, y_i) represent the position co-ordinates of the pixel
indexed i. The spline variable s_{i,j} = x_i - x_j associated with the contour connecting
the pixel indexed j is the horizontal displacement between the pixels indexed i and
j. We can write the cubic spline as an inner product F(s_{i,j}, q_i) = q_i^T s_{i,j} where
s_{i,j} = (1, s_{i,j}, s_{i,j}², s_{i,j}³)^T. Central to our EM algorithm will be the comparison of
the predicted vertical spline displacement with its measured value r_{i,j} = y_i - y_j.
In order to initialise the EM algorithm, we require a set of initial spline probabilities,
which we denote by π(q_i^{(0)}). Here we use the multi-channel combination model
recently reported by Leite and Hancock [5] to compute an initial multi-scale line-feature probability. Accordingly, if Σ is the variance-covariance matrix for the
components of the filter bank, then
π(q_i^{(0)}) = 1 - exp[ -½ z_i^T Σ^{-1} z_i ]    (1)
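As a small illustration of Eq. 1 as reconstructed above, the following fragment computes the initial line-feature probability for one pixel; the function name and the assumption that the response vector and covariance are supplied directly are ours.

import numpy as np

def initial_spline_probability(z, Sigma):
    # z: filter-bank response vector for one pixel; Sigma: variance-covariance matrix of the bank
    mahalanobis = float(z @ np.linalg.solve(Sigma, z))
    return 1.0 - np.exp(-0.5 * mahalanobis)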
The remainder of this paper outlines how these initial probabilities are iteratively refined using the EM algorithm. Because space is limited we only provide an algorithm
sketch. Essential algorithm details such as the estimation of spline orientation and
the local receptive gating of the spline probabilities are omitted for clarity. Full
details can be found in a forthcoming journal article [6].
3 Expectation
Our basic model of the spline organisation process is as follows . Associated with
each image pixel is a spline parameterisation. Key to our philosophy of exploiting
a mixture model to describe the global contour structure of the image is the idea
that the pixel indexed i can associate to each of the putative splines residing in a
local Gaussian window N i . We commence by developing a mixture model for the
conditional probability density for the filter response z_i given the current global
spline description. If Φ^{(n)} = {q_i^{(n)}, ∀i ∈ I} is the global spline description at
iteration n of the EM process, then we can expand the mixture distribution over a
set of putative splines that may associate with the image pixel indexed i
p(z_i | Φ^{(n)}) = Σ_{j∈N_i} p(z_i | q_j^{(n)}) π(q_j^{(n)})    (2)
The components of the above mixture density are the conditional measurement
densities p(z_i | q_j^{(n)}) and the spline mixing proportions π(q_j^{(n)}). The conditional
measurement densities represent the likelihood that the datum z_i originates from the
spline centred on pixel j. The mixing proportions, on the other hand, represent the
fractional contribution to the data arising from the jth parameter vector, i.e. q_j^{(n)}.
Since we are interested in the maximum likelihood estimation of spline parameters,
we turn our attention to the likelihood of the raw data, i.e.
P(z_i, ∀i ∈ I | Φ^{(n)}) = Π_{i∈I} p(z_i | Φ^{(n)})    (3)
The expectation step of the EM algorithm is aimed at estimating the log-likelihood
using the parameters of the mixture distribution. In other words, we need to average the likelihood over the space of potential pixel-spline assignments. In fact,
it was Dempster, Laird and Rubin [1] who observed that maximising the weighted
log-likelihood was equivalent to maximising the conditional expectation of the likelihood for a new parameter set given an old parameter set. For our spline fitting
problem, maximisation of the expectation of the conditional likelihood is equivalent
to maximising the weighted log-likelihood function
Q(Φ^{(n+1)} | Φ^{(n)}) = Σ_{i∈I} Σ_{j∈N_i} p(q_j^{(n)} | z_i) ln p(z_i | q_j^{(n+1)})    (4)
The a posteriori probabilities p(q_j^{(n)} | z_i) may be computed from the corresponding
components of the mixture density p(z_i | q_j^{(n)}) using the Bayes formula
p(q_j^{(n)} | z_i) = p(z_i | q_j^{(n)}) π(q_j^{(n)}) / Σ_{k∈N_i} p(z_i | q_k^{(n)}) π(q_k^{(n)})    (5)
For notational convenience, and to make the weighting role of the a posteriori probabilities explicit, we use the shorthand w_{i,j}^{(n)} = p(q_j^{(n)} | z_i). Once updated parameter
estimates q_i^{(n+1)} become available through the maximisation of this criterion, improved estimates of the mixture components may be obtained by substitution into
equation (6). The updated mixing proportions, π(q_i^{(n+1)}), required to determine
the new weights w_{i,j}^{(n+1)}, are computed from the newly available density components
using the following estimator
π(q_i^{(n+1)}) = Σ_{j∈N_i} [ p(z_j | q_i^{(n)}) π(q_i^{(n)}) / Σ_{k∈I} p(z_j | q_k^{(n)}) π(q_k^{(n)}) ]    (6)
In order to proceed with the development of a spline fitting process we require
a model for the mixture components, i.e. p(z_i | q_j^{(n)}). Here we assume that the
required model can be specified in terms of Gaussian distribution functions. In
other words, we confine our attention to Gaussian mixtures. The physical variable
of these distributions is the squared error residual for the position prediction of the
ith datum delivered by the jth spline. Accordingly we write
p(z_i | q_j^{(n)}) = α exp[ -β (r_{i,j} - F(s_{i,j}, q_j^{(n)}))² ]    (7)
where β is the inverse variance of the fit residuals. Rather than estimating β, we
use it in the spirit of a control variable to regulate the effect of fit residuals.
Equations (5), (6) and (11) therefore specify a recursive procedure that iterates the
weighted residuals to compute new mixing proportions based on the quality of
the spline fit.
4 Maximisation
The maximisation step aims to optimize the quantity Q(Φ^{(n+1)} | Φ^{(n)}) with respect
to the spline parameters. Formally this corresponds to finding the set of spline
parameters which satisfy the condition
(8)
We find a local approximation to this condition by solving the following set of linear
equations
∂Q(Φ^{(n+1)} | Φ^{(n)}) / ∂(q_i^k)^{(n+1)} = 0    (9)
for each spline parameter (q_i^k)^{(n+1)} in turn, i.e. for k = 0, 1, 2, 3. Recovery of the
splines is most conveniently expressed in terms of the following matrix equation for
the components of the parameter-vector q_i
A_i^{(n)} q_i^{(n+1)} = x_i^{(n)}    (10)
The elements of the vector x_i^{(n)} are weighted cross-moments between the parallel
and perpendicular spline distances in the Gaussian window, i.e.
[x_i^{(n)}]_k = Σ_{j∈N_i} w_{i,j}^{(n)} r_{i,j} s_{i,j}^{k-1}    (11)
The elements of the matrix A_i^{(n)}, on the other hand, are weighted moments computed purely in terms of the parallel distance s_{i,j}. If k and l are the row and column
indices, then the (k, l)th element of the matrix A_i^{(n)} is
[A_i^{(n)}]_{k,l} = Σ_{j∈N_i} w_{i,j}^{(n)} s_{i,j}^{k+l-2}    (12)
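Drawing Eqs. 5-12 together, the sketch below shows one possible expectation/maximisation pass over the splines; the dictionary-based data structures are our own simplification, and the refresh of the mixing proportions (Eq. 6) is omitted for brevity.

import numpy as np

def em_spline_step(q, prior, s, r, neighbours, beta=1.0):
    # q: (P, 4) current spline parameters; prior: (P,) mixing proportions pi(q_j)
    # s, r: dicts (i, j) -> horizontal / vertical displacements; neighbours: dict i -> list of j
    P = q.shape[0]
    w = {}
    for i in range(P):                      # E-step: posterior weights over the Gaussian window (Eq. 5)
        lik = np.array([prior[j] * np.exp(-beta * (r[i, j] - np.polyval(q[j][::-1], s[i, j])) ** 2)
                        for j in neighbours[i]])
        w[i] = lik / max(lik.sum(), 1e-12)
    q_new = np.zeros_like(q)
    for i in range(P):                      # M-step: weighted least squares A_i q_i = x_i (Eqs. 10-12)
        A = np.zeros((4, 4))
        x = np.zeros(4)
        for w_ij, j in zip(w[i], neighbours[i]):
            powers = np.array([s[i, j] ** k for k in range(4)])
            A += w_ij * np.outer(powers, powers)   # [A_i]_kl = sum_j w_ij s_ij^(k+l-2)
            x += w_ij * r[i, j] * powers           # [x_i]_k  = sum_j w_ij r_ij s_ij^(k-1)
        q_new[i] = np.linalg.lstsq(A, x, rcond=None)[0]
    return q_new, w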
5 Experiments
We have evaluated our iterative spline fitting algorithm on the detection of line-features in aerial infra-red images. Figure 1a shows the original picture. The initial
feature probabilities (i.e. π(q_i^{(0)})) assigned according to equation (1) are shown
in Figure 1b. Figure 1c shows the final contour-map after the EM algorithm has
converged. Notice that the line contours exhibit good connectivity and that the
junctions are well reconstructed. We have highlighted a subregion of the original
image. There are two features in this subregion to which we would like to draw
attention. The first of these is centred on the junction structure. The second
feature is a neighbouring point on the descending branch of the road.
Figure 2 shows the iterative evolution of the cubic splines at these two locations.
The spline shown in Figure 2a adjusts to fit the upper pair of road segments. Notice
also that although initially displaced, the final spline passes directly through the
junction. In the case of the descending road-branch the spline shown in Figure 2b
recovers from an initially poor orientation estimate to align itself with the underlying
road structure. Figure 2c shows how the spline probabilities (i.e. π(q_i^{(n)})) evolve
with iteration number. Initially, the neighbourhood is highly ambiguous. Many
neighbouring splines compete to account for the local image structure. As a result
the detected junction is several pixels wide. However, as the fitting process iterates,
the splines move from the inconsistent initial estimate to give a good local estimate
which is less ambiguous. In other words the two splines illustrated in Figure 2 have
successfully arranged themselves to account for the junction structure.
(a) Original image.
(b) Probability map.
(c) Line map.
Figure 1: Infra-red aerial picture with corresponding probability map showing region
containing pixel under study and correspondent line map.
6 Conclusions
We have demonstrated how the process of parallel iterative contour refinement can
be realised using the classical EM algorithm of Dempster, Laird and Rubin [1].
The refinement of curves by relaxation operations has been a preoccupation in the
literature since the seminal work of Rosenfeld, Hummel and Zucker [9]. However,
it is only recently that successful algorithms have been developed by appealing
to more sophisticated modelling methodologies [13, 2]. Our EM approach not
only delivers comparable performance, it does so using a very simple underlying
model. Moreover, it allows the contour re-enforcement process to be understood in
a weighted least-squares optimisation framework which has many features in common with snake dynamics [11] without being sensitive on the initial positioning of
control points. Viewed from the perspective of classical relaxation labelling [9, 4],
the EM framework provides a natural way of evaluating support beyond the immediate object neighbourhood. Moreover, the framework for spline fitting in 2D is
readily extendible to the reconstruction of surface patches in 3D [10].
References
[1] Dempster A., Laird N. and Rubin D., "Maximum-likelihood from incomplete data
via the EM algorithm", J. Royal Statistical Soc. Ser. B (methodological) ,39, pp 1-38,
1977.
[2] Hancock E.R and Kittler J., "Edge Labelling using Dictionary-based Probabilistic
Relaxation", IEEE PAMI, 12, pp. 161-185, 1990.
[3] Jordan M.I. and Jacobs R.A., "Hierarchical Mixtures of Experts and the EM Algorithm", Neural Computation, 6, pp. 181-214, 1994.
[4] Kittler J. and Hancock, E.R, "Combining Evidence in Probabilistic relaxation",
International Journal of Pattern Recognition and Artificial Intelligence, 3, N1, pp
29-51, 1989.
[5] Leite J.A.F. and Hancock, E.R, " Statistically Combining and Refining Multichannel
Information" , Progress in Image Analysis and Processing III: Edited by S Impedovo,
World Scientific, pp . 193-200, 1994.
[6] Leite J.A.F. and Hancock, E .R, "Iterative curve organisation with the EM algorithm", to appear in Pattern Recognition Letters, 1997.
[Figure 2 here: spline evolution frames (i)-(xi) for panels (a) and (b), and the corresponding probability plots in (c).]
Figure 2: Evolution of the spline in the fitting process. The image in (a) is the
junction spline while the image in (b) is the branch spline. The first spline is shown
in (i), and the subsequent ones from (ii) to (xi). The evolution of the corresponding
spline probabilities is shown in (c).
[7] Meer P., Mintz D., Rosenfeld A . and Kim D.Y., "Robust Regression Methods for
Computer Vision - A Review", International Journal of Computer vision, 6, pp.
59- 70, 1991.
[8] Peleg S. and Rosenfeld A" "Determining Compatibility Coefficients for curve enhancement relaxation processes", IEEE SMC, 8, pp. 548-555, 1978.
[9] Rosenfeld A., Hummel R.A. and Zucker S.W., "Scene labelling by relaxation operations", IEEE Transactions SMC, SMC-6, pp400-433, 1976.
[10] Sander P.T. and Zucker S.W ., "Inferring surface structure and differential structure
from 3D images" , IEEE PAMI, 12, pp 833-854, 1990.
[11] Terzopoulos D. , "Regularisation of inverse problems involving discontinuities" , IEEE
PAMI, 8, pp 129-139, 1986.
[12] Zucker, S.W., Hummel R.A., and Rosenfeld A., "An application ofrelaxation labelling
to line and curve enhancement", IEEE TC, C-26, pp. 394-403, 1977.
[13] Zucker S., David C., Dobbins A. and Iverson L., "The organisation of curve detection: coarse tangent fields and fine spline coverings", Proceedings of the Second
International Conference on Computer Vision, pp. 577-586, 1988.
320 | 1,292 | Why did TD-Gammon Work?
Jordan B. Pollack & Alan D. Blair
Computer Science Department
Brandeis University
Waltham, MA 02254
{pollack,blair} @cs.brandeis.edu
Abstract
Although TD-Gammon is one of the major successes in machine learning, it has not led to similar impressive breakthroughs in temporal difference learning for other applications or even other games. We were
able to replicate some of the success of TD-Gammon, developing a
competitive evaluation function on a 4000 parameter feed-forward neural network, without using back-propagation, reinforcement or temporal
difference learning methods. Instead we apply simple hill-climbing in a
relative fitness environment. These results and further analysis suggest
that the surprising success of Tesauro's program had more to do with the
co-evolutionary structure of the learning task and the dynamics of the
backgammon game itself.
1 INTRODUCTION
It took great chutzpah for Gerald Tesauro to start wasting computer cycles on temporal
difference learning in the game of Backgammon (Tesauro, 1992). Letting a machine learning program play itself in the hopes of becoming an expert, indeed! After all, the dream of
computers mastering a domain by self-play or "introspection" had been around since the
early days of AI, forming part of Samuel's checker player (Samuel, 1959) and used in
Donald Michie's MENACE tic-tac-toe learner (Michie, 1961). However such self-conditioning systems, with weak or non-existent internal representations, had generally been
fraught with problems of scale and abandoned by the field of AI. Moreover, self-playing
learners usually develop eccentric and brittle strategies which allow them to draw each
other, yet play poorly against humans and other programs.
Yet Tesauro's 1992 result showed that this self-play approach could be powerful, and after
some refinement and millions of iterations of self-play, his TD-Gammon program has
become one of the best backgammon players in the world (Tesauro, 1995). His derived
weights are viewed by his corporation as significant enough intellectual property to keep
as a trade secret, except to leverage sales of their minority operating system (International
Business Machines, 1995). Others have replicated this TD result both for research purposes (Boyan, 1992) and commercial purposes.
With respect to the goal of a self-organizing learning machine which starts from a minimal
specification and rises to great sophistication, TD-Gammon stands alone. How is its success to be understood, explained, and replicated in other domains? Is TD-Gammon unbridled good news about the reinforcement learning method?
Our hypothesis is that the success of TD-Gammon is not due to the back-propagation, reinforcement, or temporal-difference technologies, but to an inherent bias from the dynamics
of the game of backgammon, and the co-evolutionary setup of the training, by which the
task dynamically changes as the learning progresses. We test this hypothesis by using a
much simpler co-evolutionary learning method for backgammon - namely hill-climbing.
2 SETUP
We use a standard feedforward neural network with two layers and the sigmoid function,
set up in the same fashion as Tesauro with 4 units to represent the number of each player's
pieces on each of the 24 points, plus 2 units each to indicate how many are on the bar and
off the board. In addition, we added one more unit which reports whether or not the game
has reached the endgame or "race" situation, making a total of 197 input units. These are
fully connected to 20 hidden units, which are then connected to one output unit that judges
the position. Including bias on the hidden units, this makes a total of 3980 weights. The
game is played by generating all legal moves, converting them into the proper network
input, and picking the position judged as best by the network. We started with all weights
set to zero.
Our initial algorithm was hillclimbing:
1. add gaussian noise to the weights
2. play the network against the mutant for a number of games
3. if the mutant wins more than half the games, select it for the next generation.
The noise was set so each step would have a 0.05 RMS distance (which is the euclidean
distance divided by √3980).
Surprisingly, this worked reasonably well! The networks so evolved improved rapidly at
first, but then sank into mediocrity. The problem we perceived is that comparing two close
backgammon players is like tossing a biased coin repeatedly: it may take dozens or even
hundreds of games to find out for sure which of them is better. Replacing a well-tested
champion is dangerous without enough information to prove the challenger is really a better player and not just a lucky novice. Rather than burden the system with so much computation, we instead introduced the following modifications to the algorithm to avoid this
"Buster Douglas Effect":
Firstly, the games are played in pairs, with the order of play reversed and the same random
seed used to generate the dice rolls for both games. This washes out some of the unfairness due to the dice rolls when the two networks are very close - in particular, if they were
identical, the result would always be one win each. Secondly, when the challenger wins
the contest, rather than just replacing the champion by the challenger, we instead make
only a small adjustment in that direction:
champion := 0.95*champion + 0.05*challenger
This idea, similar to the "inertia" term in back-propagation, was introduced on the
assumption that small changes in weights would lead to small changes in decision-making
by the evaluation function. So, by preserving most of the current champion's decisions,
we would be less likely to have a catastrophic replacement of the champion by a lucky
novice challenger.
In the initial stages of evolution, two pairs of parallel games were played and the challenger was required to win 3 out of 4 of these games.
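The initial hill-climbing scheme can be summarised by the sketch below; play_pair(a, b, seed), returning a's wins from two games with mirrored colours and identical dice, is an assumed interface to the game engine rather than code from the paper.

import numpy as np

def mutate(weights, rms=0.05, rng=None):
    rng = rng or np.random.default_rng()
    return weights + rng.normal(scale=rms, size=weights.shape)   # roughly a 0.05 RMS step

def evolve(champion, play_pair, generations, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(generations):
        challenger = mutate(champion, rng=rng)
        # two pairs of games -> four games; each pair shares a dice seed with colours reversed
        wins = sum(play_pair(challenger, champion, seed=int(rng.integers(1 << 30))) for _ in range(2))
        if wins >= 3:                                            # challenger must take 3 of 4
            champion = 0.95 * champion + 0.05 * challenger       # conservative blend, not replacement
    return champion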
[Figure 1 here: plot of percentage losses (y-axis, roughly 25-50) against Generation (x 10^3) (x-axis, 5-35).]
Figure 1. Percentage of losses of our first 35,000 generation players
against PUBEVAL. Each match consisted of 200 games.
Figure 1 shows the first 35,000 players rated against PUBEVAL, a strong public-domain
player trained by Tesauro using human expert preferences. There are three things to note:
(1) the percentage of losses against PUBEVAL falls from 100% to about 67% by 20,000
generations, (2) the frequency of successful challengers increases over time as the player
improves, and (3) there are epochs (e.g. starting at 20,000) where the performance against
PUBEVAL begins to falter. The first fact shows that our simple self-playing hill-climber is
capable of learning. The second fact is quite counter-intuitive - we expected that as the
player improved, it would be harder to challenge it! This is true with respect to a uniform
sampling of the 4000 dimensional weight space, but not true for a sampling in the neighborhood of a given player: once the player is in a good part of weight space, small changes
in weights can lead to mostly similar strategies, ones which make mostly the same moves
in the same situations. However, because of the few games we were using to determine
relative fitness, this increased frequency of change allows the system to drift, which may
account for the subsequent degrading of performance.
To counteract the drift, we decided to change the rules of engagement as the evolution proceeds according to the following "annealing schedule": after 10,000 generations, the number of games that the challenger is required to win was increased from 3 out of 4 to 5 out
of 6; after 70,000 generations, it was further increased to 7 out of 8. The numbers 10,000
and 70,000 were chosen on an ad hoc basis from observing the frequency of successful
challenges.
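The schedule itself is easy to encode; the helper below simply reproduces the thresholds quoted above and is only an illustration.

```python
def contest_rule(generation):
    """Games the challenger must win, following the ad hoc annealing schedule:
    3 of 4 at first, 5 of 6 after 10,000 generations, 7 of 8 after 70,000."""
    if generation < 10_000:
        return 3, 4
    if generation < 70_000:
        return 5, 6
    return 7, 8

# e.g. contest_rule(25_000) -> (5, 6): win 5 of the 6 paired games to be adopted.
```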
After 100,000 games, we have developed a surprisingly strong player, capable of winning
40% of the games against PUBEVAL. The networks were sampled every 100 generations
in order to test their performance. Networks at generation 1,000, 10,000 and 100,000 were
extracted and used as benchmarks. Figure 2 shows the percentage of losses of the sampled
players against the three benchmark networks. Note that the three curves cross the 50%
line at 1, 10, and 100, respectively and show a general improvement over time.
The end-game of backgammon, called the "bear-off," can be used as another yardstick of
the progress of learning. The bear-off occurs when all of a player's pieces are in the
player's home, or first 6 points, and then the dice rolls can be used to remove pieces. We
set up a racing board with two pieces on each player's 1 through 7 point and one piece on
the 8 point, and played a player against itself 200 games, averaging the number of rolls.
We found a monotonic improvement, from 22 to less than 19 rolls, over the 100,000 generations. PUBEVAL scored 16.6 on this task.
[Figure 2. Percentage of losses against benchmark networks at generation 1,000 (lower), 10,000 (middle) and 100,000 (upper). (Horizontal axis: generation x 10^3, 20 to 100.)]
3 DISCUSSION
3.1 Machine Learning and Evolution
We believe that our evidence of success in learning backgammon using simple hillclimbing indicates that the reinforcement and temporal difference methodology used by Tesauro
in TD-Gammon was non-essential for its success. Rather, the success came from the setup
of co-evolutionary self-play biased by the dynamics of backgammon. Our result is thus
similar to the bias found by Mitchell, Crutchfield & Hraber in Packard's evolution of cellular automata to the "edge of chaos" (Packard, 1988, Mitchell et al., 1993).
TD-Gammon is a major milestone for a kind of evolutionary machine learning in which
the initial specification of model is far simpler than expected because the learning environment is specified implicitly, and emerges as a result of the co-evolution between a learning
system and its training environment: The learner is embedded in an environment which
responds to its own improvements in a never-ending spiral. While this effect has been seen
in population models, it is completely unexpected for a "1 +1" hillclimbing evolution.
Co-evolution was explored by Hillis (Hillis, 1992) on the sorting problem, by Angeline &
Pollack (Angeline and Pollack, 1994) on genetically programmed tic-tac-toe players, on
predator/prey games, e.g. (Cliff and Miller, 1995, Reynolds, 1994), and by Juille & Pollack on the intertwined spirals problem (Juille and Pollack, 1995). Rosin & Belew applied
competitive fitness to several games (Rosin and Belew, 1995). However, besides
Tesauro's TD-Gammon, which has not to date been viewed as an instance of co-evolutionary learning, Sims' artificial robot game (Sims, 1994) is the only other domain as complex
as Backgammon to have had substantial success.
3.2 Learnability and Unlearnability
Learnability can be formally defined as a time constraint over a search space. How hard is
it to randomly pick 4000 floating-point weights to make a good backgammon evaluator? It
is simply impossible. How hard is it to find weights better than the current set? Initially,
when all weights are random, it is quite easy. As the playing improves, we would expect it
to get harder and harder, perhaps similar to the probability of a tornado constructing a 747
out of a junkyard. However, if we search in the neighborhood of the current weights, we
will find many players which make mostly the same moves but which can capitalize on
each other's slightly different choices and exposed weaknesses in a tournament.
Although the setting of parameters in our initial runs involved some guesswork, now that
we have a large set of "players" to examine, we can try to understand the phenomenon.
Taking the 1,000th, 10,000th, and 100,000th champions from our run, we sampled random
players in their neighborhoods at different RMS distances to find out how likely is it to
find a winning challenger. We took 1000 random neighbors at each of 11 different RMS
distances, and played them 8 games against the corresponding champion.
[Figure 3. Distance (RMS distance from the champion) versus probability of a random challenger winning against the champions at generation 1,000, 10,000 and 100,000.]
Figure 3 plots
the average number of games won against the three champions in the range of neighborhoods. This graph demonstrates that as the players improve over time, the probability of
finding good challengers in their neighborhood increases. This accounts for why the frequency of successful challenges goes up. Each successive challenger is only required to
take the small step of changing a few moves of the champion in order to beat it. Therefore,
under co-evolution what was apparently unlearnable becomes learnable as we convert
from a single question to a continuous stream of questions, each one dependent on the previous answer.
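The neighborhood-sampling experiment behind Figure 3 can be summarized in a short sketch; again, `play_match` is a stand-in for the actual game simulator and is not part of the paper.

```python
import numpy as np

def neighbor_win_rate(champion, rms_distance, n_neighbors, n_games, play_match, rng):
    """Estimate how likely a random challenger at a given RMS distance is to win a
    game against the champion.  `play_match(challenger, champion, n_games)` should
    return the challenger's number of wins."""
    wins = 0
    for _ in range(n_neighbors):
        noise = rng.normal(size=champion.shape)
        noise *= rms_distance * np.sqrt(champion.size) / np.linalg.norm(noise)
        wins += play_match(champion + noise, champion, n_games)
    return wins / (n_neighbors * n_games)

# The sweep behind Figure 3: 1000 neighbors at each of 11 RMS distances,
# 8 games per neighbor, repeated for the 1,000th, 10,000th and 100,000th champions.
```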
3.3 Avoiding Mediocre Stable States
In general, the problem with learning through self-play is that the player could keep playing the same kinds of games over and over, only exploring some narrow region of the
strategy space, missing out on critical areas of the game where it could then be vulnerable
to other programs or human experts. Such a learning system might declare success when
in reality it has simply converged to a "mediocre stable state" of continual draws or a long
term cooperation which merely mimics competition. Such a state can arise in human education systems, where the student gets all the answers right and rewards the teacher with
positive feedback for not asking harder questions.
The problem is particularly prevalent in self-play for deterministic games such as chess or
tic-tac-toe. We have worked on using a population to get around it (Angeline and
Pollack, 1994). Schraudolph et al., 1994 added non-determinism to the game of Go by
choosing moves according to the Boltzmann distribution of statistical mechanics. Others,
such as Fogel, 1993, expanded exploration by forcing initial moves. Epstein, 1994, has
studied a mix of training using self-play, random testing, and playing against an expert
in order to better understand this phenomenon.
We are not suggesting that 1+1 hillclimbing is an advanced machine learning technique
which others should bring to many tasks. Without internal cognition about an opponent's
behavior, co-evolution usually requires a population. Therefore, there must be something
about the dynamics of backgammon itself which is helpful because it permitted both TD
learning and hill-climbing to succeed where they would clearly fail on other tasks and in
other games of this scale. If we can understand why the backgammon domain led to successful acquisition of expert strategies from random initial conditions, we might be able to
re-cast other domains in its image.
Tesauro, 1992 pointed out some of the features of Backgammon that make it suitable for
approaches involving self-play and random initial conditions. Unlike chess, a draw is
impossible and a game played by an untrained network making random moves will eventually terminate (though it may take much longer than a game between competent players). Moreover the randomness of the dice rolls leads self-play into a much larger part of
the search space than it would be likely to explore in a deterministic game.
We believe it is not simply the dice rolls which overcome the problems of self-learning.
Others have tried to add randomness to deterministic games and have not generally met
with success. There is something critical about the dynamics of backgammon which sets
its apart from other games with random elements like Monopoly. Namely, that the outcome of the game continues to be uncertain until all contact is broken and one side has a
clear advantage. What many observers find exciting about backgammon, and what helps a
novice sometimes overcome an expert, is the number of situations where one dice roll, or
an improbable sequence, can dramatically reverse which player is expected to win.
A learning system can be viewed as a meta-game between teacher and student, which are
identical in a self-play situation. The teacher's goal is to expose the student's mistakes,
while the student's goal is to placate the teacher and avoid such exposure. A mediocre stable state for a self-learning system can be seen as an equilibrium situation in this metagame. A player which learns to repeatedly draw itself will have found a meta-game equilibrium and stop learning. If draws are not allowed, it may still be possible for a self-playing learner to collude with itself - to simulate competition while actually cooperating
(Angeline, 1994). For example, if slightly suboptimal moves would allow a player to
"throw" a game, a player under self-play could find a meta-game equilibrium by alternately throwing games to itself! Our hypothesis is that the dynamics of backgammon discussed above actively prevent this sort of collusion from forming in the meta-game of
self-learning.
4 CONCLUSIONS
Tesauro's 1992 result beat Sun's Gammontool and achieved parity against his own Neurogammon 1.0, trained on expert knowledge. Neither of these is available. Following the
1992 paper on TD-learning, he incorporated a number of hand-crafted expert-knowledge
features, eventually producing a network which achieved world master level play
(Tesauro, 1995). These features included concepts like existence of a prime, probability of
blots being hit, and probability of escape from behind the opponent's barrier. Our best
players win about 45% against PUBEVAL which was trained using "comparison
training"(Tesauro, 1989). Therefore we believe our players achieve approximately the
same power as Tesauro's 1992 results, without any advanced learning algorithms. We do
not claim that our 100,000 generation player is as good as TD-Gammon, ready to challenge the best humans, just that it is surprisingly good considering its humble origins from
hill-climbing with a relative fitness measure. Tuning our parameters or adding more input
features would make more powerful players, but that is not the point of this study.
TD-Gammon remains a tremendous success in Machine Learning, but the causes for its
success have not been well understood. Replicating some of TD-Gammon's success under
a much simpler learning paradigm, we find that the primary cause for success must be the
dynamics of backgammon combined with the power of co-evolutionary learning. If we
can isolate the features of the backgammon domain which enable evolutionary learning to
work so well, it may lead to a better understanding of the conditions necessary, in general,
for complex self-organization.
Acknowledgments
This work is supported by ONR grant NOOOI4-96-1-0418 and a Krasnow Foundation
Postdoctoral fellowship. Thanks to Gerry Tesauro for providing PUBEVAL and subsequent means to calibrate it, Jack Laurence and Pablo Funes for development of the WWW
front end to our evolved player. Interested players can challenge our evolved network
using a web browser through our home page at: http://www.demo.cs.brandeis.edu
References
Angeline, P. J. (1994). An alternate interpretation of the iterated prisoner's dilemma and
the evolution of non-mutual cooperation. In Brooks, R. and Maes, P., editors, Proceedings
4th Artificial Life Conference, pages 353-358. MIT Press.
Angeline, P. J. and Pollack, J. B. (1994). Competitive environments evolve better solutions for complex tasks. In Forrest, S., editor, Genetic Algorithms: Proceedings of the
Fifth Inter national Conference.
Boyan, J. A. (1992). Modular neural networks for learning context-dependent game strategies. Master's thesis, Computer Speech and Language Processing, Cambridge University.
Cliff, D. and Miller, G. (1995). Tracking the red queen: Measurements of adaptive
progress in co-evolutionary simulations. In Third European Conference on Artificial Life,
pages 200-218.
Hillis, D. (1992). Co-evolving parasites improves simulated evolution as an optimization
procedure. In Langton, C., Taylor, C., Farmer, J. D., and Rasmussen, S., editors, Artificial Life II.
Addison-Wesley, Reading, MA.
International Business Machines (Sept. 12, 1995). IBM's Family FunPak for OS/2 Warp hits
retail shelves.
Juille, H. and Pollack, J. (1995). Massively parallel genetic programming. In Angeline, P.
and Kinnear, K., editors, Advances in Genetic Programming II. MIT Press, Cambridge.
Michie, D. (1961). Trial and error. In Science Survey, part 2, pages 129-145. Penguin.
Mitchell, M., Hraber, P. T., and Crutchfield, J. P. (1993). Revisiting the edge of chaos:
Evolving cellular automata to perform computations. Complex Systems, 7.
Packard, N. (1988). Adaptation towards the edge of chaos. In Kelso, J. A. S., Mandell,
A. J., and Shlesinger, M. F., editors, Dynamic patterns in complex systems, pages 293-301. World Scientific.
Reynolds, C. (1994). Competition, coevolution, and the game of tag. In Proceedings 4th
Artificial Life Conference. MIT Press.
Rosin, C. D. and Belew, R. K. (1995). Methods for competitive co-evolution: finding
opponents worth beating. In Proceedings of the 6th international conference on Genetic
Algorithms, pages 373-380. Morgan Kaufmann.
Samuel, A. L. (1959). Some studies of machine learning using the game of checkers. IBM
Journal of Research and Development.
Sims, K. (1994). Evolving 3d morphology and behavior by competition. In Brooks, R. and
Maes, P., editors, Proceedings 4th Artificial Life Conference. MIT Press.
Tesauro, G. (1989). Connectionist learning of expert preferences by comparison training.
In Touretzky, D., editor, Advances in Neural Information Processing Systems, volume 1,
pages 99-106, Denver 1988. Morgan Kaufmann, San Mateo.
Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning,
8:257-277.
Tesauro, G. (1995). Temporal difference learning and td-gammon. Communications of the
ACM, 38(3):58-68.
Bayesian Unsupervised Learning of
Higher Order Structure
Michael S. Lewicki
Terrence J. Sejnowski
lewicki@salk.edu
terry@salk.edu
The Salk Institute
Howard Hughes Medical Institute
Computational Neurobiology Lab
10010 N. Torrey Pines Rd.
La Jolla, CA 92037
Abstract
Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for
representing and learning higher order statistical relations among
inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We present an algorithm that efficiently discovers
higher order structure using EM and Gibbs sampling. The model
can be interpreted as a stochastic recurrent network in which ambiguity in lower-level states is resolved through feedback from higher
levels. We demonstrate the performance of the algorithm on benchmark problems.
1
Introduction
Discovering high order structure in patterns is one of the keys to performing complex
recognition and discrimination tasks. Many real world patterns have a hierarchical
underlying structure in which simple features have a higher order structure among
themselves. Because these relationships are often statistical in nature, it is natural
to view the process of discovering such structures as a statistical inference problem
in which a hierarchical model is fit to data.
Hierarchical statistical structure can be conveniently represented with Bayesian
belief networks (Pearl, 1988; Lauritzen and Spiegelhalter, 1988; Neal, 1992). These
models are powerful, because they can capture complex statistical relationships
among the data variables, and also mathematically convenient, because they allow
efficient computation of the joint probability for any given set of model parameters.
The joint probability density of a network of binary states is given by a product of
conditional probabilities
P(S \mid W) = \prod_i P(S_i \mid \mathrm{pa}[S_i], W) \qquad (1)
where W is the weight matrix that parameterizes the model. Note that the probability of an individual state Si depends only on its parents. This probability is
given by
P(S_i = 1 \mid \mathrm{pa}[S_i], W) = h\Big(\sum_j S_j w_{ji}\Big) \qquad (2)
where w_{ji} is the weight from S_j to S_i (w_{ji} = 0 for j < i).
The weights specify a hierarchical prior on the input states, which are the fixed
subset of states at the lowest layer of units. The active parents of state Si represent
the underlying causes of that state. The function h specifies how these causes are
combined to give the probability of Si. We assume h to be the "noisy OR" function,
h(u) = 1 - exp(-u), u >= 0.
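As an illustration of Eqs. (1)-(2), the following short Python sketch (not from the paper) evaluates the noisy-OR conditional probabilities and the joint log probability of a network state; the toy weights are arbitrary, and real networks would also include a bias cause for top-level units.

```python
import numpy as np

def noisy_or(u):
    """h(u) = 1 - exp(-u), u >= 0."""
    return 1.0 - np.exp(-u)

def unit_prob(S, W, i):
    """P(S_i = 1 | pa[S_i], W), Eq. (2): causes act through weight column W[:, i]."""
    return noisy_or(S @ W[:, i])

def log_joint(S, W, eps=1e-12):
    """log P(S | W), Eq. (1): sum of per-unit conditional log probabilities."""
    p = noisy_or(S @ W)
    return float(np.sum(S * np.log(p + eps) + (1.0 - S) * np.log(1.0 - p + eps)))

# Toy network: unit 2 is a higher-level cause of input units 0 and 1
# (weights run only from higher to lower indices, as in the text).
W = np.zeros((3, 3))
W[2, 0], W[2, 1] = 1.5, 0.8
S = np.array([1.0, 0.0, 1.0])
print(unit_prob(S, W, 0))   # P(input unit 0 on | its causes) = 1 - exp(-1.5)
```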
2
Learning Objective
The learning objective is to adapt W to find the most probable explanation of the
input patterns. The probability of the input data is
P(D_{1:N} \mid W) = \prod_n P(D_n \mid W) \qquad (3)
P(D n IW) is computed by marginalizing over all states of the network
P(DnIW)
= L P(DnISk, W)P(SkIW)
(4)
k
Because the number of different states, Sk, is exponential in the number of units,
computing the sum exactly is intractable and must be approximated. The nature
of the learning tasks discussed here, however, allows us to make accurate approximations. A desirable property for representations is that most patterns have just one or a few possible explanations. In this case, all but a few terms P(D_n | S_k, W) will be zero, and, as described below, it becomes feasible to use sampling-based methods which select S_k according to P(S_k | D_n, W).
3
Inferring the Internal Representation
Given the input data, finding its most likely explanation is an inference process.
Although it is simple to calculate the probability of any particular network state,
there is no simple way to determine the most probable state given input D. A
general approach to this problem is Gibbs sampling (Pearl, 1988; Neal, 1992).
In Gibbs sampling, each state Si of the network is updated iteratively according to
the probability of Si given the remaining states in the network. This conditional
probability can be computed using
P(S_i \mid S_j : j \neq i, W) \propto P(S_i \mid \mathrm{pa}[S_i], W) \prod_{j \in \mathrm{ch}[S_i]} P(S_j \mid \mathrm{pa}[S_j], W) \qquad (5)
where ch[S_i] indicates the children of state S_i. In the limit, the ensemble of states obtained by this procedure will be typical samples from P(S | D, W). More generally,
any subset of states can be fixed and the rest sampled.
The Gibbs equations have an interpretation in terms of a stochastic recurrent neural
network in which feedback from higher levels influences the states at lower levels.
For the models defined here, the probability of Si changing state given the remaining
states is
P(S_i \to 1 - S_i \mid S_j : j \neq i, W) = \frac{1}{1 + \exp(-\Delta x_i)} \qquad (6)
The variable Δx_i indicates how much changing the state S_i changes the log probability of the network state:
\Delta x_i = \log h(u_i; 1 - S_i) - \log h(u_i; S_i) + \sum_{j \in \mathrm{ch}[S_i]} \big[ \log h(u_j + b_{ij}; S_j) - \log h(u_j; S_j) \big] \qquad (7)
where h(u; a) = h(u) if a = 1 and 1 - h(u) if a = 0. The variable u_i is the causal input to S_i, given by \sum_k S_k w_{ki}. The variable b_{ij} specifies the change in u_j for a change in S_i: b_{ij} = +w_{ij} if S_i = 0 and -w_{ij} if S_i = 1.
The first two terms in (7) can be interpreted as the feedback from higher levels. The
sum can be interpreted as the feedforward input from the children of Si. Feedback
allows the lower level units to use information only computable at higher levels. The
feedforward terms typically dominate the expression, but the feedback becomes the
determining factor when the feedforward input is ambiguous.
For general distributions, Gibbs sampling can require many samples to achieve a
representative sample. But if there is little ambiguity in the internal representation,
as is the goal, Gibbs sampling can be as efficient as a single feedforward pass. One
potential problem is that Gibbs sampling will not work before the weights have
adapted, when the representations are highly ambiguous. We show below, however,
that it is not necessary to sample for long periods in order for good representations
to be learned. As learning proceeds, the internal representations obtained with
limited Gibbs sampling become increasingly accurate.
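A minimal sketch of one Gibbs sweep is given below (not from the paper). For clarity it computes Δx_i as the change in the full log joint rather than with the local terms of Eqs. (6)-(7); the two are equivalent. The toy weights and the clamping pattern are arbitrary.

```python
import numpy as np

def gibbs_sweep(S, W, clamped, rng, eps=1e-12):
    """One pass of Gibbs sampling over the unclamped units: each unit is flipped
    with probability 1 / (1 + exp(-dx_i)), where dx_i is the change in the log
    joint probability caused by flipping it."""
    def log_joint(state):
        p = 1.0 - np.exp(-(state @ W))              # noisy-OR unit probabilities
        return np.sum(state * np.log(p + eps) + (1 - state) * np.log(1 - p + eps))

    for i in rng.permutation(len(S)):
        if clamped[i]:
            continue                                 # input units stay fixed to the data
        flipped = S.copy()
        flipped[i] = 1.0 - flipped[i]
        dx = log_joint(flipped) - log_joint(S)
        if rng.random() < 1.0 / (1.0 + np.exp(-dx)):
            S = flipped
    return S

# Toy run: two input units clamped on, one free hidden cause that explains them.
rng = np.random.default_rng(0)
W = np.zeros((3, 3)); W[2, 0], W[2, 1] = 1.5, 0.8
S = np.array([1.0, 1.0, 0.0])
print(gibbs_sweep(S, W, clamped=np.array([True, True, False]), rng=rng))
```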
4
Adapting the Weights
The complexity of the model is controlled by placing a prior on the weights. For
the form of the noisy OR function in which all weights are constrained to be positive, we assume the prior to be the product of independent gamma distributions
parameterized by α and β. The objective function becomes
C = P(D_{1:N} \mid W)\, P(W \mid \alpha, \beta) \qquad (8)
A simple and efficient EM-type formula for adapting the weights can be derived by setting \partial C / \partial w_{ij} to zero and solving for w_{ij}. Using the transformations f_{ij} = 1 - \exp(-w_{ij}) and g_i = 1 - \exp(-u_i), we obtain the re-estimation formula (9),
where s(n) is the state obtained with Gibbs sampling for the nth input pattern.
The variable f_{ij} can be interpreted as the frequency of state S_j given cause S_i. The sum in the above expression is a weighted average of the number of times S_j was active when S_i was active. The ratio f_{ij}/g_j weights each term in the sum inversely according to the number of different causes for S_j. If S_i is the unique cause of S_j, then f_{ij} = g_j and the term has full weight.
A straightforward application of the learning algorithm would adapt all the weights
at the same time. This does not produce good results, however, because there
is nothing to prevent the model from learning overly strong priors. This can be
prevented by adapting the weights in the upper levels after the weights in the lower
levels have stabilized. This allows the higher levels to adapt to structure that is
actually present in the data. We have obtained good results from both the naive
method of adapting the lowest layers first and from more sophisticated methods
where stability was based on how often a unit changed during the Gibbs sampling.
5
Examples
In the following examples, the weight prior was specified with α = 1.0 and β = 1.5.
Weights were set to random values between 0.05 and 0.15. Gibbs sampling was
stopped if the maximum state change probability was less than 0.05 or after 15
sweeps through the units. Weights were reestimated after blocks of 200 patterns.
Each layer of weights was adapted for 10 epochs before adapting the next layer.
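The training schedule just described can be summarized as a driver sketch; `gibbs_infer` and `reestimate_layer` are placeholders for the sampling and weight re-estimation steps, which are not reproduced here, so this is only scaffolding under the settings quoted above.

```python
import numpy as np

# Settings quoted in the text.
ALPHA, BETA = 1.0, 1.5             # gamma prior on the weights
BLOCK = 200                        # patterns per weight re-estimation
EPOCHS_PER_LAYER = 10              # lower layers are adapted first
MAX_SWEEPS, STOP_PROB = 15, 0.05   # Gibbs stopping criteria

def init_weights(shape, rng):
    """Random initial weights between 0.05 and 0.15."""
    return rng.uniform(0.05, 0.15, size=shape)

def train(layers, patterns, gibbs_infer, reestimate_layer, rng):
    """Adapt one layer of weights at a time, lowest first (placeholder callbacks)."""
    for level in range(len(layers)):
        for _ in range(EPOCHS_PER_LAYER):
            for start in range(0, len(patterns), BLOCK):
                block = patterns[start:start + BLOCK]
                states = [gibbs_infer(x, layers, MAX_SWEEPS, STOP_PROB) for x in block]
                layers[level] = reestimate_layer(layers[level], states, ALPHA, BETA)
    return layers
```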
A High Order Lines Problem. The first example illustrates that the algorithm
can discover the underlying features in complicated patterns and that the higher
layers can capture interesting higher order structure. The first dataset is a variant of
the lines problem proposed by Foldiak (1989) . The patterns in the dataset are composed of horizontal and vertical lines as illustrated in figure 1. Note that, although
Figure 1: Dataset for the high order lines problem. (A) Patterns are generated by
selecting one of the pattern types according to the probabilities next to the arrows. Top
patterns are copied to the input. The horizontal and vertical lines on the left are selected
with probability 0.3. (B) Typical input patterns.
the datasets are displayed on a 2-D grid, the network makes no assumptions about
topography. Because the network is fully connected, all spatial arrangements of the
inputs are identical. The weights learned by the network are shown in figure 2.
Figure 2: The weights in a 25-10-5 network after training. Blocks indicate the weights to
each unit. Square size is proportional to the weight values. Second layer units capture the
structure of the horizontal and vertical lines. Third layer units capture the correlations
among the lines. The first unit in the third layer is active when the 'II' is present. The
second, fourth, and fifth units have learned to represent the '+', '=', and '0' respectively,
with the remaining unit acting as a bias.
The Shifter Problem. The shifter problem (Hinton and Sejnowski, 1986), explained in figure 3, is important because the structure that must be discovered is in
the higher order input correlations. This example also illustrates the importance of
allowing high level states to influence low level states to determine the most probable internal representation. The units in the second layer can only capture second
order statistics and cannot determine the direction of the shift. The only way these
units can be disambiguated is to use the feedback from the units in the third layer
which detect the direction of the shift by integrating the output of the units in
the second layer. This allows the representation in the second layer to be "cleaned
up" and makes it easier to discover the higher order structure of the global shift.
The speed and reliability of the learning was tested by learning from random initial
conditions. The results are shown in figure 4. Note that the best solutions have a
cost of about one bit higher than the optimal cost of less than 9 bits, because top
units cannot capture the fact that they are mutually exclusive.
6
Discussion
The methods we have described work well on these simple benchmark problems
and scale well to larger problems such as the handwritten digits example used in
(Hinton et al., 1995). We believe there are two main reasons why the algorithm
described here runs considerably faster than other Gibbs sampling based methods.
The first is that there is no need to collect state statistics for each pattern. The
weight values are reestimated using just one sampled internal state per pattern. The
second is that weights that are not connected to informative units are not updated.
This prevents the model from learning what are effectively overly strong priors and
allows the weights in upper layers to adapt to structure actually in the data.
Gibbs sampling allows internal representations to be selected according to their true
posterior probability. This was shown to be effective in cases where the resulting
Figure 3: The shifter problem. (A) Input patterns are generated by generating a random
binary vector in the bottom row. This pattern is shifted either left or right (with wrap
around) with equal probability and copied to the top row. The input rows are duplicated
to add redundancy as in Dayan et al. (1995). (B) The weights of a 32-20-2 after learning.
The second layer of units learn to detect either local left shifts or right shifts in the data.
These units cannot determine the shift direction alone, however, and require feedback from
third layer units which integrate the outputs of all the units that represent a common shift
(note that there is no overlap in the weights for the two third-layer units) . This feedback
turns off units that are inconsistent with the direction of shift. The weights that are close
to zero for both third layer units effectively remove redundant second layer units that are
not required to represent the input patterns.
[Figure 4 plot: average bits per pattern versus number of epochs.]
Figure 4: The graph shows 10 runs on the shifter problem from random initial conditions.
The average bits per pattern is computed by -log(C)/(Nlog2). Each epoch used 200
input randomly generated input patterns. Two additional epochs were performed with
1000 random patterns to obtain accurate estimates of the average bits per pattern. The
network converges rapidly and reliably. The best solutions, like the one shown in figure 3b,
were found in 4/10 runs and had costs of approximately 10 bits at epoch 30. In this
example, the network can get caught in local minima if too many units learn to represent
the same local shifts.
representation has little ambiguity, i.e. each pattern has only a small number of
probable explanations. If the causal structure to be learned is inherently ambiguous,
e.g. in modeling the causal structure of medical symptoms, Gibbs sampling will be
slow and better performance can be obtained with wake-sleep learning (Hinton
et al., 1995; Frey et al., 1995) or mean field approximations (Saul et al., 1996).
There are many natural situations when there is ambiguity in low level features.
This ambiguity can only be resolved by integrating the contextual information which
itself is derived from the ambiguous simple features. This problem is common in
the case of noisy input patterns and in feature grouping problems such as figure-ground separation. Feedback is crucial for ensuring that low-level representations
are consistent within the larger context.
Some systems, such as the Helmholtz machine (Dayan et al., 1995; Hinton et al.,
1995) , arrive at the internal state through a feedforward process. It possible that this
ambiguity in lower-level representations could be resolved by circuitry in the higherlevel representations, but if multiple higher-level modules make use of the same lowlevel representations, the additional circuitry would have to be duplicated in each
module. It seems more parsimonious to use feedback to influence the formation of
the lower-level representations.
References
Dayan, P. , Hinton, G. E., Neal, R. M., and Zemel, R. S. (1995). The Helmholtz
machine. Neural Computation, 7:889-904.
Földiák, P. (1989). Adaptive network for optimal linear feature extraction. In
Proceedings of the International Joint Conference on Neural Networks, volume I,
pages 401-405, Washington, D. C.
Frey, B. J., Hinton, G. E., and Dayan, P. (1995). Does the wake-sleep algorithm
produce good density estimators? In Touretzky, D. S., Mozer, M., and Hasselmo,
M., editors, Advances in Neural Information Processing Systems, volume 8, pages
661-667, San Mateo. Morgan Kaufmann.
Hinton, G. E., Dayan, P., Frey, B. J ., and Neal, R. M. (1995). The wake-sleep
algorithm for unsupervised neural networks. Science, 268(5214):1158-116l.
Hinton, G. E. and Sejnowski, T. J . (1986). Learning and relearning in Boltzmann
machines. In Rumelhart, D. E. and McClelland, J. L., editors, Parallel Distributed
Processing, volume 1, chapter 7, pages 282-317. MIT Press, Cambridge.
Lauritzen, S. L. and Spiegelhalter, D. J. (1988). Local computations with probabilities on graphical structures and their application to expert systems. J. Royal
Statistical Soc. Series B Methodological, 50(2):157- 224.
Neal, R. M. (1992) . Connectionist learning of belief networks. Artificial Intelligence,
56(1):71-113.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann,
San Mateo.
Saul, L. K., Jaakkola, T ., and Jordan, M. I. (1996). Mean field theory for sigmoid
belief networks. J. Artificial Intelligence Research, 4:61-76.
Dynamic features for visual speechreading: A systematic comparison
Michael S. Gray^{1,3}, Javier R. Movellan^1, Terrence J. Sejnowski^{2,3}
Departments of Cognitive Science^1 and Biology^2
University of California, San Diego
La Jolla, CA 92093
and
Howard Hughes Medical Institute3
Computational Neurobiology Lab
The Salk Institute, P. O. Box 85800
San Diego, CA 92186-5800
Email: mgray, jmovellan, tsejnowski@ucsd.edu
Abstract
Humans use visual as well as auditory speech signals to recognize
spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual
features on a speechreading task. We have found that normalization of images to eliminate variation due to translation, scale,
and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In
addition, the dynamic information in the difference between successive frames yielded better performance than optical-flow based
approaches, and compression by local low-pass filtering worked surprisingly better than global principal components analysis (PCA).
These results are examined and possible explanations are explored.
1
INTRODUCTION
Visual speech recognition is a challenging task in sensory integration. Psychophysical work by McGurk and MacDonald [5] first showed the powerful influence of
visual information on speech perception that has led to increased interest in this
area. A wide variety of techniques have been used to model speech-reading. Yuhas,
Goldstein, Sejnowski, and Jenkins [8] used feedforward networks to combine gray
scale images with acoustic representations of vowels. Wolff, Prasad, Stork, and Hennecke [7] explicitly computed information about the position of the lips, the shape of
the mouth, and motion. This approach has the advantage of dramatically reducing
the dimensionality of the input, but critical information may be lost. The visual
information (mouth shape, position, and motion) was the input to a time-delay neural network (TDNN) that was trained to distinguish among consonant-vowel pairs.
A separate TDNN was trained on the acoustic signal. The output probabilities for
the visual and acoustic signals were then combined multiplicatively. Bregler and
Konig [1] also utilized a TDNN architecture. In this work, the visual information
was captured by the first 10 principal components of a contour model fit to the lips.
This was enough to specify the full range of lip shapes ("eigenlips"). Bregler and
Konig [1] combined the acoustic and visual information in the input representation,
which gave improved performance in noisy environments, compared with acoustic
information alone.
Surprisingly, the visual signal alone carries a substantial amount of information
about spoken words. Garcia, Goldschen, and Petajan [2] used a variety of visual
features from the mouth region of a speaker's face to recognize test sentences using
hidden Markov models (HMMs). Those features that were found to give the best
discrimination tended to be dynamic in nature, rather than static. Mase and Pentland [4] also explored the dynamic information present in lip images through the
use of optical flow. They found that a template matching approach on the optical
flow of 4 windows around the edges of the mouth yielded results similar to humans
on a digit recognition task. Movellan [6] investigated the recognition of spoken digits using only visual information. The input representation for the hidden Markov
model consisted of low-pass filtered pixel intensity information at each time step, as
well as a delta image that showed the pixel by pixel difference between subsequent
time steps.
The motivation for the current work was succinctly stated by Bregler and Konig [1]:
"The real information in lipreading lies in the temporal change of lip positions,
rather than the absolute lip shape." Although different kinds of dynamic visual
information have been explored, there has been no careful comparison of different
methods. Here we present results for four different dynamic techniques that are
based on general purpose processing at the pixel level. The first approach was to
combine low-pass filtered gray scale pixel values with a delta image, defined as the
difference between two successive gray level images. A PCA reduction of this gray-scale and delta information was investigated next. The final two approaches were
motivated by the kinds of visual processing that are believed to occur in higher
levels of the visual cortex. We first explored optical flow, which provides us with
a representation analogous to that in primate visual area MT. Optical flow output
was then combined with low-pass filtered gray-scale pixel values. Each of these four
representations was tested on two different datasets: (1) the raw video images, and
(2) the normalized video images.
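A rough Python sketch of the first representation (low-pass gray scale plus delta image) is given below; it is not the authors' code. Block averaging stands in for the unspecified low-pass filter, and the 5 x 5 blocks are chosen only because they map a 75 x 100 frame onto the 15 x 20 grid mentioned in the figure caption below.

```python
import numpy as np

def lowpass_downsample(frame, block=5):
    """Crude low-pass filtering plus down-sampling by averaging non-overlapping
    blocks (5 x 5 blocks turn a 75 x 100 frame into a 15 x 20 map)."""
    h, w = frame.shape
    f = frame[: h - h % block, : w - w % block].astype(float)
    return f.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def gray_and_delta(prev_frame, frame):
    """Return the two feature maps used per time step: the low-pass gray image of
    the current frame and the low-pass frame difference (delta image)."""
    return (lowpass_downsample(frame),
            lowpass_downsample(frame.astype(float) - prev_frame.astype(float)))

# Example on two random 75 x 100 "frames" (rows x columns).
rng = np.random.default_rng(0)
f1, f2 = rng.random((75, 100)), rng.random((75, 100))
gray, delta = gray_and_delta(f1, f2)
print(gray.shape, delta.shape)   # (15, 20) each
```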
Figure 1: Image processing techniques. Left column: Two successive video frames
(frames 1 and 2) from a subject saying the digit "one". These images have been
made symmetric by averaging left and right pixels relative to the vertical midline.
Middle column: The top panel shows gray scale pixel intensity information of frame
2 after low-pass filtering and down-sampling to a resolution of 15 x 20 pixels. The
bottom panel shows the delta image (pixel-wise subtraction of frame 1 from frame
2), after low-pass filtering and downsampling. Right column: The top panel shows
the optical flow for the 2 video frames in the left column. The bottom panel shows
the reconstructed optical flow representation learned by a 1-state HMM. This can
be considered the canonical or prototypical representation for the digit "one" across
our database of 12 individuals.
2 METHODS AND MODELS
2.1 TRAINING SAMPLE
The training sample was the Tulips1 database (Movellan [6]): 96 digitized movies
of 12 undergraduate students (9 males, 3 females) from the Cognitive Science Department at UC-San Diego. Video capturing was performed in a windowless room
at the Center for Research in Language at UC-San Diego. Subjects were asked to
talk into a video camera and to say the first four digits in English twice. Subjects
could monitor the digitized images in a small display conveniently located in front
of them. They were asked to position themselves so that their lips were roughly
centered in the feed-back display. Gray scale video images were digitized at 30
frames per second, 100 x 75 pixels, 8 bits per pixel. The video tracks were hand
segmented by selecting a few relevant frames before the beginning and after the
end of activity in the acoustic track. There were an average of 9.7 frames for each
movie. Two sample frames are shown in the left column of Figure 1.
2.2
IMAGE PROCESSING
We compared the performance of four different visual representations for the digit
recognition task: low-pass + delta, PCA of (gray-scale + delta), flow, and low-pass
Limitations of self-organizing maps for
vector quantization and multidimensional
scaling
Arthur Flexer
The Austrian Research Institute for Artificial Intelligence
Schottengasse 3, A-lOlO Vienna, Austria
and
Department of Psychology, University of Vienna
Liebiggasse 5, A-lOlO Vienna, Austria
arthur@ai.univie.ac.at
Abstract
The limitations of using self-organizing maps (SOM) for either
clustering/vector quantization (VQ) or multidimensional scaling
(MDS) are being discussed by reviewing recent empirical findings
and the relevant theory. SOM's remaining ability of doing both VQ
and MDS at the same time is challenged by a new combined technique of online K-means clustering plus Sammon mapping of the
cluster centroids. SOM is shown to perform significantly worse in terms of quantization error, in recovering the structure of the clusters and in preserving the topology in a comprehensive empirical
study using a series of multivariate normal clustering problems.
1
Introduction
Self-organizing maps (SOM), introduced by [Kohonen 84], are a very popular tool
used for visualization of high dimensional data spaces. SOM can be said to do
clustering/vector quantization (VQ) and at the same time to preserve the spatial
ordering of the input data reflected by an ordering of the code book vectors (cluster
centroids) in a one or two dimensional output space, where the latter property is
closely related to multidimensional scaling (MDS) in statistics. Although the level
of activity and research around the SaM algorithm is quite large (a recent overview
by [Kohonen 95] contains more than 1000 citations) , only little comparison among
the numerous existing variants of the basic approach and also to more traditional
statistical techniques of the larger frameworks of VQ and MDS is available. Additionally, there is only little advice in the literature about how to properly use
SOM in order to get optimal results in terms of either vector quantization (VQ) or
multidimensional scaling or maybe even both of them. To make the notion of SOM
being a tool for "data visualization" more precise, the following question has to be
answered: Should SOM be used for doing VQ, MDS, both at the same time or none
of them?
Two recent comprehensive studies comparing SOM either to traditional VQ or MDS
techniques separately seem to indicate that SOM is not competitive when used for
either VQ or MDS: [Balakrishnan et al. 94] compare SOM to K-means clustering
on 108 multivariate normal clustering problems with known clustering solutions and
show that SOM performs significantly worse in terms of data points misclassified¹,
especially with higher numbers of clusters in the data sets. [Bezdek & Nikhil 95]
compare SOM to principal component analysis and the MDS-technique Sammon
mapping on seven artificial data sets with different numbers of points and dimensionality and different shapes of input distributions. The degree of preservation of
the spatial ordering of the input data is measured via a Spearman rank correlation between the distances of points in the input space and the distances of their
projections in the two dimensional output space. The traditional MDS-techniques
preserve the distances much more effectively than SOM, the performance of which
decreases rapidly with increasing dimensionality of the input data.
Despite these strong empirical findings that speak against the use of SOM for either
VQ or MDS, there remains the appealing ability of SOM to do both VQ and MDS at
the same time . It is the aim of this work to find out, whether a combined technique
of traditional vector quantization (clustering) plus MDS on the code book vectors
(cluster centroids) can perform better than Kohonen's SOM on a series of multivariate normal clustering problems in terms of quantization error (mean squared
error) , recovering the cluster structure (Rand index) and preserving the topology
(Pearson correlation). All the experiments were done in a rigoruos statistical design
using multiple analysis of variance for evaluation of the results.
2
SOM and vector quantization/clustering
A vector quantizer (VQ) is a mapping, q, that assigns to each input vector x a
reproduction (code book) vector $\hat{x} = q(x)$ drawn from a finite reproduction alphabet
$A = \{\hat{x}_i, i = 1, \ldots, N\}$. The quantizer q is completely described by the reproduction
alphabet (or codebook) A together with the partition $S = \{S_i, i = 1, \ldots, N\}$ of the
input vector space into the sets $S_i = \{x : q(x) = \hat{x}_i\}$ of input vectors mapping into
the ith reproduction vector (or code word) [Linde et al. 80]. To be comparable to
SOM, our VQ assigns to each of the input vectors $x = (x^0, x^1, \ldots, x^{k-1})$ a so-called
code book vector $\hat{x} = (\hat{x}^0, \hat{x}^1, \ldots, \hat{x}^{k-1})$ of the same dimensionality k. For reasons of
data compression, the number of code book vectors $N \ll n$, where n is the number
of input vectors.
Demanded is a VQ that produces a mapping q for which the expected distortion
caused by reproducing the input vectors x by code book vectors q(x) is at least
locally minimal. The expected distortion is usually estimated by using the average distortion
$$D = \frac{1}{n} \sum_{j=0}^{n-1} d(x_j, q(x_j)) \qquad (1)$$
where the most common distortion measure is the squared-error distortion d:
$$d(x, \hat{x}) = \sum_{i=0}^{k-1} |x_i - \hat{x}_i|^2 \qquad (2)$$
¹ Although SOM is an unsupervised technique not built for classification, the number
of points misclassified to a wrong cluster center is an appropriate and commonly used
performance measure for cluster procedures if the true cluster structure is known.
The classical vector quantization technique to achieve such a mapping is the LBG-algorithm [Linde et al. 80], where a given quantizer is iteratively improved. Already [Linde et al. 80] noted that their proposed algorithm is almost similar to
the k-means approach developed in the cluster analysis literature starting from
[MacQueen 67]. Closely related to SOM is online K-means clustering (oKMC), consisting of the following steps:
1. Initialization: Given N = number of code book vectors, k = dimensionality
of the vectors, n = number of input vectors, a training sequence $\{x_j; j = 0, \ldots, n-1\}$, an initial set $A_0$ of N code book vectors $\hat{x}$ and a discrete-time
coordinate $t = 0, \ldots, n-1$.
2. Given $A_t = \{\hat{x}_i; i = 1, \ldots, N\}$, find the minimum distortion partition
$P(A_t) = \{S_i; i = 1, \ldots, N\}$. Compute $d(x_t, \hat{x}_i)$ for $i = 1, \ldots, N$. If
$d(x_t, \hat{x}_i) \le d(x_t, \hat{x}_l)$ for all l, then $x_t \in S_i$.
3. Update the code book vector with the minimum distortion
$$\hat{x}^{(t)}(S_i) = \hat{x}^{(t-1)}(S_i) + \alpha\,[x(t) - \hat{x}^{(t-1)}(S_i)] \qquad (3)$$
where $\alpha$ is a learning parameter to be defined by the user. Define
$A_{t+1} = \hat{x}(P(A_t))$, replace t by t + 1; if t = n - 1, halt. Else go to step 2.
The main difference between the SOM-algorithm and oKMC is the fact that the
code book vectors are ordered either on a line or on a planar grid (i.e. in a one or
two dimensional output space). The iterative procedure is the same as with oKMC,
where formula (3) is replaced by
$$\hat{x}^{(t)}(S_i) = \hat{x}^{(t-1)}(S_i) + h\,[x(t) - \hat{x}^{(t-1)}(S_i)] \qquad (4)$$
and this update is not only computed for the $\hat{x}_i$ that gives minimum distortion, but
also for all the code book vectors which are in the neighbourhood of this $\hat{x}_i$ on the
line or planar grid. The degree of neighbourhood and the amount of code book vectors
which are updated together with the $\hat{x}_i$ that gives minimum distortion is expressed
by h, a function that decreases both with distance on the line or planar grid and
with time and that also includes an additional learning parameter $\alpha$. If the degree
of neighbourhood is decreased to zero, the SOM-algorithm becomes equal to the
oKMC-algorithm.
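To make the difference concrete, the following sketch (a minimal NumPy illustration, not the implementation used in the experiments; the Gaussian form of the neighbourhood function h and all parameter names are assumptions) contrasts the oKMC update of formula (3) with the SOM update of formula (4):

```python
import numpy as np

def okmc_step(codebook, x, alpha):
    """One online K-means step: update only the nearest code book vector (formula (3))."""
    dists = np.sum((codebook - x) ** 2, axis=1)   # squared-error distortion to each centroid
    i = np.argmin(dists)                          # minimum distortion partition
    codebook[i] += alpha * (x - codebook[i])
    return i

def som_step(codebook, grid, x, alpha, radius):
    """One SOM step: update the winner and its grid neighbours (formula (4)).

    grid holds the fixed 1-D or 2-D output coordinates of each code book vector;
    h is taken here as a Gaussian neighbourhood that shrinks with `radius`.
    """
    dists = np.sum((codebook - x) ** 2, axis=1)
    winner = np.argmin(dists)
    grid_dist = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = alpha * np.exp(-grid_dist / (2.0 * max(radius, 1e-12) ** 2))
    codebook += h[:, None] * (x - codebook)       # with radius -> 0 this reduces to okmc_step
    return winner
```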
Whereas local convergence is guaranteed for oKMC (at least for decreasing $\alpha$,
[Bottou & Bengio 95]), no general proof for the convergence of SOM with nonzero
neighbourhood is known. [Kohonen 95, p.128] notes that the last steps of the SOM
algorithm should be computed with zero neighbourhood in order to guarantee "the
most accurate density approximation of the input samples".
3
SOM and multidimensional scaling
Formally, a topology preserving algorithm is a transformation $\Phi : R^k \to R^p$ that
either preserves similarities or just similarity orderings of the points in the input
space $R^k$ when they are mapped into the output space $R^p$. For most algorithms it is
the case that both the number of input vectors $x \in R^k$ and the number of output
vectors $\hat{x} \in R^p$ are equal to n. A transformation $\Phi : \hat{x} = \Phi(x)$ that preserves
similarities poses the strongest possible constraint, since $d(x_i, x_j) = \hat{d}(\hat{x}_i, \hat{x}_j)$ for all
$x_i, x_j \in R^k$, all $\hat{x}_i, \hat{x}_j \in R^p$, $i, j = 1, \ldots, n-1$, with $d$ ($\hat{d}$) being a measure of distance
in $R^k$ ($R^p$). Such a transformation is said to produce an isometric image.
Techniques for finding such transformations $\Phi$ are, among others, various forms of
multidimensional scaling² (MDS) like metric MDS [Torgerson 52], nonmetric MDS
[Shepard 62] or Sammon mapping [Sammon 69], but also principal component analysis (PCA) (see e.g. [Jolliffe 86]) or SOM. Sammon mapping is doing MDS by
minimizing the following via steepest descent:
$$E = \frac{1}{\sum_{i<j} d(x_i, x_j)} \sum_{i<j} \frac{\big[d(x_i, x_j) - \hat{d}(\hat{x}_i, \hat{x}_j)\big]^2}{d(x_i, x_j)} \qquad (5)$$
Since the SOM has been designed heuristically and not to find an extremum of a
certain cost or energy function³, the theoretical connection to the other MDS
algorithms remains unclear. It should be noted that for SOM the number of output
vectors $\hat{x} \in R^p$ is limited to N, the number of cluster centroids $\hat{x}$, and that the $\hat{x}$
are further restricted to lie on a planar grid. This restriction entails a discretization
of the output space $R^p$.
4
Online K-means clustering plus Sammon mapping of the cluster centroids
Our new combined approach consists of simply finding the set of $A = \{\hat{x}_i, i = 1, \ldots, N\}$ code book vectors that give the minimum distortion partition $P(A) = \{S_i; i = 1, \ldots, N\}$ via the oKMC algorithm and then using the $\hat{x}_i$ as input vectors
to Sammon mapping, thereby obtaining a two dimensional representation of the
$\hat{x}_i$ via minimizing formula (5). Contrary to SOM, this two dimensional representation is not restricted to any fixed form and the distances between the N mapped
$\hat{x}_i$ directly correspond to those in the original higher dimension. This combined
algorithm is abbreviated oKMC+.
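A rough sketch of the Sammon-mapping half of oKMC+ is given below; it gradient-descends the stress of formula (5) on the N cluster centroids. Step size, iteration count and initialization are illustrative assumptions, not the settings used in the experiments reported here.

```python
import numpy as np

def sammon(points, n_iter=100, lr=0.3, eps=1e-12):
    """Map `points` (N x k) to 2-D by gradient descent on Sammon's stress (formula (5))."""
    n = len(points)
    d_in = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    scale = d_in[np.triu_indices(n, 1)].sum()          # normalizing constant of the stress
    y = np.random.RandomState(0).randn(n, 2) * 1e-2    # random 2-D initialization
    for _ in range(n_iter):
        d_out = np.sqrt(((y[:, None, :] - y[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d_out, 1.0)
        np.fill_diagonal(d_in, 1.0)
        coeff = (d_out - d_in) / (d_in * d_out + eps)  # gradient weights per pair
        np.fill_diagonal(coeff, 0.0)
        grad = 2.0 / scale * (coeff[:, :, None] * (y[:, None, :] - y[None, :, :])).sum(1)
        y -= lr * grad
    return y

# oKMC+ in outline: run okmc_step (sketched above) over the training sequence,
# then embed only the N centroids:  embedding = sammon(codebook)
```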
5
Empirical comparison
The empirical comparison was done using a 3 factorial experimental design with
3 dependent variables. The multivariate normal distributions were generated using the procedure by [Milligan & Cooper 85], which has since been used for several
comparisons of cluster algorithms (see e.g. [Balakrishnan et al. 94]). The marginal
normal distributions gave internal cohesion of the clusters by warranting that more
than 99% of the data lie within 3 standard deviations (σ). External isolation was
defined as having the first dimension nonoverlapping, by truncating the normal distributions in the first dimension to ±2σ and defining the cluster centroids to be
4.5σ apart. In all other dimensions the clusters were allowed to overlap by setting
the distance per dimension between two centroids randomly to lie between ±6σ.
The data was normalized to zero mean and unit variance in all dimensions.
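As an illustration of this construction, the following sketch generates one such clustering problem; σ = 1 is assumed, and simple clipping stands in for the truncation described above, so it is a rough rendering rather than the generator of [Milligan & Cooper 85].

```python
import numpy as np

def make_cluster_problem(n_clusters, n_dims, points_per_cluster=25, seed=0):
    """Clusters separated on the first dimension, overlapping on the others."""
    rng = np.random.RandomState(seed)
    data, labels = [], []
    for c in range(n_clusters):
        center = np.zeros(n_dims)
        center[0] = c * 4.5                              # first-dimension centroids 4.5 sigma apart
        center[1:] = rng.uniform(-6.0, 6.0, n_dims - 1)  # other dimensions overlap within +/- 6 sigma
        x = rng.randn(points_per_cluster, n_dims) + center
        # external isolation: keep the first dimension within +/- 2 sigma of its centroid
        x[:, 0] = np.clip(x[:, 0], center[0] - 2.0, center[0] + 2.0)
        data.append(x)
        labels.append(np.full(points_per_cluster, c))
    X = np.vstack(data)
    X = (X - X.mean(0)) / X.std(0)                       # zero mean, unit variance per dimension
    return X, np.concatenate(labels)
```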
² Note that for MDS not the actual coordinates of the points in the input space but
only their distances or the ordering of the latter are needed.
³ [Erwin et al. 92] even showed that such an objective function cannot exist for SOM.
algorithm     no. clusters   dimension   msqe   Rand   corr.
SOM                4             4       0.53   1.00   0.64
SOM                4             6       1.53   0.91   0.72
SOM                4             8       1.15   0.99   0.74
SOM                9             4       0.33   0.97   0.48
SOM                9             6       0.54   0.97   0.66
SOM                9             8       0.81   0.96   0.74
mean SOM                                 0.81   0.97   0.67
oKMC+              4             4       0.53   0.99   0.87
oKMC+              4             6       1.06   0.99   0.87
oKMC+              4             8       1.17   1.00   0.91
oKMC+              9             4       0.29   0.98   0.89
oKMC+              9             6       0.47   0.99   0.87
oKMC+              9             8       0.56   0.98   0.86
mean oKMC+                               0.68   0.99   0.88
Factor 1, Type of algorithm: The number of code book vectors of both the SOM
and the oKMC+ were set equal to the number of clusters known to be in the data.
The SOMs were planar grids consisting of 2 x 2 (3 x 3) code book vectors. During
the first phase (1000 code book updates) α was set to 0.05 and the radius of the
neighbourhood to 2 (5). During the second phase (10000 code book updates) α was
set to 0.02 and the radius of the neighbourhood to 0 to guarantee the most accurate
vector quantization [Kohonen 95, p.128]. The oKMC+ algorithm had the parameter
α fixed to 0.02 and was trained using each data set 20 times; the minimization of
formula (5) was stopped after 100 iterations. Both SOM and oKMC+ were run 10
times on each data set and only the best solutions, in terms of mean squared error,
were used for further analysis.
Factor 2, Number of clusters was set to 4 and 9.
Factor 3, Number of dimensions was set to 4, 6, or 8.
Dependent variable 1: mean squared error was computed using formula (1).
Dependent variable 2, Rand index (see [Hubert & Arabie 85]) is a measure of agreement between the true, known partition structure and the obtained clusters. Both
the numerator and the denominator of the index reflect frequency counts. The
numerator is the number of times a pair of data is either in the same or in different clusters in both known and obtained clusterings, for all possible comparisons
of data points. Since the denominator is the total number of all possible pairwise
comparisons, an index value of 1.0 indicates an exact match of the clusterings.
Dependent variable 3, correlation is a measure of the topology preserving abilities of
the algorithms. The Pearson correlation of the distances $d(x_i, x_j)$ in the input space
and the distances $\hat{d}(\hat{x}_i, \hat{x}_j)$ in the output space for all possible pairwise comparisons
of data points is computed. Note that for SOM the coordinates of the code book
vectors on the planar grid were used to compute the $\hat{d}$. An algorithm that preserves
all distances in every neighbourhood would produce an isometric image and yield
a value of 1.0 (see [Bezdek & Nikhil 95] for a discussion of measures of topology
preservation).
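Both quality measures follow directly from their definitions; the sketch below is illustrative only and is not the evaluation code used in the study.

```python
import numpy as np
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Fraction of point pairs on which two clusterings agree (same vs. different cluster)."""
    agree, pairs = 0, list(combinations(range(len(labels_true)), 2))
    for i, j in pairs:
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        agree += int(same_true == same_pred)
    return agree / len(pairs)

def distance_correlation(x, y):
    """Pearson correlation between pairwise distances in input space x and output space y."""
    idx = np.triu_indices(len(x), 1)
    d_in = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))[idx]
    d_out = np.sqrt(((y[:, None, :] - y[None, :, :]) ** 2).sum(-1))[idx]
    return np.corrcoef(d_in, d_out)[0, 1]
```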
For each cell in the full-factorial 2 x 2 x 3 design, 3 data sets with 25 points for each
cluster were generated resulting in a total of 36 data sets. A multiple analysis of
variance (MANOVA) yielded the following significant effects at the .05 error level:
The mean squared error is lower for oKMC+ than for SOM, it is lower for the 9-cluster problem than for the 4-cluster problem, and it is higher for higher dimensional
data. There is also a combined effect of the number of clusters and dimensions on
the mean squared error. The Rand index is higher for oKMC+ than for SOM, there
is also a combined effect of the number of clusters and dimensions. The correlation
index is higher for oKMC+ than for SOM. Since the main interest of this study is the
effect of the type of algorithm on the dependent variables, the mean performances
for SOM and oKMC+ are printed in bold letters in the table. Note that the overall
differences in the performances of the two algorithms are blurred by the significant
effects of the other factors and that therefore the differences of the grand means
across the type of algorithms appear rather small. Only by applying a MANOVA,
effects of the factor 'type of algorithms' that are masked by additional effects of
the other two factors 'number of clusters' and 'number of dimensions' could still be
detected.
6
Discussion and Conclusion
From the theoretical comparison of SOM to oKMC it should be clear that, in terms
of quantization error, SOM can only perform as well as oKMC
if SOM's neighbourhood is set to zero. Additional experiments, not reported here
in detail for brevity, with nonzero neighbourhood till the end of SOM training
gave even worse results, since the neighbourhood tends to pull the obtained cluster centroids away from the true ones. The Rand index is only slightly better for
oKMC+. The high values indicate that both algorithms were able to recover the
known cluster structure. Topology preserving is where SOM performs worst compared to oKMC+. This is a direct implication of the restriction to planar grids,
which allows only a restricted number of different distances in an s x s planar grid instead
of the N(N-1)/2 different distances for N = s x s cluster centroids mapped via Sammon
mapping in the case of oKMC+. Using a nonzero neighbourhood at the end of SOM
training did not warrant any significant improvements.
An argument that could be brought forward against our approach towards comparing SOM and oKMC+ is that it would be unfair or not correct to set the number
of SOM's code book vectors equal to the number of clusters known to be in the
data. In fact it seems to be common practice to apply SOM with numbers of code
book vectors that are a multiple of the input vectors available for training (see e.g.
[Kohonen 95, pp.113]). Two things have to be said against such an argumentation:
First, if one uses more or even only the same amount of code book vectors as input
vectors during vector quantization, each code book vector will become identical to
one of the input vectors in the limit of learning. So every $x_i$ is replaced with an
identical $\hat{x}_i$, which does not make any sense and runs counter to every notion of vector quantization. This means that SOMs employing numbers of code book vectors
that are a multiple of the input vectors available can be used for MDS only. But
even such big SOMs do MDS in a very crude way: We computed SOMs consisting
of either 20 x 20 (for data sets consisting of 4 clusters and 100 points) or 30 x 30
(for data sets consisting of 9 clusters and 225 points) code book vectors for all 36
data sets, which gave an average correlation of 0.77 between the distances $d$ and $\hat{d}$.
This is significantly worse at the .05 error level compared to the average correlation
of 0.95 achieved by Sammon mapping applied to the input data directly.
Our data sets consisted of iid multivariate normal distributions which therefore have
spherical shape. All VQ algorithms using squared distances as a distortion measure,
including our versions of oKMC as well as SOM, are inherently designed for such
distributions. Therefore, the clustering problems in this study, being also perfectly
separable in one dimension, were very simple and should be solvable with little or
no error by any clustering or MDS algorithm.
In this work we examined the vague concept of using SOM as a "data visualization
tool" both from a theoretical and empirical point of view. SOM cannot outperform
traditional VQ techniques in terms of quantization error and should therefore not
be used for doing VQ. From [Bezdek & Nikhil 95] as well as from our discussion of
SOM's restriction to planar grids in the output space which allows only a restricted
number of different distances to be represented, it should be evident that SOM
is also a rather crude way of doing MDS. Our own empirical results show that if
one wants to have an algorithm that does both VQ and MDS at the same time,
there exists a very simple combination of traditional techniques (our oKMC+) with
well-known and established properties that clearly outperforms SOM.
Whether it is a good idea to combine clustering or vector quantization and multidimensional scaling at all and whether more principled approaches (see e.g.
[Bishop et al. this volume], also for pointers to further related work) can yield even
better results than our oKMC+ and last but not least what self-organizing maps
should be used for under this new light remain questions to be answered by future
investigations.
Acknowledgements: Thanks are due to James Pardey, University of Oxford, for the
Sammon code. The SOM_PAK, Helsinki University of Technology, was used for all computations of self-organizing maps. This work has been started within the framework of the
BIOMED-1 concerted action ANNDEE, sponsored by the European Commission, DG XII,
and the Austrian Federal Ministry of Science, Transport, and the Arts, which is also supporting the Austrian Research Institute for Artificial Intelligence. The author is supported
by a doctoral grant of the Austrian Academy of Sciences.
References
[Balakrishnan et al. 94] Balakrishnan P.V., Cooper M.C., Jacob V.S., Lewis P.A. : A study
of the classification capabilities of neural networks using unsupervised learning: a
comparison with k-means clustering, Psychometrika, Vol. 59, No.4, 509-525, 1994.
[Bezdek & Nikhil 95] Bezdek J.C. , Nikhil R.P.: An index of topological preservation for
feature extraction, Pattern Recognition, Vol. 28, No.3, pp.381-391, 1995.
[Bishop et al. this volume] Bishop C.M., Svensen M., Williams C.K.I.: GTM: A Principled Alternative to the Self-Organizing Map, this volume.
[Bottou & Bengio 95] Bottou L., Bengio Y.: Convergence Properties of the K-Means Algorithms, in Tesauro G., et al. (eds.), Advances in Neural Information Processing
System 7, MIT Press, Cambridge, MA, pp.585-592, 1995.
[Erwin et al. 92] Erwin E., Obermayer K., Schulten K.: Self-organizing maps: ordering,
convergence properties and energy functions, Biological Cybernetics, 67,47- 55, 1992.
[Hubert & Arabie 85] Hubert L.J., Arabie P.: Comparing partitions, J . of Classification,
2, 63-76, 1985.
[Jolliffe 86] Jolliffe I.T.: Principal Component Analysis, Springer, 1986.
[Kohonen 84] Kohonen T.: Self-Organization and Associative Memory, Springer, 1984.
[Kohonen 95] Kohonen T.: Self-organizing maps, Springer, Berlin, 1995.
[Linde et al. 80] Linde Y. , Buzo A., Gray R.M.: An Algorithm for Vector Quantizer Design, IEEE Transactions on Communications, Vol. COM-28, No.1, January, 1980.
[MacQueen 67] MacQueen J.: Some Methods for Classification and Analysis of Multivariate Observations, Proc. of the Fifth Berkeley Symposium on Math., Stat. and Prob.,
Vol. 1, pp. 281-296, 1967.
[Milligan & Cooper 85] Milligan G.W., Cooper M.C.: An examination of procedures for
determining the number of clusters in a data set, Psychometrika 50(2), 159-179, 1985 .
[Sammon 69] Sammon J .W.: A Nonlinear Mapping for Data Structure Analysis, IEEE
Transactions on Comp., Vol. C-18, No.5, p.401-409, 1969.
[Shepard 62] Shepard R.N.: The analysis of proximities: multidimensional scaling with
an unknown distance function . I., Psychometrika, Vol. 27, No. 2, p.125-140, 1962.
[Torgerson 52] Torgerson W .S.: Multidimensional Scaling, I: theory and method, Psychometrika, 17, 401-419, 1952.
324 | 1,296 | 3D Object Recognition:
A Model of View-Tuned Neurons
Emanuela Bricolo
Tomaso Poggio
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
{emanuela,tp}@ai.mit.edu
Nikos Logothetis
Baylor College of Medicine
Houston, TX 77030
nikos@bcmvision.bcm.tmc.edu
Abstract
In 1990 Poggio and Edelman proposed a view-based model of object recognition that accounts for several psychophysical properties
of certain recognition tasks. The model predicted the existence of
view-tuned and view-invariant units, that were later found by Logothetis et al. (Logothetis et al., 1995) in IT cortex of monkeys
trained with views of specific paperclip objects. The model, however, does not specify the inputs to the view-tuned units and their
internal organization. In this paper we propose a model of these
view-tuned units that is consistent with physiological data from
single cell responses.
1
INTRODUCTION
Recognition of specific objects, such as recognition of a particular face, can be
based on representations that are object centered, such as 3D structural models.
Alternatively, a 3D object may be represented for the purpose of recognition in
terms of a set of views. This latter class of models is biologically attractive because
model acquisition - the learning phase - is simpler and more natural.
A simple model for this strategy of object recognition was proposed by Poggio and
Edelman (Poggio and Edelman, 1990). They showed that, with few views of an object used as training examples, a classification network, such as a Gaussian radial
basis function network, can learn to recognize novel views of that object, in particular views obtained by in-depth rotation of the object (translation, rotation in the
image plane and scale are probably taken care of by object-independent mechanisms).

Figure 1: (a) Schematic representation of the architecture of the Poggio-Edelman
model. The shaded circles correspond to the view-tuned units, each tuned to a view
of the object, while the open circle corresponds to the view-invariant, object-specific
output unit. (b) Tuning curves of the view-tuned (gray) and view-invariant (black)
units.
The model, sketched in Figure 1, makes several predictions about limited generalization from a single training view and about characteristic generalization patterns
between two or more training views (Biilthoff and Edelman, 1992) . Psychophysical
and neurophysiological results support the main features and predictions of this
simple model. For instance, in the case of novel objects, it has been shown that
when subjects -both humans and monkeys- are asked to learn an object from a single unoccluded view, their performance decays as they are tested on views farther
away from the learned one (Biilthoff and Edelman, 1992; Tarr and Pinker, 1991;
Logothetis et al., 1994). Additional work has shown that even when 3D information
is provided during training and testing, subjects recognize in a view dependent way
and cannot generalize beyond 40 degrees from a single training view (Sinha and
Poggio, 1994).
Even more significantly, recent recordings in inferotemporal cortex (IT) of monkeys
performing a similar recognition task with paperclip and amoeba-like objects, revealed cells tuned to specific views of the learned object (Logothetis et al., 1995).
The tuning, an example of which is shown in Figure 3, was presumably acquired as
an effect of the training to views of the particular object. Thus an object can be
thought as represented by a set of cells tuned to several of its views, consistently
with finding of others (Wachsmuth et al., 1994). This simple model can be extended
to deal with symmetric objects (Poggio and Vetter, 1992) as well as objects which
are members of a nice class (Vetter et al., 1995): in both cases generalization from
a single view may be significantly greater than for objects such as the paperclips
used in the psychophysical and physiological experiments.
The original model of Poggio and Edelman has a major weakness: it does not
specify which features are inputs to the view-tuned units and what is stored as a
representation of a view in each unit. The simulation data they presented employed
features such as the x,y coordinates of the object vertices in the image plane or the
angles between successive segments. This representation, however, is biologically
implausible and specific for objects that have easily detectable vertices and measurable angles, like paperclips. In this paper, we suggest a view representation which
is more biologically plausible and applies to a wider variety of cases. We will also
show that this extension of the Poggio-Edelman model leads to properties that are
consistent with the cell response to the same objects.

Figure 2: Model overview: during the training phase the images are first filtered
through a bank of steerable filters. Then a number of image locations are chosen
by an attentional mechanism and the vector of filtered values at these locations is
stored in the feature units.
2
A MODEL OF VIEW-TUNED UNITS
Our approach consists in representing a view in terms of a few local features, which
can be regarded as local configurations of grey-levels. Suppose one point in the
image of the object is chosen. A feature vector is computed by filtering the image
with a set of filters with small support centered at the chosen location. The vector
of filter responses serves as a description of the local pattern in the image. Four
such points were chosen, for example, in the image of Figure 3a, where the white
squares indicate the support of the bank of filters that were used. Since the support
is local but finite, the value of each filter depends on the pattern contained in the
support and not only on the center pixel; since there are several filters one expects
that the vector of values may uniquely represent the local feature, for instance a
corner of the paperclip.
We used filters that are somewhat similar to oriented receptive fields in V1 (though
it is far from being clear whether some V1 cells behave as linear filters). The ten
filters we used are the same steerable filters (Freeman and Adelson, 1991) suggested
by Ballard and Rao (Rao and Ballard, 1995; Leung et al., 1995). The filters were
chosen to be a basis of steerable 2-dimensional filters up to the third order. If $G_n$
represents the nth derivative of a Gaussian in the x direction and we define the
rotation operator $(\cdot)^\theta$ as the operator that rotates a function through an angle $\theta$
about the origin, the ten basis filters are:
$$G_n^{\theta_k}, \quad n = 0, 1, 2, 3, \qquad \theta_k = k\pi/(n+1), \; k = 0, \ldots, n \qquad (1)$$
Therefore, for each one of the chosen locations m in the image we have a 10-valued
vector $T_m$ given by the output of the filter bank.
The representation of a given view of an object is then the following. First $m = 1, \ldots, M$ locations are chosen, then for each of these M locations the 10-valued
vectors $T_m$ are computed and stored. These M vectors, with M typically between
1 and 4, form the representation of the view which is learned and committed to
memory.
How are the locations chosen? Precise location is not critical. Feature locations
can be chosen almost randomly. Of course each specific choice will influence properties of the unit but precise location does not affect the qualitative properties of
the model, as verified in simulation experiments. Intuitively, features should be
centered at salient locations in the object where there are large changes in contrast
and curvature. We have implemented (Riesenhuber and Bricolo, in preparation)
a simple attentional mechanism that chooses locations close to edges with various
orientations¹. The locations shown in Figures 3 and 4 were obtained with this unsupervised technique. We emphasize however that all the results and conclusions
of this paper do not depend on the specific location of the feature or the precise
procedure used to choose them.
We have described so far the learning phase and how views are represented and
stored. When a new image V is presented, recognition takes place in the following
way. First the new image is filtered through the bank of filters. Thus at each pixel
location i we have the vector of values $f_i$ provided by the filters. Now consider the
first stored vector $T_1$. The closest $f_i$ is found searching over all i locations and
the distance $D_1 = \|T_1 - f_i\|$ is computed. This process is repeated for the other
feature vectors $T_m$ for $m = 2, \ldots, M$. Thus for the new image V, M distances $D_m$
are computed; the distance $D_m$ is therefore the distance to the stored feature $T_m$
of the closest image vector searched over the whole image.
The model uses these M distances as exponents in M Gaussian units. The output
of the system is a weighted average of the output of these units with an output nonlinearity of the sigmoidal type:
$$Y_V = h\left(\sum_{m=1}^{M} c_m \, e^{-D_m^2 / 2\sigma^2}\right) \qquad (3)$$
In the simulations presented in this paper we estimated $\sigma$ from the distribution
of distances over several images; the $c_m$ are $c_m = M^{-1}$, since we have only one
training view; h is $h(x) = 1/(1 + e^{-x})$.
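A compact sketch of this recognition step follows; array shapes and names are illustrative, not the authors' implementation.

```python
import numpy as np

def model_output(filter_responses, stored_features, sigma):
    """Recognition as in equation (3): Gaussian units on feature distances, then a sigmoid.

    filter_responses: (n_pixels, 10) array of filter-bank outputs f_i for a new image.
    stored_features:  (M, 10) array of stored vectors T_m from the training view.
    """
    # D_m: distance from T_m to the closest response vector anywhere in the image
    dists = np.array([np.min(np.linalg.norm(filter_responses - T, axis=1))
                      for T in stored_features])
    c = 1.0 / len(stored_features)                     # c_m = 1/M for a single training view
    activation = np.sum(c * np.exp(-dists ** 2 / (2.0 * sigma ** 2)))
    return 1.0 / (1.0 + np.exp(-activation))           # sigmoidal output nonlinearity h
```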
In Figure 3b we see the result obtained by simply combining linearly the output of
the four feature detectors followed by the sigmoidal non linearity (Figure 4a shows
another example). We have also experimented with a multiplicative combination
of the output of the feature units. In this case the system performs an AND of
the M features. If the response to the distractors is used to set a threshold for
¹ A saliency map is at first constructed as the average of the convolutions of the image
with four directional filters (first-order steerable filters with $\theta = k\pi/4$, $k = 1, \ldots, 4$).
The locations with higher saliency are extracted one at a time. After each selection, a
region around the selected position is inhibited to avoid selecting the same feature over
again.
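A rough rendering of this saliency-based selection is sketched below; the kernel set and the inhibition radius are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def select_feature_locations(image, oriented_kernels, n_locations=4, inhibit_radius=8):
    """Pick locations from a saliency map (average of oriented-filter responses),
    inhibiting a region around each selection so the same feature is not chosen twice."""
    saliency = np.mean([np.abs(convolve(image, k)) for k in oriented_kernels], axis=0)
    locations = []
    for _ in range(n_locations):
        r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
        locations.append((r, c))
        rr, cc = np.ogrid[:saliency.shape[0], :saliency.shape[1]]
        saliency[(rr - r) ** 2 + (cc - c) ** 2 <= inhibit_radius ** 2] = -np.inf
    return locations
```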
Figure 3: Comparison between a model view-tuned unit and a cortical neuron tuned
to a view of the same object. (a) Mean spike rate of an inferotemporal cortex
cell recorded in response to views of a specific paperclip [left] and to a set of 60
distractor paperclip objects [right] (Logothetis and Pauls, personal communication).
(b) Model response for the same set of objects. This is representative of other cells
we have simulated, though there is considerable variability in the cells' (and the
model's) tuning.
classification, then the two versions of the system behave in a similar way. Similar
results (not shown) were obtained using other kinds of objects.
3
RESULTS
3.1
COMPARISON BETWEEN VIEW-TUNED UNITS AND CORTICAL NEURONS
Electrophysiological investigations in alert monkeys, trained to recognize wire-like
objects presented from any view, show that the discharge rate of many IT neurons is
a bell-shaped function of orientation centered on a preferred view (Logothetis et al.,
1995). The properties of the units described here are comparable to those of the
cortical neurons (see Figure 3). The model was tested with exactly the same
objects used in the physiological experiments. As training view for the model we
used the view preferred by the cell (the cell became tuned presumably as an effect
of training during which the monkey was shown in this particular case several views
of this object).
3.2
OCCLUSION EXPERIMENTS
What physiological experiments could lend additional support to our model? A
natural question concerns the behavior of the cells when various parts of the object
are occluded. The predictions of our model are given in Figure 4 for a specific object
and a specific choice of feature units (m = 4) and locations.
Figure 4: (a) Model behavior in response to a learned object in full view (highlighted
on the learned image are the positions of the four features) at different rotations and
to 120 other paperclip objects (distractors), (b) Response dependence on occluder
characteristics: (i) object in full view at learned location, (ii) object occluded with
a small occluder, (iii) occluded region in (ii) presented in isolation, (iv-v) same as
(ii-iii) but with a larger occluder.
The simulations show that the behavior depends on the position of key features with
respect to the occluder itself. Occluding a part of the object can drastically reduce
the response to that specific view (Figure 4b(ii-iv)) because of interference with
more than one feature. But since the occluded region does not completely overlap
with the occluded features (considering the support of the filters), the presentation
of this region alone does not always evoke a significant response (Figure 4b(iii-v)).
4
Discussion
The Poggio and Edelman model was designed specifically for paperclip objects and did
not explicitly specify how to compute the response for any object and image. In
this paper we fill this gap and propose a model of these IT cells that become view
tuned as an effect of training. The key aspect of the model is that it relies on a
few local features (1-4) that are computed and stored during the training phase.
Each feature is represented as the set of responses of oriented filters at one location
in the image. During recognition the system computes a robust conjunction of the
best matches to the stored features.
Clearly, the version of the model described here does not exploit information about
the geometric configuration of the features. This information is available once the
features are detected and can be critical to perform more robust recognition. We
have devised a model of how to use the relative position of the features ft in the
image. The model can be made translation and scale invariant in a biologically
plausible way by using a network of cells with linear receptive fields, similar in spirit
to a model proposed for spatial representation in the parietal cortex (Pouget and
Sejnowski, 1996). Interestingly enough, this additional information is not needed
to account for the selectivity and the generalization properties of the IT cells we
have considered so far. The implication is that IT cells may not be sensitive to
the overall configuration of the stimulus but to the presence of moderately complex
local features (according to our simulations, the number of necessary local features
is greater than one for the most selective neurons, such as the one of Figure 3a).
Scrambling the image of the object should therefore preserve the selectivity of the
neurons, provided this can be done without affecting the filtering stage. In practice
this may be very difficult. Though our model is still far from being a reasonable
neuronal model, it can already be used to make useful predictions for physiological
experiments which are presently underway.
References
Biilthoff, H. and Edelman, S. (1992). Psychophysical support for a two-dimensional
view interpolation theory of object recognition. Proceedings of the National
Academy of Science. USA, 89:60-64.
.
Freeman, W. and Adelson, E. (1991). The design and use of steerable filters. IEEE
transactions on Pattern Analysis and Machine Intelligence, 13(9) :891-906 .
Leung, T., Burl, M., and Perona, P. (1995). Finding faces in cluttered scenes
using random labelled graph matching. In Proceedings of the 5th Internatinal
Conference on Computer Vision, Cambridge, Ma.
Logothetis, N., Pauls, J., Biilthoff, H., and Poggio, T. (1994). View dependent
object recognition by monkeys. Current Biology, 4(5):401-414.
Logothetis, N., Pauls, J., and Poggio, T . (1995) . Shape representation in the inferior
temporal cortex of monkeys. Current Biology, 5(5):552- 563.
Poggio, T. and Edelman, S. (1990). A network that learns to recognize three-dimensional objects. Nature, 343:263-266.
Poggio, T. and Vetter, T. (1992). Recognition and structure from one 2d model view:
observations on prototypes, object classes and symmetries. Technical Report
A.I. Memo No. 1347, Massachusetts Institute of Technology, Cambridge, Ma.
Pouget, A. and Sejnowski, T. (1996). Spatial representations in the parietal cortex
may use basis functions. In Tesauro, G., Touretzky, D., and Leen, T., editors,
Advances in Neural Information Processing Systems, volume 7, pages 157-164.
MIT Press.
Rao, R. and Ballard, D. (1995). An active vision architecture based on iconic
representations. Artificial Intelligence Journal, 78:461-505.
Sinha, P. and Poggio, T. (1994) . View-based strategies for 3d object recognition.
Technical Report A.I. Memo No. 1518, Massachusetts Institute of Technology,
Cambridge, Ma.
Tarr, M. and Pinker, S. (1991). Orientation-dependent mechanisms in shape recognition: further issues. Psychological Science, 2(3):207- 209 .
Vetter, T., Hurlbert, A., and Poggio, T. (1995). View-based models of 3d object
recognition: Invariance to imaging transformations. Cerebral Cortex, 3(261269).
Wachsmuth, E., Oram, M., and Perrett, D. (1994). Recognition of objects and
their component parts: Responses of single units in the temporal cortex of the
macaque. Cerebral Cortex, 4(5):509-522.
325 | 1,297 | Approximate Solutions to
Optimal Stopping Problems
John N. Tsitsiklis and Benjamin Van Roy
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
Cambridge, MA 02139
e-mail: jnt@mit.edu, bvr@mit.edu
Abstract
We propose and analyze an algorithm that approximates solutions
to the problem of optimal stopping in a discounted irreducible aperiodic Markov chain. The scheme involves the use of linear combinations of fixed basis functions to approximate a Q-function.
The weights of the linear combination are incrementally updated
through an iterative process similar to Q-learning, involving simulation of the underlying Markov chain. Due to space limitations,
we only provide an overview of a proof of convergence (with probability 1) and bounds on the approximation error. This is the first
theoretical result that establishes the soundness of a Q-learning-like algorithm when combined with arbitrary linear function approximators to solve a sequential decision problem. Though this
paper focuses on the case of finite state spaces, the results extend
naturally to continuous and unbounded state spaces, which are addressed in a forthcoming full-length paper.
1
INTRODUCTION
Problems of sequential decision-making under uncertainty have been studied extensively using the methodology of dynamic programming [Bertsekas, 1995]. The
hallmark of dynamic programming is the use of a value function, which evaluates
expected future reward, as a function of the current state. Serving as a tool for
predicting long-term consequences of available options, the value function can be
used to generate optimal decisions.
A number of algorithms for computing value functions can be found in the dynamic
programming literature. These methods compute and store one value per state in a
state space. Due to the curse of dimensionality, however, states spaces are typically
Approximate Solutions to Optimal Stopping Problems
1083
intractable, and the practical applications of dynamic programming are severely
limited.
The use of function approximators to "fit" value functions has been a central theme
in the field of reinforcement learning. The idea here is to choose a function approximator that has a tractable number of parameters, and to tune the parameters
to approximate the value function. The resulting function can then be used to
approximate optimal decisions.
There are two preconditions to the development of an effective approximation. First,
we need to choose a function approximator that provides a "good fit" to the value
function for some setting of parameter values. In this respect, the choice requires
practical experience or theoretical analysis that provides some rough information on
the shape of the function to be approximated. Second, we need effective algorithms
for tuning the parameters of the function approximator.
Watkins (1989) has proposed the Q-learning algorithm as a possibility. The original
analyses of Watkins (1989) and Watkins and Dayan (1992), the formal analysis
of Tsitsiklis (1994), and the related work of Jaakkola, Jordan, and Singh (1994),
establish that the algorithm is sound when used in conjunction with exhaustive lookup table representations (i.e., without function approximation). Jaakkola, Singh,
and Jordan (1995), Tsitsiklis and Van Roy (1996a), and Gordon (1995), provide a
foundation for the use of a rather restrictive class of function approximators with
variants of Q-learning. Unfortunately, there is no prior theoretical support for the
use of Q-learning-like algorithms when broader classes of function approximators
are employed.
In this paper, we propose a variant of Q-learning for approximating solutions to
optimal stopping problems, and we provide a convergence result that establishes its
soundness. The algorithm approximates a Q-function using a linear combination of
arbitrary fixed basis functions. The weights of these basis functions are iteratively
updated during the simulation of a Markov chain. Our result serves as a starting
point for the analysis of Q-learning-like methods when used in conjunction with
classes of function approximators that are more general than piecewise constant.
In addition, the algorithm we propose is significant in its own right. Optimal
stopping problems appear in practical contexts such as financial decision making
and sequential analysis in statistics. Like other problems of sequential decision
making, optimal stopping problems suffer from the curse of dimensionality, and
classical dynamic programming methods are of limited use. The method we propose
presents a sound approach to addressing such problems.
2
OPTIMAL STOPPING PROBLEMS
We consider a discrete-time, infinite-horizon, Markov chain with a finite state space
$S = \{1, \ldots, n\}$ and a transition probability matrix P. The Markov chain follows a
trajectory $x_0, x_1, x_2, \ldots$ where the probability that the next state is y given that the
current state is x is given by the (x, y)th element of P, and is denoted by $p_{xy}$. At
each time $t \in \{0, 1, 2, \ldots\}$ the trajectory can be stopped with a terminal reward of
$G(x_t)$. If the trajectory is not stopped, a reward of $g(x_t)$ is obtained. The objective
is to maximize the expected infinite-horizon discounted reward, given by
$$E\left[\sum_{t=0}^{\tau - 1} \alpha^t g(x_t) + \alpha^\tau G(x_\tau)\right],$$
where $\alpha \in (0, 1)$ is a discount factor and $\tau$ is the time at which the process is
stopped. The variable $\tau$ is defined by a stopping policy, which is given by a sequence
of mappings $\mu_t : S^{t+1} \to \{\text{stop}, \text{continue}\}$. Each $\mu_t$ determines whether or not to
terminate, based on $x_0, \ldots, x_t$. If the decision is to terminate, then $\tau = t$.
We define the value function to be a mapping from states to the expected discounted
future reward, given that an optimal policy is followed starting at a given state. In
particular, the value function $J^* : S \to \Re$ is given by
$$J^*(x) = \max_{\{\mu_t\}} E\left[\sum_{t=0}^{\tau - 1} \alpha^t g(x_t) + \alpha^\tau G(x_\tau) \;\Big|\; x_0 = x\right],$$
where $\tau$ is the stopping time given by the policy $\{\mu_t\}$. It is well known that the
value function is the unique solution to Bellman's equation:
$$J^*(x) = \max\Big[G(x),\; g(x) + \alpha \sum_{y \in S} p_{xy} J^*(y)\Big].$$
Furthermore, there is always an optimal policy that is stationary (i.e., of the form
$\{\mu_t = \mu^*, \forall t\}$) and defined by
$$\mu^*(x) = \begin{cases} \text{stop}, & \text{if } G(x) \ge J^*(x), \\ \text{continue}, & \text{otherwise}. \end{cases}$$
Following Watkins (1989), we define the Q-function as the function $Q^* : S \to \Re$
given by
$$Q^*(x) = g(x) + \alpha \sum_{y \in S} p_{xy} J^*(y).$$
It is easy to show that the Q-function uniquely satisfies
$$Q^*(x) = g(x) + \alpha \sum_{y \in S} p_{xy} \max\big[G(y), Q^*(y)\big], \qquad \forall x \in S. \qquad (1)$$
Furthermore, an optimal policy can be defined by
$$\mu^*(x) = \begin{cases} \text{stop}, & \text{if } G(x) \ge Q^*(x), \\ \text{continue}, & \text{otherwise}. \end{cases}$$
3
APPROXIMATING THE Q-FUNCTION
Classical computational approaches to solving optimal stopping problems involve
computing and storing a value function in a tabular form. The most common way
of doing this is through use of an iterative algorithm of the form
$$J_{k+1}(x) = \max\Big[G(x),\; g(x) + \alpha \sum_{y \in S} p_{xy} J_k(y)\Big].$$
When the state space is extremely large, as is the typical case, two difficulties arise.
The first is that computing and storing one value per state becomes intractable,
and the second is that computing the summation on the right hand side becomes
intractable. We will present an algorithm, motivated by Watkins' Q-learning, that
addresses both these issues, allowing for approximate solution of optimal stopping
problems with large state spaces.
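For a chain small enough to enumerate, this iteration can be written down directly; the following sketch is purely illustrative (names are placeholders) and is not part of the proposed method.

```python
import numpy as np

def stopping_value_iteration(P, g, G, alpha, n_iter=1000, tol=1e-8):
    """Tabular iteration J_{k+1}(x) = max[G(x), g(x) + alpha * sum_y P[x, y] J_k(y)]."""
    J = np.zeros(len(g))
    for _ in range(n_iter):
        J_next = np.maximum(G, g + alpha * P @ J)
        if np.max(np.abs(J_next - J)) < tol:
            J = J_next
            break
        J = J_next
    Q = g + alpha * P @ J        # the Q-function satisfying equation (1)
    stop = G >= Q                # the optimal stopping policy mu*
    return J, Q, stop
```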
3.1
LINEAR FUNCTION APPROXIMATORS
We consider approximations of $Q^*$ using a function of the form
$$Q(x, r) = \sum_{k=1}^{K} r(k)\,\phi_k(x).$$
Here, $r = (r(1), \ldots, r(K))$ is a parameter vector and each $\phi_k$ is a fixed scalar
function defined on the state space S. The functions $\phi_k$ can be viewed as basis
functions (or as vectors of dimension n), while each $r(k)$ can be viewed as the
associated weight. To approximate the Q-function, one usually tries to choose the
parameter vector r so as to minimize some error metric between the functions $Q(\cdot, r)$
and $Q^*(\cdot)$.
It is convenient to define a vector-valued function $\phi : S \to \Re^K$, by letting $\phi(x) = (\phi_1(x), \ldots, \phi_K(x))$. With this notation, the approximation can also be written in
the form $Q(x, r) = (\Phi r)(x)$, where $\Phi$ is viewed as a $|S| \times K$ matrix whose xth row
is equal to $\phi(x)$.
3.2
THE APPROXIMATION ALGORITHM
In the approximation scheme we propose, the Markov chain underlying the stopping
problem is simulated to produce a single endless trajectory $\{x_t \mid t = 0, 1, 2, \ldots\}$. The
algorithm is initialized with a parameter vector $r_0$, and after each time step, the
parameter vector is updated according to
$$r_{t+1} = r_t + \gamma_t\, \phi(x_t)\Big(g(x_t) + \alpha \max\big[\phi'(x_{t+1})\, r_t,\; G(x_{t+1})\big] - \phi'(x_t)\, r_t\Big),$$
where $\gamma_t$ is a scalar stepsize.
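A minimal sketch of one such update is given below; the feature map phi, the reward functions g and G, and the state sampler are placeholders, not objects defined in the paper.

```python
import numpy as np

def approximate_q_update(r, phi, x, x_next, g, G, alpha, gamma):
    """One simulated update of the parameter vector r along the trajectory."""
    target = g(x) + alpha * max(phi(x_next) @ r, G(x_next))   # one-step lookahead value
    td = target - phi(x) @ r                                   # temporal-difference term
    return r + gamma * td * phi(x)

# Running along a simulated trajectory (sample_next_state is assumed to draw y
# with probability P[x, y]):
#   r = np.zeros(K); x = x0
#   for t in range(T):
#       y = sample_next_state(x)
#       r = approximate_q_update(r, phi, x, y, g, G, alpha, gamma=0.1 / (1 + t))
#       x = y
```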
3.3
CONVERGENCE THEOREM
Before stating the convergence theorem, we introduce some notation that will make
the exposition more concise. Let $\pi(1), \ldots, \pi(n)$ denote the steady-state probabilities
for the Markov chain. We assume that $\pi(x) > 0$ for all $x \in S$. Let D be an $n \times n$
diagonal matrix with diagonal entries $\pi(1), \ldots, \pi(n)$. We define a weighted norm
$\|\cdot\|_D$ by
$$\|J\|_D = \sqrt{\sum_{x \in S} \pi(x)\, J^2(x)}.$$
We define a "projection matrix" $\Pi$ that induces a weighted projection onto the
subspace $X = \{\Phi r \mid r \in \Re^K\}$ with projection weights equal to the steady-state
probabilities. In particular,
$$\Pi J = \arg\min_{\bar{J} \in X} \|J - \bar{J}\|_D.$$
It is easy to show that $\Pi$ is given by $\Pi = \Phi(\Phi' D \Phi)^{-1} \Phi' D$.
We define an operator $F : \Re^n \to \Re^n$ by
$$F J = g + \alpha P \max[J, G],$$
where the max denotes a componentwise maximization.
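For a small chain where P and $\pi$ are available explicitly, the operator F and the projection $\Pi$ can be built directly and the fixed-point characterization in Theorem 1 below checked numerically; the sketch below is illustrative only.

```python
import numpy as np

def make_operators(P, g, G, alpha, Phi, pi):
    """Construct F and the weighted projection Pi used in Theorem 1."""
    D = np.diag(pi)
    Pi = Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D)   # Pi = Phi (Phi' D Phi)^{-1} Phi' D
    F = lambda J: g + alpha * P @ np.maximum(J, G)
    return F, Pi

# The limit of convergence r* satisfies Phi r* = Pi F(Phi r*); since the composition
# Pi F(.) is a ||.||_D-contraction with coefficient alpha, iterating J <- Pi(F(J))
# converges to Phi r* for a small example chain.
```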
We have the following theorem that ensures soundness of the algorithm:
Theorem 1 Let the following conditions hold:
(a) The Markov chain has a unique invariant distribution $\pi$ that satisfies $\pi' P = \pi'$,
with $\pi(x) > 0$ for all $x \in S$.
(b) The matrix $\Phi$ has full column rank; that is, the "basis functions" $\{\phi_k \mid k = 1, \ldots, K\}$ are linearly independent.
(c) The step sizes $\gamma_t$ are nonnegative, nonincreasing, and predetermined. Furthermore, they satisfy $\sum_{t=0}^{\infty} \gamma_t = \infty$ and $\sum_{t=0}^{\infty} \gamma_t^2 < \infty$.
We then have:
(a) The algorithm converges with probability 1.
(b) The limit of convergence $r^*$ is the unique solution of the equation
$$\Pi F(\Phi r^*) = \Phi r^*.$$
(c) Furthermore, $r^*$ satisfies
$$\|\Phi r^* - Q^*\|_D \le \frac{1}{1 - \alpha}\, \|\Pi Q^* - Q^*\|_D.$$
3.4
OVERVIEW OF PROOF
The proof of Theorem 1 involves an analysis in a Euclidean space where the operator
F and the projection $\Pi$ serve as tools for interpreting the algorithm's dynamics. The
ideas for this type of analysis can be traced back to Van Roy and Tsitsiklis (1996)
and have since been used to analyze Sutton's temporal-difference learning algorithm
(Tsitsiklis and Van Roy, 1996b). Due to space limitations, we only provide an
overview of the proof.
We begin by establishing that, with respect to the norm ||·||_D, P is a nonexpansion
and F is a contraction. In the first case, we apply Jensen's inequality to obtain
||PJ||²_D = Σ_{x∈S} π(x) ( Σ_{y∈S} p_xy J(y) )²
≤ Σ_{x∈S} π(x) Σ_{y∈S} p_xy J²(y)
= Σ_{y∈S} Σ_{x∈S} π(x) p_xy J²(y)
= Σ_{y∈S} π(y) J²(y)
= ||J||²_D.
The fact that F is a contraction now follows from the fact that
|(F J)(x) - (F J̄)(x)| ≤ α |(P J)(x) - (P J̄)(x)|,
for any J, J̄ ∈ R^n and any state x ∈ S.
Let s : R^m → R^m denote the "steady-state" expectation of the steps taken by the
algorithm:
s(r) = E_0[ φ(x_t) ( g(x_t) + α max[ φ'(x_{t+1}) r, G(x_{t+1}) ] - φ'(x_t) r ) ],
where E_0[·] denotes the expectation with respect to steady-state probabilities. Some
simple algebra gives
s(r) = Φ'D( F(Φr) - Φr ).
We focus on analyzing a deterministic algorithm of the form
r̄_{t+1} = r̄_t + γ_t s(r̄_t).
The convergence of the stochastic algorithm we have proposed can be deduced
from that of this deterministic algorithm through use of a theorem on stochastic
approximation, contained in (Benveniste, et al., 1990).
Note that the composition ΠF(·) is a contraction with respect to ||·||_D with contraction coefficient α, since the projection is nonexpansive and F is a contraction. It
follows that ΠF(·) has a fixed point of the form Φr* for some r* ∈ R^m that uniquely
satisfies
Φr* = ΠF(Φr*).
To establish convergence, we consider the potential function U(r) = (1/2) ||r - r*||².
We have
(∇U(r))' s(r) = (r - r*)' Φ'D( F(Φr) - Φr )
= (r - r*)' Φ'D( ΠF(Φr) + (I - Π)F(Φr) - Φr )
= (Φr - Φr*)' D( ΠF(Φr) - Φr ),
where the last equality follows because Φ'DΠ = Φ'D. Using the contraction property of F and the nonexpansion property of projection, we have
||ΠF(Φr) - Φr*||_D = ||ΠF(Φr) - ΠF(Φr*)||_D ≤ α ||Φr - Φr*||_D,
and it follows from the Cauchy-Schwartz inequality that
(∇U(r))' s(r) = (Φr - Φr*)' D( ΠF(Φr) - Φr* + Φr* - Φr )
≤ ||Φr - Φr*||_D ||ΠF(Φr) - Φr*||_D - ||Φr - Φr*||²_D
≤ (α - 1) ||Φr - Φr*||²_D.
Since Φ has full column rank, it follows that (∇U(r))' s(r) ≤ -ε U(r) for some fixed
ε > 0, and r̄_t converges to r*.
We can further establish the desired error bound:
||Φr* - Q*||_D ≤ ||Φr* - ΠQ*||_D + ||ΠQ* - Q*||_D
= ||ΠF(Φr*) - ΠQ*||_D + ||ΠQ* - Q*||_D
≤ α ||Φr* - Q*||_D + ||ΠQ* - Q*||_D,
and it follows that
||Φr* - Q*||_D ≤ (1 / (1 - α)) ||ΠQ* - Q*||_D.
4 CONCLUSION
We have proposed an algorithm for approximating Q-functions of optimal stopping
problems using linear combinations of fixed basis functions. We have also presented
a convergence theorem and overviewed the associated analysis. This paper has
served a dual purpose of establishing a new methodology for solving difficult optimal
stopping problems and providing a starting point for analyses of Q-learning-like
algorithms when used in conjunction with function approximators.
The line of analysis presented in this paper easily generalizes in several directions.
First, it extends to unbounded continuous state spaces. Second, it can be used to
analyze certain variants of Q-learning that can be used for optimal stopping problems where the underlying Markov processes are not irreducible and/or aperiodic.
Rigorous analyses of some extensions, as well as the case that was discussed in this
paper, are presented in a forthcoming full-length paper.
Acknowledgments
This research was supported by the NSF under grant DMI-9625489 and the ARO
under grant DAAL-03-92-G-Ol15.
References
Benveniste, A., Metivier, M., & Priouret, P. (1990) Adaptive Algorithms and
Stochastic Approximations, Springer-Verlag, Berlin.
Bertsekas, D. P. (1995) Dynamic Programming and Optimal Control. Athena Scientific, Belmont, MA.
Gordon, G. J. (1995) Stable Function Approximation in Dynamic Programming.
Technical Report: CMU-CS-95-103, Carnegie Mellon University.
Jaakkola, T., Jordan M. 1., & Singh, S. P. (1994) "On the Convergence of Stochastic
Iterative Dynamic Programming Algorithms," Neural Computation, Vol. 6, No.6.
Jaakkola T., Singh, S. P., & Jordan, M. 1. (1995) "Reinforcement Learning Algorithms for Partially Observable Markovian Decision Processes," in Advances in
Neural Information Processing Systems 7, J. D. Cowan, G. Tesauro, and D. Touretzky, editors, Morgan Kaufmann.
Sutton, R. S. (1988) Learning to Predict by the Method of Temporal Differences.
Machine Learning, 3:9-44.
Tsitsiklis, J. N. (1994) "Asynchronous Stochastic Approximation and Q-Learning,"
Machine Learning, vol. 16, pp. 185-202.
Tsitsiklis, J. N. & Van Roy, B. (1996a) "Feature-Based Methods for Large Scale
Dynamic Programming," Machine Learning, Vol. 22, pp. 59-94.
Tsitsiklis, J. N. & Van Roy, B. (1996b) An Analysis of Temporal-Difference Learning
with Function Approximation. Technical Report: LIDS-P-2322, Laboratory for
Information and Decision Systems, Massachusetts Institute of Technology.
Van Roy, B. & Tsitsiklis, J. N. (1996) "Stable Linear Approximations to Dynamic
Programming for Stochastic Control Problems with Local Transitions," in Advances
in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M.
E. Hasselmo, editors, MIT Press.
Watkins, C. J. C. H. (1989) Learning from Delayed Rewards. Doctoral dissertation,
University of Cambridge, Cambridge, United Kingdom.
Watkins, C. J. C. H. & Dayan, P. (1992) "Q-Iearning," Machine Learning, vol. 8,
pp. 279-292.
326 | 1,298 | 488 Solutions to the XOR Problem
Frans M. Coetzee *
coetzee@ece.cmu.edu
Department of Electrical Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
Virginia L. Stonick
ginny@ece.cmu.edu
Department of Electrical Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
A globally convergent homotopy method is defined that is capable
of sequentially producing large numbers of stationary points of the
multi-layer perceptron mean-squared error surface. Using this algorithm large subsets of the stationary points of two test problems
are found. It is shown empirically that the MLP neural network
appears to have an extreme ratio of saddle points compared to
local minima, and that even small neural network problems have
extremely large numbers of solutions.
1 Introduction
The number and type of stationary points of the error surface provide insight into
the difficulties of finding the optimal parameters of the network, since the stationary
points determine the degree of the system [1]. Unfortunately, even for the small
canonical test problems commonly used in neural network studies, it is still unknown
how many stationary points there are, where they are, and how these are divided
into minima, maxima and saddle points.
Since solving the neural equations explicitly is currently intractable, it is of interest
to be able to numerically characterize the error surfaces of standard test problems.
To perform such a characterization is non-trivial, requiring methods that reliably
converge and are capable of finding large subsets of distinct solutions. It can be
shown[2] that methods which produce only one solution set on a given trial become
inefficient (at a factorial rate) at finding large sets of multiple distinct solutions,
since the same solutions are found repeatedly. This paper presents the first provably globally convergent homotopy methods capable of finding large subsets of the
* Currently with Siemens Corporate Research, Princeton NJ 08540
stationary points of the neural network error surface. These methods are used to
empirically quantify not only the number but also the type of solutions for some
simple neural networks .
1.1 Sequential Neural Homotopy Approach Summary
We briefly acquaint the reader with the principles of homotopy methods, since these
approaches differ significantly from standard descent procedures.
Homotopy methods solve systems of nonlinear equations by mapping the known
solutions from an initial system to the desired solution of the unsolved system of
equations. The basic method is as follows: Given a final set of equations f(x) = 0,
x ∈ D ⊆ R^n, whose solution is sought, a homotopy function h : D × T → R^n is
defined in terms of a parameter τ ∈ T ⊆ R, such that
h(x, τ) = g(x) when τ = 0,  and  h(x, τ) = f(x) when τ = 1,
where the initial system of equations g(x) = 0 has a known solution. For optimization
problems, f(x) = ∇_x ε²(x), where ε²(x) is the error measure. Conceptually, h(x, τ) = 0
is solved numerically for x for increasing values of τ, starting at τ = 0 at the known
solution, and incrementally varying τ and correcting the solution x until τ = 1,
thereby tracing a path from the initial to the final solutions.
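A minimal sketch of this continuation procedure on a one-dimensional toy system (the particular f, g, step count, and tolerances are illustrative assumptions, not the neural homotopy studied in the paper):

```python
import numpy as np

def f(x):            # final system whose root is sought
    return x**3 - 2.0 * x - 5.0

def g(x, x0=0.0):    # initial system with the known root x0
    return x - x0

def h(x, tau):       # convex homotopy between g and f
    return (1.0 - tau) * g(x) + tau * f(x)

def dh_dx(x, tau, eps=1e-6):   # numerical derivative for the Newton corrector
    return (h(x + eps, tau) - h(x - eps, tau)) / (2.0 * eps)

x = 0.0                                    # known solution of g(x) = 0
for tau in np.linspace(0.0, 1.0, 201):     # increase tau in small increments
    for _ in range(20):                    # Newton correction at this tau
        step = h(x, tau) / dh_dx(x, tau)
        x -= step
        if abs(step) < 1e-12:
            break

print("tracked root of f:", x, "residual:", f(x))
```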
The power and the problems of homotopy methods lie in constructing a suitable
function h. Unfortunately, for a given f most choices of h will fail, and, with the exception of polynomial systems, no guaranteed procedures for selecting h exist. Paths
generally do not connect the initial and final solutions, either due to non-existence
of solutions, or due to paths diverging to infinity. However, if a theoretical proof
of existence of a suitable trajectory can be constructed, well-established numerical
procedures exist that reliably track the trajectory.
The following theorem, proved in [2], establishes that a suitable homotopy exists
for the standard feed-forward backpropagation neural networks:
Theorem 1.1 Let ε² be the unregularized mean square error (MSE) problem for
the multi-layer perceptron network, with weights β ∈ R^n. Let β_0 ∈ U ⊂ R^n and
a ∈ V ⊂ R^n, where U and V are open bounded sets. Then except for a set of
measure zero (β_0, a) ∈ U × V, the solutions (β, τ) of the set of equations
h(β, τ) = (1 - τ)(β - β_0) + τ ∇_β ( ε² + μ ψ(||β - a||²) ) = 0        (1)
where μ > 0 and ψ : R → R satisfies 2ψ''(a²)a² + ψ'(a²) > 0 as a → ∞, form noncrossing one-dimensional trajectories for all τ ∈ R, which are bounded for all τ ∈ [0, 1].
Furthermore, the path through (β_0, 0) connects to at least one solution (β*, 1) of the
regularized MSE error problem
∇_β ( ε² + μ ψ(||β - a||²) ) = 0.        (2)
On τ ∈ [0, 1] the approach corresponds to a pseudo-quadratic error surface being
deformed continuously into the final neural network error surface.¹ Multiple solutions
can be obtained by choosing different initial values β_0. Every desired solution
β* is accessible via an appropriate choice of a, since β_0 = β* suffices.
¹ The common engineering heuristic whereby some arbitrary error surface is relaxed into
another error surface generally does not yield well defined trajectories.
Figure 1 qualitatively illustrates typical paths obtained for this homotopy.² The
paths typically contain only a few solutions, are disconnected, and diverge to infinity.
A novel two-stage homotopy [2, 3] is used to overcome these problems by constructing
and solving two homotopy equations. The first homotopy system is as described
above. A synthetic second homotopy solves an auxiliary set of equations on a non-Euclidean compact manifold (S^n(0; R) × A, where A is a compact subset of R) and
is used to move between the disconnected trajectories of the first homotopy.
[Figure 1: (a) β plotted against τ, with τ running from -infinity to +infinity and β roughly between -4 and 4; (b) points in a two-dimensional input plane.]
Figure 1: (a) Typical homotopy trajectories , illustrating divergence of paths and
multiple solutions occurring on one path. (b) Plot of two-dimensional vectors used
as training data for the second test problem (Yin-Yang problem).
2 Test Problems
The test problems described in this paper are small to allow for (i) a large number
of repeated runs, and (ii) to make it possible to numerically distinguish between
solutions. Classification problems were used since these present the only interesting
small problems, even though the MSE criterion is not necessarily best for classification. Unlike most classification tasks, all algorithms were forced to approximate
the stationary point accurately by requiring the 11 norm of the gradient to be less
than 10- 10 , and ensuring that solutions differed in h by more than 0.01.
The historical XOR problem is considered first. The data points (-1, -1), (1,1)'
'(-1,1) and (1,-1) were trained to the target values -0.8,-0.8, 0.8 and 0.8. A
network with three inputs (one constant), two hidden layer nodes and one output
node were used , with hyperbolic tangent transfer functions on the hidden and final
² Note that the homotopy equation and its trajectories exist outside the interval
τ ∈ [0, 1].
nodes. The regularization used μ = 0.05, ψ(x) = x and a = 0 (no bifurcations were
found for this value during simulations). This problem was chosen since it is small
enough to serve as a benchmark for comparing the convergence and performance of
the different algorithms. The second problem, referred to as the Yin-Yang problem ,
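A sketch of the corresponding regularized error surface and a finite-difference gradient for a network of this shape (the weight layout and the absence of an output bias are our assumptions; the text specifies only the 3-2-1 tanh architecture, the targets, and μ = 0.05 with ψ(x) = x, a = 0):

```python
import numpy as np

# XOR training set: the third input component is the constant bias input.
X = np.array([[-1.0, -1.0, 1.0],
              [ 1.0,  1.0, 1.0],
              [-1.0,  1.0, 1.0],
              [ 1.0, -1.0, 1.0]])
T = np.array([-0.8, -0.8, 0.8, 0.8])

MU = 0.05   # regularization weight; psi(x) = x and a = 0 as in the text

def unpack(beta):
    # Assumed layout: 6 input-to-hidden weights, then 2 hidden-to-output weights.
    W1 = beta[:6].reshape(2, 3)
    w2 = beta[6:8]
    return W1, w2

def regularized_error(beta):
    W1, w2 = unpack(beta)
    hidden = np.tanh(X @ W1.T)            # 4 x 2 hidden activations
    out = np.tanh(hidden @ w2)            # 4 network outputs
    mse = np.mean((out - T) ** 2)
    return mse + MU * np.sum(beta ** 2)   # mu * psi(||beta - a||^2) with psi(x) = x, a = 0

def gradient(beta, eps=1e-6):
    # central-difference gradient, adequate for an illustrative check
    g = np.zeros_like(beta)
    for i in range(beta.size):
        e = np.zeros_like(beta)
        e[i] = eps
        g[i] = (regularized_error(beta + e) - regularized_error(beta - e)) / (2 * eps)
    return g

beta = np.random.default_rng(2).normal(size=8)
print("error:", regularized_error(beta), " |grad|_1:", np.abs(gradient(beta)).sum())
```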
is shown in Figure 1. The problem has 23 and 22 data points in classes one and two
respectively, and target values ?0.7. Empirical evidence indicates that the smallest
single hidden layer network capable of solving the problem has five hidden nodes.
We used a net with three inputs, five hidden nodes and one output. This problem
is interesting since relatively high classification accuracy is obtained using only a
single neuron, but a 100% classification performance requires at least five hidden
nodes and one of only a few global weight solutions.
The stationary points form equivalence classes under renumbering of the weights
or appropriate interchange of weight signs. For the XOR problem each solution
class contains up to 2² · 2! = 8 distinct solutions; for the Yin-Yang network, there
are 2⁵ · 5! = 3840 symmetries. The equivalence classes are reported in the following
sections.
3 Test Results
A Polak-Ribière conjugate gradient (CG) method was used as a control since this
method can find only minima, as contrasted to the other algorithms, all of which
are attracted by all stationary points. In the second algorithm, the homotopy
equation (1) was solved by following the main path until divergence. A damped
Newton (DN) method and the two-stage homotopy method completed the set of
four algorithms considered. The different algorithms were initialized with the same
n random weights β_0 ∈ S^{n-1}(0; √(2n)).
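A sketch of drawing such initial weight vectors, using the standard normalize-a-Gaussian construction (how the sampling was actually performed is not stated, so this is an assumption):

```python
import numpy as np

def random_on_sphere(n, rng):
    """Uniform sample on the sphere of radius sqrt(2n) centred at the origin."""
    v = rng.normal(size=n)
    return np.sqrt(2.0 * n) * v / np.linalg.norm(v)

rng = np.random.default_rng(3)
beta0 = random_on_sphere(8, rng)      # e.g. the 8-weight XOR network sketched above
print(beta0, np.linalg.norm(beta0))   # norm equals sqrt(16) = 4
```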
3.1 Control - The XOR problem
The total number and classification of the solutions obtained for 250 iterations on
each algorithm are shown in Table 1.
Table 1: Number of equivalence class solutions obtained. XOR Problem

Algorithm    Total Distinct # Solutions   # Minima   # Maxima   # Saddle Points
CG                       17                  17          0             0
DN                       44                   6          0            38
One Stage                28                  16          0            12
Two Stage                61                  17          0            44
Total                    61                  17          0            44
The probability of finding a given solution on a trial is shown in Figure 2. The
two-stage homotopy method finds almost every solution from every initial point.
In contrast to the homotopy approaches, the Newton method exhibits poor convergence, even when heavily damped. The sets of saddle points found by the DN
algorithm and the homotopy algorithms are to a large extent disjoint, even though
the same initial weights were used. For the Newton method solutions close to the
initial point are typically obtained, while the initial point for the homotopy algorithm might differ significantly from the final solution.
[Figure 2: four bar charts, from top to bottom Conjugate Gradient, Newton, Single Stage Homotopy, and Two Stage Homotopy; horizontal axis: equivalence class (0-60); vertical axis: P_i.]
Figure 2: Probability of finding equivalence class i on a trial. Solutions have been
sorted based on percentage of the training set correctly classified. Dark bars indicate
local minima, light bars saddle points. XOR problem.
488 Solutions to the XOR Problem
415
Table 2: Number of solutions correctly classifying x% of target data.

Classification    25%    50%    75%    100%
Minimum            17     17      4       4
Saddle             44     44     20       0
Total Distinct     61     61     24       4
This difference illustrates
that homotopy arrives at solutions in a fundamentally different way than descent
approaches.
Based on these results we conclude that the two-stage homotopy meets its objective
of significantly increasing the number of solutions produced on a single trial. The
homotopy algorithms converge more reliably than Newton methods, in theory and
in practise. These properties make homotopy attractive for characterizing error
surfaces. Finally, due to the large number of trials and significant overlap between
the solution sets for very different algorithms, we believe that Tables 1-2 represent
accurate estimates for the number and types of solutions to the regularized XOR
problem.
3.2 Results on the Yin-Yang problem
The first three algorithms for the Yin-Yang problem were evaluated for 100 trials.
The conjugate gradient method showed excellent stability, while the Newton method
exhibited serious convergence problems, even with heavy damping. The two-stage
algorithm was still producing solutions when the runs were terminated after multiple
weeks of computer time, allowing evaluation of only ten different initial points.
Table 3: Number of equivalence class solutions obtained. Yin-Yang Problem

Algorithm             Total Distinct # Solutions   # Minima   # Maxima   # Saddle Points
Conjugate Gradient                14                  14          0             0
Damped Newton                     10                   0          0            10
One Stage Homotopy                78                  15          0            63
Two Stage Homotopy              1633                  12          0          1621
Total                           1722                  28          0          1694
Table 4: Number of solutions correctly classifying x% of target data.

Classification     75     80     90     95    96    97   98   99   100%
Minimum            28     28     28     26    26     5    5    2      2
Saddle           1694   1694   1682    400   400    13   13    3      3
Total Distinct   1722   1722   1710    426   426    18   18    5      5
The results in Tables 3-4 for the number of minima are believed to be accurate, due
to verification provided by the conjugate gradient method. The number of saddle
points should be seen as a lower bound. The regularization ensured that the saddle
points were well conditioned, i.e. the Hessian was not rank deficient, and these
solutions are therefore distinct point solutions.
4 Conclusions
The homotopy methods introduced in this paper overcome the difficulties of poor
convergence and the problem of repeatedly finding the same solutions. The use
of these methods therefore produces significant new empirical insight into some
extraordinary unsuspected properties of the neural network error surface.
The error surface appears to consist of relatively few minima, separated by an
extraordinarily large 'number of saddle points. While one recent paper by Goffe et
al [4] had given some numerical estimates based on which it was concluded that a
large number of minima in neural nets exist (they did not find a significant number
of these), this extreme ratio of saddle points to minima appears to be unexpected.
No maxima were discovered in the above runs; in fact none appear to exist within
the sphere where solutions were sought (this seems likely given the regularization).
The numerical results reveal astounding complexity in the neural network problem.
If the equivalence classes are complete, then 488 solutions for the XOR problem are
implied, of which 136 are minima. For the Yin-Yang problem, 6,600,000+ solutions
and 107,520+ minima were characterized. For the simple architectures considered,
these numbers appear extremely high. We are unaware of any other system of
equations having these remarkable properties.
Finally, it should be noted that the large number of saddle points and the small
ratio of minima to saddle points in neural problems can create tremendous computational difficulties for approaches which produce stationary points, rather than
simple minima. The efficiency of any such algorithm at producing solutions will be
negated by the fact that, from an optimization perspective, most of these solutions
will be useless.
Acknowledgements. The partial support of the National Science Foundation by
grant MIP-9157221 is gratefully acknowledged.
References
[1] E . H. Rothe, Introduction to Various Aspects of Degree Theory in Banach Spaces.
Mathematical Surveys and Monographs (23), Providence, Rhode Island: American
Mathematical Society, 1986. ISBN 0-82218-1522-9.
[2] F. M. Coetzee, Homotopy Approaches for the Analysis and Solution of Neural
Network and Other Nonlinear Systems of Equations. PhD thesis, Carnegie Mellon
University, Pittsburgh, PA, May 1995.
[3] F . M. Coetzee and V. L. Stonick, "Sequential homotopy-based computation of
multiple solutions to nonlinear equations," in Proc. IEEE ICASSP, (Detroit), IEEE,
May 1995.
[4] W. L. Goffe, G. D. Ferrier, and J. Rogers, "Global optimization of statistical
functions with simulated annealing," Jour. Econometrics, vol. 60, no. 1-2, pp. 65-99,
1994.
327 | 1,299 | The Neurothermostat:
Predictive Optimal Control of
Residential Heating Systems
Michael C. Mozer t , Lucky Vidmart , Robert H. Dodiert
tDepartment of Computer Science
tDepartment of Civil, Environmental, and Architectural Engineering
University of Colorado, Boulder, CO 80309-0430
Abstract
The Neurothermostat is an adaptive controller that regulates indoor air temperature in a residence by switching a furnace on or
off. The task is framed as an optimal control problem in which
both comfort and energy costs are considered as part of the control objective. Because the consequences of control decisions are
delayed in time, the Neurothermostat must anticipate heating demands with predictive models of occupancy patterns and the thermal response of the house and furnace. Occupancy pattern prediction is achieved by a hybrid neural net / look-up table. The Neurothermostat searches, at each discrete time step, for a decision
sequence that minimizes the expected cost over a fixed planning
horizon. The first decision in this sequence is taken, and this process repeats. Simulations of the Neurothermostat were conducted
using artificial occupancy data in which regularity was systematically varied, as well as occupancy data from an actual residence.
The Neurothermostat is compared against three conventional policies, and achieves reliably lower costs. This result is robust to the
relative weighting of comfort and energy costs and the degree of
variability in the occupancy patterns.
For over a quarter century, the home automation industry has promised to revolutionize our lifestyle with the so-called Smart House@ in which appliances, lighting,
stereo, video, and security systems are integrated under computer control. However, home automation has yet to make significant inroads, at least in part because
software must be tailored to the home occupants.
Instead of expecting the occupants to program their homes or to hire someone to
do so, one would ideally like the home to essentially program itself by observing
the lifestyle of the occupants. This is the goal of the Neural Network House (Mozer
et al., 1995), an actual residence that has been outfitted with over 75 sensorsincluding temperature, light, sound, motion-and actua.tors to control air heating,
water heating, lighting, and ventilation. In this paper, we describe one research
project within the house, the Neurothermostat, that learns to regulate the indoor
air temperature automatically by observing and detecting patterns in the occupants'
schedules and comfort preferences. We focus on the problem of air heating with
a whole-house furnace, but the same approach can be taken with alternative or
multiple heating devices, and to the problems of cooling and ventilation.
1 TEMPERATURE REGULATION AS AN OPTIMAL CONTROL PROBLEM
Traditionally, the control objective of air temperature regulation has been to minimize energy consumption while maintaining temperature within an acceptable comfort margin during certain times of the day and days of the week. This is sensible
in commercial settings, where occupancy patterns follow simple rules and where
energy considerations dominate individual preferences. In a residence, however, the
desires and schedules of occupants need to be weighted equally with energy considerations. Consequently, we frame the task of air temperature regulation as a
problem of maximizing occupant comfort and minimizing energy costs.
These two objectives clearly conflict, but they can be integrated into a unified
framework via an optimal control approach in which the goal is to heat the house
according to a policy that minimizes the cost
J = lim_{n→∞} (1/n) Σ_{t=t₀+1}^{t₀+n} [ e(u_t) + m(x_t) ],
where time, t, is quantized into nonoverlapping intervals during which we assume
all environmental variables remain constant, t₀ is the interval ending at the current
time, u_t is the control decision for interval t (e.g., turn the furnace on), e(u) is the
energy cost associated with decision u, x_t is the environmental state during interval
t, which includes the indoor temperature and the occupancy status of the home,
and m(x) is the misery of the occupant given state x. To add misery and energy
costs, a common currency is required. Energy costs are readily expressed in dollars.
We also determine misery in dollars, as we describe later.
While we have been unable to locate any earlier work that combined energy and
comfort costs in an optimal control framework, optimal control has been used in a
variety of building energy system control applications (e.g., Henze & Dodier, 1996;
Khalid & Omatu, 1995),
2 THE NEUROTHERMOSTAT
Figure 1 shows the system architecture of the Neurothermostat and its interaction
with the environment. The heart of the Neurothermostat is a controller that, at
time intervals of δ minutes, can switch the house furnace on or off. Because the consequences of control decisions are delayed in time, the controller must be predictive
to anticipate heating demands. The three boxes in the Figure depict components
that predict or model various aspects of the environment. We explain their purpose
via a formal description of the controller operation.
The controller considers sequences of '" decisions, denoted u, and searches for the
sequence that minimizes the expected total cost, i u , over the planning horizon of
",6 minutes:
where the expectation is computed over future states of the environment conditional
on the decision sequence u. The energy cost in an interval depends only on the
control decision during that interval. The misery cost depends on two components
Figure 1: The Neurothermostat and its interaction with the environment
of the state: the house occupancy status, o(t) (0 for empty, 1 for occupied), and
the indoor air temperature, h_u(t):
m_u(o(t), h_u(t)) = p[o(t) = 1] m(1, h_u(t)) + p[o(t) = 0] m(0, h_u(t)).
Because the true quantities e, hu, m, and p are unknown, they must be estimated .
The house thermal model of Figure 1 provides e and hu, the occupant comfort cost
model provides m, and the home occupancy predictor provides p.
We follow a tradition of using neural networks for prediction in the context of building energy system control (e.g., Curtiss, Kreider, & Brandemuehl, 1993; Ferrano &
Wong, 1990; Miller & Seem, 1991), although in our initial experiments we require
a network only for the occupancy prediction.
2.1
PREDICTIVE OPTIMAL CONTROLLER
We propose a closed-loop controller that combines prediction with fixed-horizon
planning, of the sort proposed by Clarke, Mohtadi, and Tuffs (1987). At each
time step, the controller considers all possible decision sequences over the planning
horizon and selects the sequence that minimizes the expected total cost, J. The
first decision in this sequence is then performed. After δ minutes, the planning and
execution process is repeated. This approach assumes that beyond the planning
horizon, all costs are independent of the first decision in the sequence.
While dynamic programming is an efficient search algorithm, it requires a discrete
state space. Wishing to avoid quantizing the continuous variable of indoor temperature, and the errors that might be introduced, we performed performed exhaustive
search through the possible decision sequences, which was tractable due to relatively short horizons and two additional domain constraints. First, because the
house occupancy status can reasonably be assumed to be independent of the indoor temperature, p need not be recalculated for every possible decision sequence.
Second, the current occupancy status depends on the recent occupancy history.
Consequently, one needs to predict occupancy patterns over the planning horizon,
o ∈ {0, 1}^κ, to compute p. However, because most occupancy sequences are highly
improbable, we find that considering only the most likely sequences, those containing
at most two occupancy state transitions, produces the same decisions as doing
the search over the entire distribution, reducing the cost from O(2^κ) to O(κ²).
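A sketch of this restricted enumeration (the encoding of sequences is an illustrative assumption):

```python
def low_transition_sequences(kappa, start_state):
    """All binary occupancy sequences of length kappa that begin from start_state
    and contain at most two state transitions -- O(kappa^2) of them."""
    seqs = []
    for i in range(kappa + 1):            # first transition before step i (i == kappa: none)
        for j in range(i, kappa + 1):     # second transition before step j
            seq, state = [], start_state
            for t in range(kappa):
                if t == i:
                    state = 1 - state
                if t == j and j != i:
                    state = 1 - state
                seq.append(state)
            seqs.append(tuple(seq))
    return sorted(set(seqs))

seqs = low_transition_sequences(kappa=12, start_state=1)
print(len(seqs), "candidate sequences instead of", 2 ** 12)
```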
2.2
OCCUPANCY PREDICTOR
The basic task of the occupancy predictor is to estimate the probability that the
occupant will be home δ minutes in the future. The occupancy predictor can be
run iteratively to estimate the probability of an occupancy pattern.
If occupants follow a deterministic daily schedule, a look up table indexed by time
of day and current occupancy state should capture occupancy patterns. We thus
use a look up table to encode whatever structure possible, and a neural network
to encode residual structure. The look up table divides time into fixed δ-minute
bins. The neural network consisted of the following inputs: current time of day; day
of the week; average proportion of time home was occupied in the 10, 20, and 30
minutes from the present time of day on the previous three days and on the same
day of the week during the past four weeks; and the proportion of time the home
was occupied during the past 60, 180, and 360 minutes . The network, a standard
three-layer architecture, was trained by back propagation. The number of hidden
units was chosen by cross validation .
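A sketch of the lookup-table component (the counting and smoothing scheme used to estimate the table entries is our assumption; the text specifies only the δ-minute binning and the indexing by time of day and current occupancy):

```python
import numpy as np

BIN_MINUTES = 10                  # delta-minute bins
BINS = 24 * 60 // BIN_MINUTES

# counts[b, s, s_next] counts transitions observed from occupancy state s in
# time-of-day bin b to state s_next delta minutes later (Laplace smoothed).
counts = np.ones((BINS, 2, 2))

def update(minute_of_day, state_now, state_next):
    counts[minute_of_day // BIN_MINUTES, state_now, state_next] += 1.0

def lookup_probability(minute_of_day, state_now):
    """Estimated probability the home is occupied delta minutes from now."""
    c = counts[minute_of_day // BIN_MINUTES, state_now]
    return c[1] / c.sum()

update(8 * 60, 1, 0)              # e.g. occupant observed leaving around 8 am
print(lookup_probability(8 * 60, 1))
```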
2.3
THERMAL MODEL OF HOUSE AND FURNACE
A thermal model of the house and furnace predicts future indoor temperature(s)
as a function of the outdoor temperature and the furnace operation. While one
could perform system identification using neural networks, a simple parameterized
resistance-capacitance (RC) model provides a reasonable first-order approximation.
The RC model assumes that : the inside of the house is at a uniform temperature,
and likewise the outside; a homogeneous flat wall separates the inside and outside,
and this wall has a thermal resistance R and thermal capacitance C; the entire
wall mass is at the inside temperature; and the heat input to the inside is Q when
the furnace is running or zero otherwise. Assuming that the outdoor temperature,
denoted g, is constant, the the indoor temperature at time t, hu(t), is:
hu(t) = hu(t - 1) exp( -606 / RC)
+ (RQu(t) + 9 )(1 -
exp( -606 / RC)),
where h_u(t₀) is the actual indoor temperature at the current time. R and C were
determined by architectural properties of the Neural Network House to be 1.33
Kelvins/kilowatt and 16 megajoules/Kelvin, respectively. The House furnace is
rated at 133,000 Btu/hour and has 92.5% fuel use efficiency, resulting in an output
of Q = 36.1 kilowatts. With natural gas at $.485 per CCF, the cost of operating
the furnace , e, is $.7135 per hour.
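A sketch of one δ-minute step of this model with the stated parameters (the unit conversions, e.g. expressing C in kJ/K so that RC comes out in seconds, are our assumptions about how the quantities combine):

```python
import math

R = 1.33          # thermal resistance, Kelvin per kilowatt
C = 16_000.0      # thermal capacitance, kilojoules per Kelvin (16 MJ/K)
Q = 36.1          # furnace heat output, kilowatts
DELTA = 10.0      # control interval delta, in minutes

def next_indoor_temp(h_prev, u, outdoor):
    """One delta-minute update of the RC house model; u is 1 if the furnace is on."""
    decay = math.exp(-60.0 * DELTA / (R * C))   # RC here has units of seconds
    return h_prev * decay + (R * Q * u + outdoor) * (1.0 - decay)

# e.g. ten minutes of heating from 18 C with the outdoor air at 0 C:
print(next_indoor_temp(18.0, u=1, outdoor=0.0))
```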
2.4
OCCUPANT COMFORT COST MODEL
In the Neural Network House, the occupant expresses discomfort by adjusting a
set point temperature on a control panel. However, for simplicity, we assume in
this work the setpoint is a constant, λ. When the home is occupied, the misery
cost is a monotonic function of the deviation of the actual indoor temperature from
the set point. When the home is empty, the misery cost is zero regardless of the
temperature.
There is a rich literature directed at measuring thermal comfort in a given environment (i.e., dry-bulb temperature, relative humidity, air velocity, clothing insulation,
etc.) for the average building occupant (e.g., Fanger, 1972; Gagge, Stolwijk, & Nishi,
1971). Although the measurements indicate the fraction of the population which
is uncomfortable in a particular environment, one might also interpret them as a
measure of an individual's level of discomfort. As a function of dry-bulb temperature,
this curve is roughly parabolic. We approximate it with a measure of misery in a
δ-minute period as follows:
m(o, h) = o · a · (δ / (24 × 60)) · max(0, |λ - h| - ε)² / 25.
The first term, o, is a binary variable indicating the home occupancy state. The
second term is a conversion factor from arbitrary "misery" units to dollars. The
third term scales the misery cost from a full day to the basic update interval.
The fourth term produces the parabolic relative misery function, scaled so that a
temperature difference of 5°C produces one unit of misery, with a deadband region
from λ - ε to λ + ε.
We have chosen the conversion factor a using an economic perspective. Consider
the lost productivity that results from trying to work in a home that is 5°C colder
than desired for a 24 hour period. Denote this loss p, measured in hours. The cost
in dollars of this loss is then a = γp, where γ is the individual's hourly salary. With
this approach, a can be set in a natural, intuitive manner.
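A sketch of this misery cost in code, folding in the conversion factor a = γp (the salary value is the one used later in the simulations; the default productivity loss of two hours is purely illustrative):

```python
DELTA = 10.0      # minutes per control interval
LAMBDA = 22.5     # setpoint temperature, degrees C
EPSILON = 1.0     # half-width of the comfort deadband, degrees C

def misery_cost(occupied, indoor_temp, salary_per_hour=28.0, productivity_loss_hours=2.0):
    """Dollar cost of occupant discomfort over one delta-minute interval."""
    a = salary_per_hour * productivity_loss_hours          # a = gamma * p
    deviation = max(0.0, abs(LAMBDA - indoor_temp) - EPSILON)
    return occupied * a * (DELTA / (24.0 * 60.0)) * deviation ** 2 / 25.0

print(misery_cost(1, 17.5))   # occupied house, five degrees below the setpoint
```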
3 SIMULATION METHODOLOGY
In all experiments we report below, δ = 10 minutes, κ = 12 steps (120 minute planning horizon), λ = 22.5°C, ε = 1, and γ = 28 dollars per hour. The productivity
loss, p, was varied from 1 to 3 hours.
We report here results from the Neurothermostat operating in a simulated environment, rather than in the actual Neural Network House. The simulated environment incorporates the house/furnace thermal model described earlier and occupants
whose preferences follow the comfort cost model. The outdoor temperature is assumed to remain a constant 0? C. Thus, the Neurothermostat has an accurate model
of its environment, except for the occupancy patterns, which it must predict based
on training data. This allows us to evaluate the performance of the Neurothermostat and the occupancy model as occupancy patterns are varied, uncontaminated
by the effect of inaccuracy in the other internal models.
We have evaluated the Neurothermostat with both real and artificial occupancy
data. The real data was collected from the Neural Network House with a single
resident over an eight month period, using a simple algorithm based on motion
detector output and the opening and closing of outside doors. The artificial data
was generated by a simulation of a single occupant. The occupant would go to work
each day, later on the weekends, would sometimes come home for lunch, sometimes
go out on weekend nights, and sometimes go out of town for several days. To
examine performance of the Neurothermostat as a function of the variability in
the occupant's schedule, the simulation model included a parameter, the variability
index. An index of 0 means that the schedule is entirely deterministic; an index of
1 means that the schedule was very noisy, but still contained statistical regularities.
The index determined factors such as the likelihood and duration of out-of-town
trips and the variability in departure and return times.
3.1 ALTERNATIVE HEATING POLICIES
In addition to the Neurothermostat, we examined three nonadaptive control policies.
These policies produce a setpoint at each time step, and the furnace is switched on
if the temperature drops below the setpoint and off if the temperature rises above
the setpoint. (We need not be concerned about damage to the furnace by cycling
because control decisions are made ten minutes apart.) The constant-temperature
policy produces a fixed setpoint of 22.5°C. The occupancy-triggered policy produces
a set point of 18°C when the house is empty, 22.5°C when the house is occupied.
The setback-thermostat policy lowers the setpoint from 22.5°C to 18°C half an
hour before the mean work departure time for that day of the week, and raises it
back to 22.5°C half an hour before the mean work return time for that day of the
week. The setback temperature for the occupancy-triggered and setback-thermostat
policies was determined empirically to minimize the total cost.
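A sketch of the three baselines as setpoint rules (the function signatures and the handling of the half-hour offsets are illustrative assumptions):

```python
def constant_setpoint(_occupied, _minute, _weekday):
    return 22.5

def occupancy_triggered_setpoint(occupied, _minute, _weekday):
    return 22.5 if occupied else 18.0

def setback_setpoint(_occupied, minute, weekday, mean_departure, mean_return):
    # mean_departure / mean_return hold per-weekday averages, in minutes past midnight
    if mean_departure[weekday] - 30 <= minute < mean_return[weekday] - 30:
        return 18.0
    return 22.5

def thermostat_decision(indoor_temp, setpoint):
    """Furnace switches on below the setpoint and off above it."""
    return 1 if indoor_temp < setpoint else 0
```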
4 RESULTS
4.1 OCCUPANCY PREDICTOR
Performance of three different predictors was evaluated using artificial data across
a range of values for the variability index. For each condition, we generated eight
training/test sets of artificial data, each training and test set consisting of 150 days
of data. Table 1 shows the normalized mean squared error (MSE) on the test set,
averaged over the eight replications. The normalization involved dividing the MSE
for each replication by the error obtained by predicting the future occupancy state
Table 1: Normalized MSE on Test Set for Occupancy Prediction - Artificial Data

                                        variability index
                                0      .25     .50     .75      1
lookup table                   .49     .94     .92     .81     .94
neural net                     .02     .83     .86     .63     .91
lookup table + neural net      .02     .60     .78     .77     .74
[Figure 2: two panels, "Productivity Loss = 1.0 hr." (left) and "Productivity Loss = 3.0 hr." (right), plotting mean cost per day against the variability index.]
Figure 2: Mean cost per day incurred by four control policies on (artificial) test data as
a function of the data's variability index for p = 1 (comfort lightly weighted, left panel)
and p = 3 (comfort heavily weighted, right panel).
is the same as the present state. The main result here is that the combination of
neural network and look up table perform better than either component in isolation
(ANOVA: F(l, 7) = 1121,p < .001 for combination vs. table; F(l, 7) = 64,p < .001
for combination vs. network), indicating that the two components are capturing
different structure in the data.
4.2 CONTROLLER WITH ARTIFICIAL OCCUPANCY DATA
Having trained eight occupancy predictors with different (artificial data) training
sets, we computed misery and energy costs for the Neurothermostat on the corresponding test sets. Figure 2 shows the mean total cost per day as a function
of variability index, control policy, and relative comfort cost . The robust result is
that the Neurothermostat outperforms the three nonadaptive control policies for all
levels of the variability index and for both a wide range of values of p.
Other patterns in the data are noteworthy. Costs for the Neurothermostat tend to
rise with the variability index, as one would expect because the occupant's schedule becomes less predictable. The constant-temperature policy is worst if occupant
comfort is weighted lightly, and begins to approach the Neurothermostat in performance as comfort costs are increased. If comfort costs overwhelm energy costs,
then the constant-temperature policy and the Neurothermostat converge.
4.3 CONTROLLER WITH REAL OCCUPANCY DATA
Eight months of real occupancy data collected in the Neural Network House beginning in September 1994 was also used to generate occupancy models and test
controllers. Three training/test splits were formed by training on five consecutive months and testing on the next month. Table 2 shows the mean daily cost
for the four controllers. The Neurothermostat significantly outperforms the three
nonadaptive controllers, as it did with the artificial data.
5 DISCUSSION
The simulation studies reported here strongly suggest that adaptive control of residential heating and cooling systems is worthy of further investigation.
Table 2: Mean Daily Cost Based on Real Occupancy Data

                          productivity loss
                          p = 1      p = 3
Neurothermostat           $6.77      $7.05
constant temperature      $7.85      $7.85
occupancy triggered       $7.49      $8.66
setback thermostat        $8.12      $9.74
One is tempted to trumpet the conclusion that adaptive control lowers heating costs, but
before doing so, one must be clear that the cost being lowered is a combination of
comfort and energy costs. If one is merely interested .in lowering energy costs, then
simply shut off the furnace. A central contribution of this work is thus the framing
of the task of air temperature regulation as an optimal control problem in which
both comfort and energy costs are considered as part of the control objective.
A common reaction to this research project is, "My life is far too irregular to be
predicted. I don 't return home from work at the same time every day." An important finding of this work is that even a highly nondeterministic schedule contains
sufficient statistical regularity to be exploited by a predictive controller. We found
this for both artificial data with a high variability index and real occupancy data.
A final contribution of our work is to show that for periodic data such as occupancy
patterns that follow a weekly schedule, the combination of a look up table to encode
the periodic structure and a neural network to encode the residual structure can
outperform either method in isolation.
Acknowledgements
Support for this research has come from Lifestyle Technologies, NSF award IRI9058450, and a CRCW grant-in-aid from the University of Colorado. This project
owes its existence to the dedication of many students, particularly Marc Anderson,
Josh Anderson, Paul Kooros, and Charles Myers. Our thanks to Reid Hastie and
Gary McClelland for their suggestions on assessing occupant misery.
References
Clarke, D. W., Mohtadi, C., & Tuffs, P. S. (1987). Generalized predictive control-Part I.
The basic algorithm. Automatica, 29, 137- 148.
Curtiss, P., Kreider, J. F ., & Brandemuehl, M. J . (1993). Local and global control of
commercial building HVAC systems using artificial neural networks. Proceedings of
the American Control Conference, 9, 3029-3044.
Fanger, P. O. (1972) . Thermal comfort. New York: McGraw-Hill.
Ferrano, F. J., & Wong, K. V. (1990). Prediction of thermal storage loads using a neural
network. ASHRAE Transactions, 96, 723-726.
Gagge, A. P., Stolwijk, J . A. J., & Nishi, Y. (1971). An effective temperature scale based on
a simple model of human physiological regulatory response. ASHRAE Transactions,
77,247- 262.
Henze, G. P., & Dodier, R. H. (1996). Development of a predictive optimal controller for
thermal energy storage systems. Submitted for publication.
Khalid, M., & Omatu, S. (1995). Temperature regulation with neural networks and alternative control schemes. IEEE Transactions on Neural Networks, 6, 572-582.
Miller, R. C., & Seem, J. E. (1991). Comparison of artificial neural networks with traditional methods of predicting return time from night or weekend setback. ASHRAE
Transactions, 97, 500- 508.
Mozer, M. C., Dodier, R. H., Anderson, M., Vidmar, L., Cruickshank III, R. F., & Miller,
D. (1995) . The neural network house: An overview. In L. Niklasson & M. Boden
(Eds.), Current trends in connectionism (pp. 371-380) . Hillsdale, NJ: Erlbaum.
328 | 13 |
TEMPORAL PATTERNS OF ACTIVITY IN
NEURAL NETWORKS
Paolo Gaudiano
Dept. of Aerospace Engineering Sciences,
University of Colorado, Boulder CO 80309, USA
January 5, 1988
Abstract
Patterns of activity over real neural structures are known to exhibit time-dependent behavior. It would seem that the brain may be capable of utilizing
temporal behavior of activity in neural networks as a way of performing functions
which cannot otherwise be easily implemented. These might include the origination
of sequential behavior and the recognition of time-dependent stimuli. A model is
presented here which uses neuronal populations with recurrent feedback connections in an attempt to observe and describe the resulting time-dependent behavior.
Shortcomings and problems inherent to this model are discussed. Current models
by other researchers are reviewed and their similarities and differences discussed.
METHODS / PRELIMINARY RESULTS
In previous papers,[2,3] computer models were presented that simulate a net consisting of two spatially organized populations of realistic neurons. The populations are
richly interconnected and are shown to exhibit internally sustained activity. It was
shown that if the neurons have response times significantly shorter than the typical unit
time characteristic of the input patterns (usually 1 msec), the populations will exhibit
time-dependent behavior. This will typically result in the net falling into a limit cycle.
By a limit cycle, it is meant that the population falls into activity patterns during which
all of the active cells fire in a cyclic, periodic fashion. Although the period of firing of
the individual cells may be different, after a fixed time the overall population activity
will repeat in a cyclic, periodic fashion. For populations organized in 7x7 grids, the
limit cycle will usually start 20-200 msec after the input is turned off, and its period
will be in the order of 20-100 msec.
The point of interest is that if the net is allowed to undergo synaptic modifications by
means of a modified hebbian learning rule while being presented with a specific spatial
pattern (i.e., cells at specific spatial locations within the net are externally stimulated),
subsequent presentations of the same pattern with different temporal characteristics
will cause the population to recall patterns which are spatially identical (the same cells
will be active) but which have different temporal qualities. In other words, the net can
fall into a different limit cycle. These limit cycles seem to behave as attractors in that
similar input patterns will result in the same limit cycle, and hence each distinct limit
cycle appears to have a basin of attraction. Hence a net which can only learn a small
number of spatially distinct patterns can recall the patterns in a number of temporal
modes. If it were possible to quantitatively discriminate between such temporal modes,
it would seem reasonable to speculate that different limit cycles could correspond to
different memory traces. This would significantly increase estimates on the capacity of
memory storage in the net.
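Because a limit cycle in a deterministic discrete-time simulation is simply a repeating sequence of population states, its onset and period can be detected by hashing visited states. The toy sketch below (in Python; the 20-unit threshold network and its random weights are hypothetical stand-ins, not the realistic neuron populations simulated in [2,3]) illustrates the idea:

    import numpy as np

    def find_limit_cycle(update, state, max_steps=5000):
        """Iterate a deterministic update rule and report when a population state repeats.
        Returns (entry_step, period), or None if no repeat occurs within max_steps."""
        seen = {}
        for step in range(max_steps):
            key = tuple(np.round(state, 6))   # quantize so floating-point states can be compared
            if key in seen:
                return seen[key], step - seen[key]
            seen[key] = step
            state = update(state)
        return None

    # Hypothetical toy population: 20 binary threshold units with random recurrent weights.
    rng = np.random.default_rng(1)
    W = rng.normal(0.0, 1.2, (20, 20))
    update = lambda v: (W @ v > 0.0).astype(float)
    print(find_limit_cycle(update, rng.integers(0, 2, 20).astype(float)))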
It has also been shown that a net being presented with a given pattern will fall and
stay into a limit cycle until another pattern is presented which will cause the system
to fall into a different basin of attraction. If no other patterns are presented, the net
will remain in the same limit cycle indefinitely. Furthermore, the net will fall into the
same limit cycle independently of the duration of the input stimulus, so long as the
input stimulus is presented for a long enough time to raise the population activity level
beyond a minimum necessary to achieve self-sustained activity. Hence, if we suppose
that the net "recognizes" the input when it falls into the corresponding limit cycle, it
follows that the net will recognize a string of input patterns regardless of the duration of
each input pattern, so long as each input is presented long enough for the net to fall into
the appropriate limit cycle. In particular, our system is capable of falling into a limit
cycle within some tens of milliseconds. This can be fast enough to encode, for example, a
string of phonemes as would typically be found in continuous speech. It may be possible,
for instance, to create a model similar to Rumelhart and McClelland's 1981 model on
word recognition by appropriately connecting multiple layers of these networks. If the
response time of the cells were increased in higher layers, it may be possible to have
the lowest level respond to stimuli quickly enough to distinguish phonemes (or some
sub-phonemic basic linguistic unit), then have populations from this first level feed into
a slower, word-recognizing population layer, and so On. Such a model may be able to
perform word recognition from an input consisting of continuous phoneme strings even
when the phonemes may vary in duration of presentation.
SHORTCOMINGS
Unfortunately, it was noticed a short time ago that a consistent mistake had been
made in the process of obtaining the above-mentioned results. Namely, in the process
of decreasing the response time of the cells I accidentally reached a response time below
the time step used in the numerical approximation that updates the state of each cell
during a simulation. The equations that describe the state of each cell depend on the
state of the cell at the previous time step as well as on the input at the present time.
These equations are of first order in time, and an explicit discrete approximation is
used in the model. Unfortunately it is a known fact that care must be taken in selecting
the size of the time step in order to obtain reliable results. It is infact the case that
by reducing the time step to a level below the response time of the cells the dynamics
of the system varied significantly. It is questionable whether it would be possible to
adjust some of the population parameters within reson to obtain the same results with
a smaller step size, but the following points should be taken into account: 1) other
researchers have created similar models that show such cyclic behavior (see for example
Silverman, Shaw and Pearson[7]). 2) biological data exists which would indicate the
existence of cyclic or periodic behavior in real neural systems (see for instance Baird[1]).
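The discretization problem can be made concrete with a one-line explicit Euler update for a first-order cell equation; the sketch below (Python, with hypothetical values for the time constant tau and the step dt, not the parameters of the original simulations) shows how the numerical solution changes character once dt exceeds the cell's response time:

    import numpy as np

    def euler_decay(tau, dt, t_end, v0=1.0):
        """Explicit (forward) Euler integration of dv/dt = -v / tau."""
        steps = int(t_end / dt)
        v = np.empty(steps + 1)
        v[0] = v0
        for i in range(steps):
            v[i + 1] = v[i] + dt * (-v[i] / tau)   # update factor is (1 - dt/tau)
        return v

    # For dt << tau the trace decays smoothly toward v0 * exp(-t/tau); for dt > tau the
    # factor (1 - dt/tau) is negative and the trace oscillates; for dt > 2*tau it diverges.
    for dt in (0.1, 1.5, 2.5):   # hypothetical step sizes, same units as tau
        print(dt, euler_decay(tau=1.0, dt=dt, t_end=10.0)[-1])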
As I just recently completed a series of studies at this university, I will not be able
to perform a detailed examination of the system described here, but instead I will more
than likely create new models on different research equipment which will be geared more
specifically towards the study of temporal behavior in neural networks.
OTHER MODELS
It should be noted that in the past few years some researchers have begun investigating the possibility of neural networks that can exhibit time-dependent behavior,
and I would like to report on some of the available results as they relate to the topic of
temporal patterns. Baird[1] reports findings from the rabbit's olfactory bulb which indicate the existence of phase-locked oscillatory states corresponding to olfactory stimuli
presented to the subjects. He outlines an elegant model which attributes pattern recognition abilities to competing instabilities in the dynamic activity of neural structures.
He further speculates that inhomogeneous connectivity in the bulb can be selectively
modified to achieve input-sensitive oscillatory states.
Silverman, Shaw and Pearson[7] have developed a model based on a biologically-inspired
idealized neural structure, which they call the trion. This unit represents a localized
group of neurons with a discrete firing period. It was found that small ensembles of trions with symmetric connections can exhibit quasi-stable periodic firing patterns which
do not require pacemakers or external driving. Their results are inspired by existing
physiological data and are consistent with other works.
Kleinfeld[6], and Sompolinsky and Kanter[8] independently developed neural network
models that can generate and recognize sequential or cyclic patterns. Both models rely
on what could be summarized as the recirculation of information through time-delayed
channels.
Very similar results are presented by Jordan[4] who extends a typical connectionist or
PDP model to include state and plan units with recurrent connections and feedback
from output units through hidden units. He employs supervised learning with fuzzy
constraints to induce learning of sequences in the system.
From a slightly different approach, Tank and Hopfield[9] make use of patterned sets
of delays which effectively compress information in time. They develop a model which
recognizes patterns by falling into local minima of a state-space energy function. They
suggest that a systematic selection of delay functions can be done which will allow for
time distortions that would be likely to occur in the input.
Finally, a somewhat different approach is taken by Homma, Atlas and Marks[5], who
generalize a network for spatial pattern recognition to one that performs spatio-temporal
patterns by extending classical principles from spatial networks to dynamic networks.
In particular, they replace multiplication with convolution, weights with transfer functions, and thresholding with nonlinear transforms. Hebbian and Delta learning rules
are similarly generalized. The resulting models are able to perform temporal pattern
recognition.
The above is only a partial list of some of the relevant work in this field, and there
are probably various other results I am not aware of.
DISCUSSION
All of the above results indicate the importance of temporal patterns in neural networks. The need is apparent for further formal models which can successfully quantify
temporal behavior in neural networks. Several questions must be answered to further
clarify the role and meaning of temporal patterns in neural nets. For instance, there
is an apparent difference between a model that performs sequential tasks and one that
performs recognition of dynamic patterns. It seems that appropriate selection of delay
mechanisms will be necessary to account for many types of temporal pattern recognition. The question of scaling must also be explored: mechanisms are known to exist in
the brain which can cause delays ranging from the millisecond-range (e.g. variations
in synaptic cleft size) to the tenth of a second range (e.g. axonal transmission times).
On the other hand, the brain is capable of recognizing sequences of stimuli that can be
much longer than the typical neural event, such as for instance being able to remember
a song in its entirety. These and other questions could lead to interesting new aspects
of brain function which are presently unclear.
References
[1] Baird, B., "Nonlinear Dynamics of Pattern Formation and Pattern Recognition in the Rabbit Olfactory Bulb". Physica 22D, 150-175. 1986.
[2] Gaudiano, P., "Computer Models of Neural Networks". Unpublished Master's Thesis. University of Colorado. 1987.
[3] Gaudiano, P., MacGregor, R.J., "Dynamic Activity and Memory Traces in Computer-Simulated Recurrently-Connected Neural Networks". Proceedings of the First International Conference on Neural Networks. 2:177-185. 1987.
[4] Jordan, M.I., "Attractor Dynamics and Parallelism in a Connectionist Sequential Machine". Proceedings of the Eighth Annual Conference of the Cognitive Sciences Society. 1986.
[5] Homma, T., Atlas, L.E., Marks, R.J. II, "An Artificial Neural Network for Spatio-Temporal Bipolar Patterns: Application to Phoneme Classification". To appear in proceedings of Neural Information Processing Systems Conference (AIP). 1987.
[6] Kleinfeld, D., "Sequential State Generation by Model Neural Networks". Proc. Natl. Acad. Sci. USA. 83:9469-9473. 1986.
[7] Silverman, D.J., Shaw, G.L., Pearson, J.C., "Associative Recall Properties of the Trion Model of Cortical Organization". Biol. Cybern. 53:259-271. 1986.
[8] Sompolinsky, H., Kanter, I., "Temporal Association in Asymmetric Neural Networks". Phys. Rev. Lett. 57:2861-2864. 1986.
[9] Tank, D.W., Hopfield, J.J., "Neural Computation by Concentrating Information in Time". Proc. Natl. Acad. Sci. USA. 84:1896-1900. 1987.
329 | 130 |
STATISTICAL PREDICTION WITH KANERVA'S
SPARSE DISTRIBUTED MEMORY
David Rogers
Research Institute for Advanced Computer Science
MS 230-5, NASA Ames Research Center
Moffett Field, CA 94035
ABSTRACT
A new viewpoint of the processing performed by Kanerva's sparse
distributed memory (SDM) is presented. In conditions of near- or
over- capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are
presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for
improving the predictiveness of the system based on Holland's
work with 'Genetic Algorithms', and a method for improving the
capacity of SDM even when used as an associative memory.
OVERVIEW
This work is the result of studies involving two seemingly separate topics that
proved to share a common framework. The first topic, statistical prediction, is the
task of associating extremely large perceptual state vectors with future events. The
second topic, over-capacity in Kanerva's sparse distributed memory (SDM), is a
study of the computation done in an SDM when presented with many more associations than its stated capacity.
I propose that in conditions of over-capacity, where the associative-memory behavior of an SDM breaks down, the processing performed by the SDM can be used for
statistical prediction. A mathematical study of the prediction problem suggests a
variant of the standard SDM architecture. This variant not only behaves as a statistical predictor when the SDM is filled beyond capacity but is shown to double the
capacity of an SDM when used as an associative memory.
THE PREDICTION PROBLEM
The earliest living creatures had an ability, albeit limited, to perceive the world
through crude senses. This ability allowed them to react to changing conditions in
the environment; for example, to move towards (or away from) light sources. As
nervous systems developed, learning was possible; if food appeared sim ultaneously
with some other perception, perhaps some odor, a creature could learn to associate
that smell with food.
As the creatures evolved further, a more rewarding type of learning was possible.
Some perceptions, such as the perception of pain or the discovery of food, are very
important to an animal. However, by the time the perception occurs, damage may
already be done, or an opportunity for gain missed. If a creature could learn to associate current perceptions with future ones, it would have a much better chance to
do something about it before damage occurs. This is the prediction problem.
The difficulty of the prediction problem is in the extremely large number of possible sensory inputs. For example, a simple animal might have the equivalent of 1000
bits of sensory data at a given time; in this case, the number of possible inputs is
greater than the number of atoms in the known universe! In essence, it is an enormous search problem: a living creature must find the subregions of the perceptual space which correlate with the features of interest. Most of the gigantic perceptual
space will be uncorrelated, and hence uninteresting.
THE OVERCAPACITY PROBLEM
An associative memory is a memory that can recall data when addressed 'close-to' an
address where data were previously stored. A number of designs for associative
memories have been proposed, such as Hopfield networks (Hopfield, 1986) or the
nearest-neighbor associative memory of Baum, Moody, and Wilczek (1987). Memory-related standards such as capacity are usually selected to judge the relative performance of different models. Performance is severely degraded when these memories
are filled beyond capacity.
Kanerva's sparse distributed memory is an associative memory model developed from
the mathematics of high-dimensional spaces (Kanerva, 1988) and is related to the
work of David Marr (1969) and James Albus (1971) on the cerebellum of the brain.
(For a detailed comparison of SDM to random-access memory, to the cerebellum,
and to neural networks, see Rogers, 1988b.) Like other associative memory models, it exhibits non-memory behavior when near- or over-capacity.
Studies of capacity are often over-simplified by the common assumption of uncorrelated random addresses and data. The capacity of some of these memories, including
SDM, is degraded if the memory is presented with correlated addresses and data.
Such correlations are likely if the addresses and data are from a real-world source.
Thus, understanding the over-capacity behavior of an SDM may lead to better procedures for storing correlated data in an associative memory.
SPARSE DISTRIBUTED MEMORY
Sparse distributed memory can be best illustrated as a variant of random-access memory (RAM). The structure of a twelve-location SDM with ten-bit addresses and
ten-bit data is shown in figure 1. (Kanerva, 1988)
[Figure 1 diagram: a reference address is compared against each location address; locations whose Hamming distance is within the radius are selected; the input data increments (for 1-bits) or decrements (for 0-bits) the selected locations' data counters; column-wise sums over the selected counters are thresholded at 0 to give the output data.]
Figure 1. Structure of a Sparse Distributed Memory
A memory location is a row in this figure. The location addresses are set to random
addresses. The data counters are initialized to zero. All operations begin with
addressing the memory; this entails finding the Hamming distance between the reference address and each of the location addresses. If this distance is less than or equal
to the Hamming radius, the select-vector entry is set to 1, and that location is termed selected. The ensemble of such selected locations is called the selected set.
Selection is noted in the figure as non-gray rows. A radius is chosen so that only a
small percentage of the memory locations are selected for a given reference address.
(Later, we will refer to the fact that a memory location defines an activation set of
addresses in the address space; the activation set corresponding to a location is the
set of reference addresses which activate that memory location. Note the reciprocity
between the selected set corresponding to a given reference address, and the activation set corresponding to a given location.)
When writing to the memory, all selected counters beneath elements of the input
data equal to 1 are incremented, and all selected counters beneath elements of the
input data equal to 0 are decremented. This completes a write operation. When
reading from the memory, the selected data counters are summed columnwise into
the register sums. If the value of a sum is greater than or equal to zero, we set the
corresponding bit in the output data to 1; otherwise, we set the bit in the output
data to 0. (When reading, the contents of the input data are ignored.)
This example makes clear that a datum is distributed over the data counters of the
selected locations when writing, and that the datum is reconstructed during reading
by averaging the sums of these counters. However, depending on what additional
data were written into some of the selected locations, and depending on how these
data correlate with the original data, the reconstruction may contain noise.
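The addressing, write, and read operations just described fit in a few lines of array code. The sketch below is an illustrative Python rendering (array sizes and the selection radius are hypothetical; it is not the implementation used for the experiments reported here):

    import numpy as np

    rng = np.random.default_rng(0)
    n_locations, n_bits, radius = 1000, 256, 111        # hypothetical memory dimensions

    location_addresses = rng.integers(0, 2, (n_locations, n_bits))   # fixed random hard locations
    counters = np.zeros((n_locations, n_bits), dtype=int)            # data counters start at zero

    def select(reference):
        """A location is selected when its Hamming distance to the reference is <= radius."""
        return np.sum(location_addresses != reference, axis=1) <= radius

    def write(reference, data):
        sel = select(reference)
        counters[sel] += np.where(data == 1, 1, -1)      # increment under 1-bits, decrement under 0-bits

    def read(reference):
        sums = counters[select(reference)].sum(axis=0)   # column-wise sums over selected locations
        return (sums >= 0).astype(int)                   # threshold at zero

    address = rng.integers(0, 2, n_bits)
    datum = rng.integers(0, 2, n_bits)
    write(address, datum)
    print("bit errors on recall:", int(np.sum(read(address) != datum)))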
THE BEHAVIOR OF AN SDM WHEN AT OVER-CAPACITY
Consider an SDM with a 1,000-bit address and a 1-bit datum. In this memory, we are storing associations that are samples of some binary function f on the space S of
all possible addresses. After storing only a few associations, each data counter will
have no explicit meaning, since the data values stored in the memory are distributed
over many locations. However, once a sufficiently large number of associations are
stored in the memory, the data counter gains meaning: when appropriately normalized to the interval [0, 1], it contains a value which is the conditional probability
that the data bit is 1, given that its location was selected. This is shown in figure 2.
- S is the space of all possible addresses
- L is the set of addresses in S which activate a given memory location
- f is a binary function on S that we want to estimate using the memory
- The data counter for L contains the average value of f over L, which equals P(f(X) = 1 | X ∈ L)

Figure 2. The Normalized Content of a Data Counter is the Conditional Probability of the Value of f Being Equal to 1 Given the Reference Addresses are Restricted to the Sphere L.
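A data counter can be turned into this conditional probability by also tracking how many writes selected the location; the small sketch below (illustrative Python, not from the original work) makes the normalization explicit:

    import numpy as np

    class ProbabilityCounter:
        """One location's data counter plus the number of writes that selected it,
        so the counter can be normalized into an estimate of P(f(X) = 1 | X in L)."""
        def __init__(self):
            self.counter = 0       # +1 for each stored 1-bit, -1 for each stored 0-bit
            self.n_selected = 0

        def write(self, bit):
            self.counter += 1 if bit == 1 else -1
            self.n_selected += 1

        def conditional_probability(self):
            if self.n_selected == 0:
                return 0.5                                  # no evidence yet
            return 0.5 * (self.counter / self.n_selected + 1.0)

    loc = ProbabilityCounter()
    for bit in np.random.default_rng(0).integers(0, 2, 200):
        loc.write(int(bit))
    print(round(loc.conditional_probability(), 3))          # near 0.5 for unbiased random bits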
In the prediction problem, we want to find activation sets of the address space that
correlate with some desired feature bit. When filled far beyond capacity, the individual memory locations of an SDM are collecting statistics about individual subregions of the address space. To estimate the value of f at a given address, it should
be possible to combine the conditional probabilities in the data counters of the
selected memory locations to make a "best guess" .
In the prediction problem, S is the space of possible sensory inputs. Since most
regions of S have no relationship with the datum we wish to predict, most of the
memory locations will be in non-informative regions of the address space. Associative memories are not useful for the prediction problem because the key part of the
problem is the search for subregions of the address space that are informative. Due
to capacity limitations and the extreme size of the address space, memories fill to
capacity and fail before enough samples can be written to identify the useful subregions.
PREDICTING THE VALUE OF f
Each data counter in an SDM can be viewed as an independent estimate of the conditional probability of f being equal to 1 over the activation set defined by the counter's memory location. If a point of S is contained in multiple activation sets,
each with its own probability estimate, how do we combine these estimates? More
directly, when does knowledge of membership in some activation set help us estimate f better?
Assume that we know P( f(X) = 1), which is the average value of f over the entire
space S. If a data counter in memory location L has the same conditional probability
as P( f(X) = 1), then knowing an address is contained in the activation set defining
L gives no additional information. (This is what makes the prediction problem
hard: most activation sets in S will be uncorrelated with the desired datum.)
When is a data counter useful? If a data counter contains a conditional probability
far away from the probability for the entire space, then it is highly informative.
The more committed a data counter is one way or the other, the more weight it
should be given. Ambivalent data counters should be given less weight.
Figure 3 illustrates this point. Two activation sets of S are shown; the numbers 0
and 1 are the values of f at points in these sets. (Assume that all the points in the
activation sets are in these diagrams.) Membership in the left activation set is noninformative, while membership in the right activation set is highly informative.
Most activation sets are neither as bad as the left example nor as good as the right
example; instead, they are intermediate to these two cases. We can calculate the relative signal-to-noise
ratio of the sets.
- In the left example, the mean of the activation set is the same as the mean of the entire space: P(f(X)=1 | X ∈ L) = P(f(X)=1). Membership in this activation set gives no information; the opinion of such a set should be given zero weight.
- In the right example, the mean of the activation set is 1: P(f(X)=1 | X ∈ L) = 1. Membership in this activation set completely determines the value of a point; the opinion of such a set should be given 'infinite' weight.

Figure 3. The Predictive Value of an Activation Set Depends on How Much New Information it Gives About the Function f.
To obtain a measure of the amount of signal in an activation set L, imagine segregating the points of L into two sectors, which I call the informative sector and the noninformative sector. (Note that this partition will not be unique.) Include in the
non-informative sector the largest number of points possible such that the percentage of 1's and 0's equals the corresponding percentages in the overall population of the entire space. The remaining points, which constitute the informative sector, will contain all 0's or 1's. The relative size r of the informative sector compared to L constitutes a measure of the signal. The relative size of the non-informative sector to L is (1 - r), and is a measure of the noise. Such a conceptual partition is
shown in figure 4.
Once the signal and the noise of an activation set is estimated, there are known methods for calculating the weight that should be given to this set when combining with
other sets (Rogers, 1988a). That weight is (r / (1 - r)2). Thus, given the conditional probability and the global probability, we can calculate the weight which should
be given to that data counter when combined with other counters.
Informative sector L_inf (relative size r): P(f(X)=1 | X ∈ L_inf) = VALUE, where VALUE is 0 or 1.
Non-informative sector L_non (relative size 1 - r): P(f(X)=1 | X ∈ L_non) = P(f(X)=1).

    r = [P(f(X)=1 | X ∈ L) - P(f(X)=1)] / [VALUE - P(f(X)=1)]

Figure 4. An Activation Set Defined by a Memory Location can be Partitioned into Informative and Non-informative Sectors.
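One way to read this weighting scheme in code is sketched below (illustrative Python, not the implementation used for the experiments; the combination by a weighted vote, the interpretation of "(r / (1 - r)2)" as r divided by the square of (1 - r), and the example numbers are assumptions):

    import numpy as np

    def weighted_counter_vote(cond_probs, global_prob, eps=1e-6):
        """Combine the selected locations' conditional probabilities P(f=1 | location selected)
        into a single prediction, weighting each by r / (1 - r)**2."""
        cond_probs = np.asarray(cond_probs, dtype=float)
        values = (cond_probs > global_prob).astype(float)        # the value each counter leans toward
        # Informative fraction r of each activation set, as in Figure 4.
        denom = np.where(values == 1.0, 1.0 - global_prob, -global_prob)
        r = np.clip(np.abs((cond_probs - global_prob) / denom), 0.0, 1.0 - eps)
        weights = r / (1.0 - r) ** 2
        if weights.sum() == 0.0:
            return global_prob                                   # nothing informative: fall back to the prior
        return float(np.average(values, weights=weights))

    # Hypothetical example: three selected counters, global probability 0.5.
    print(weighted_counter_vote([0.9, 0.55, 0.2], global_prob=0.5))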
EXPERIMENTAL
The given weighting scheme was used in the standard SDM to test its effect on capacity. In the case of random addresses and data, the weights doubled the capacity of
the SDM. Even greater savings are likely with correlated data. These results are
shown in figure 5.
[Figure 5 plots: bitwise errors vs. number of writes, for the standard SDM (left) and the statistically-weighted SDM (right).]
Figure 5. Number of Bitwise Errors vs. Number of Writes in a 256-bit Address, 256-bit Data, 1000-Location Sparse Distributed Memory. The Left is the Standard SDM; the Right is the Statistically-Weighted SDM. Graphs Shown are Averages of 16 Runs.
In deriving the weights, it was assumed that the individual data counters would
become meaningful only when a sufficiently large number of associations were
stored in the memory. This experiment suggests that even a small number of associations is sufficient to benefit from statistically-based weighting. These results are
important, for they suggest that this scheme can be used in an SDM in the full continuum, from low-capacity memory-based uses to over-capacity statistical-prediction uses.
CONCLUSIONS
Studies of SDM under conditions of over-capacity, in combination with the new
problem of statistical prediction, suggests a new range of uses for SDM. By weighting the locations differently depending on their contents, we also have discovered a
technique for improving the capacity of the SDM even when used as a memory.
This weighting scheme opens new possibilities for learning; for example, these
weights can be used to estimate the fitness of the locations for learning algorithms
such as Holland's genetic algorithms. Since the statistical prediction problem is primarily a problem of search over extremely large address spaces, such techniques
would allow redistribution of the memory locations to regions of the address space
which are maximally useful, while abandoning the regions which are non-informative. The combination of learning with memory is a potentially rich area for future
study.
Finally, many studies of associative memories have explicitly assumed random data
in their studies; most real-world applications have non-random data. This theory
explicitly assumes, and makes use of, correlations between the associations given to
the memory. Assumptions such as randomness, which are useful in mathematical
studies, must be abandoned if we are to apply these tools to real-world problems.
Acknowledgments
This work was supported in part by Cooperative Agreements NCC 2-408 and NCC
2-387 from the National Aeronautics and Space Administration (NASA) to the Universities Space Research Association (USRA). Funding related to the Connection
Machine was jointly provided by NASA and the Defense Advanced Research Projects
Agency (DARPA). All agencies involved were very helpful in promoting this
work, for which I am grateful.
The entire RIACS staff and the SDM group has been supportive of my work. Louis
Jaeckel gave important assistance which guided the early development of these ideas.
Bruno Olshausen was a vital sounding-board for this work. Finally, I'll get mushy
and thank those who supported my spirits during this project, especially Pentti Kanerva, Rick Claeys, John Bogan, and last but of course not least, my parents, Philip
and Cecilia. Love you all.
References
Albus, J. S., "A theory of cerebellar functions," Math. Bio., 10, pp. 25-61 (1971).
Baum, E., Moody, J., and Wilczek, F., "Internal representations for associative memory," Biological Cybernetics (1987).
Holland, J. H., Adaptation in natural and artificial systems, Ann Arbor: University of Michigan Press (1975).
Holland, J. H., "Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems," in Machine learning, an artificial intelligence approach, Volume II, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, eds. Los Altos, California: Morgan Kaufmann (1986).
Hopfield, J. J., "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, 79, pp. 2554-2558 (1982).
Kanerva, Pentti, "Self-propagating Search: A Unified Theory of Memory," Center for the Study of Language and Information Report No. CSLI-84-7 (1984).
Kanerva, Pentti, Sparse distributed memory, Cambridge, Mass: MIT Press, 1988.
Marr, D., "The cortex of the cerebellum," J. Physiol., 202, pp. 437-470 (1969).
Rogers, David, "Using data-tagging to improve the performance of Kanerva's sparse distributed memory," Research Institute for Advanced Computer Science Technical Report 88.1, NASA Ames Research Center (1988a).
Rogers, David, "Kanerva's sparse distributed memory: an associative memory algorithm well-suited to the Connection Machine," Research Institute for Advanced Computer Science Technical Report 88.32, NASA Ames Research Center (1988b).
330 | 1,301 |
Reconstructing Stimulus Velocity from
Neuronal Responses in Area MT
Wyeth Bair, James R. Cavanaugh, J. Anthony Movshon
Howard Hughes Medical Institute, and
Center for Neural Science
New York University
4 Washington Place, Room 809
New York, NY 10003
wyeth@cns.nyu.edu, jamesc@cns.nyu.edu, tony@cns.nyu.edu
Abstract
We employed a white-noise velocity signal to study the dynamics
of the response of single neurons in the cortical area MT to visual
motion. Responses were quantified using reverse correlation, optimal linear reconstruction filters, and reconstruction signal-to-noise
ratio (SNR). The SNR and lower bound estimates of information
rate were lower than we expected. Ninety percent of the information was transmitted below 18 Hz, and the highest lower bound on
bit rate was 12 bits/s. A simulated opponent motion energy subunit with Poisson spike statistics was able to out-perform the MT
neurons. The temporal integration window, measured from the reverse correlation half-width, ranged from 30-90 ms. The window
was narrower when a stimulus moved faster, but did not change
when temporal frequency was held constant.
1 INTRODUCTION
Area MT neurons can show precise and rapid modulation in response to dynamic
noise stimuli (Bair and Koch, 1996); however, computational models of these neurons and their inputs (Adelson and Bergen, 1985; Heeger, 1987; Grzywacz and
Yuille, 1990; Emerson et al., 1992; Qian et al., 1994; Nowlan and Sejnowski, 1995)
have primarily been compared to electrophysiological results based on time and ensemble averaged responses to deterministic stimuli, e.g., drifting sinusoidal gratings.
Using methods introduced by Bialek et al. (1991) and further analyzed by Gabbiani
and Koch (1996) for the estimation of information transmission by a neuron about
a white-noise stimulus, we set out to compare the responses of MT neurons for
white-noise velocity signals to those of a model based on opponent motion energy
sub-units.
The results of two analyses are summarized here. In the first, we compute a lower
bound on information transmission using optimal linear reconstruction filters and
examine the SNR as a function of temporal frequency. The second analysis examines
changes in the reverse correlation (the cross-correlation between the stimulus and
the resulting spike trains) as a function of spatial frequency and temporal frequency
of the moving stimulus pattern.
2 EXPERIMENTAL METHODS
Spike trains were recorded extracellularly from 26 well-isolated single neurons in area
MT of four anesthetized, paralyzed macaque monkeys using methods described in
detail elsewhere (Levitt et al., 1994). The size of the receptive fields and the spatial
and temporal frequency preferences of the neurons were assessed quantitatively
using drifting sinusoidal gratings, after which a white-noise velocity signal, s(t),
was used to modulate the position (within a fixed square aperture) of a low-pass
filtered 1D Gaussian white-noise (GWN) pattern. The frame rate of the display
was 54 Hz or 81 Hz. The spatial noise pattern consisted of 256 discrete intensity
values, one per spatial unit. Every 19 ms (or 12 ms at 81 Hz), the pattern shifted, or
jumped, Δ spatial units along the axis of the neuron's preferred direction, where Δ,
the jump size, was chosen according to a Gaussian, binary, or uniform probability
distribution. The maximum spatial frequency in the pattern was limited to prevent
aliasing.
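A rough sketch of how such a jump sequence can be generated is given below (Python; the frame count and jump scale are illustrative, and the rendering of the spatial pattern and aperture is not reproduced):

    import numpy as np

    def jump_sequence(n_frames, dist="gaussian", jump_sd=1.0, rng=np.random.default_rng(0)):
        """Per-frame jumps (in spatial units) along the preferred axis; the pattern position
        is the running sum of the jumps."""
        if dist == "gaussian":
            jumps = rng.normal(0.0, jump_sd, n_frames)
        elif dist == "binary":
            jumps = rng.choice([-jump_sd, jump_sd], n_frames)
        elif dist == "uniform":
            jumps = rng.uniform(-jump_sd * np.sqrt(3.0), jump_sd * np.sqrt(3.0), n_frames)
        else:
            raise ValueError(dist)
        return jumps, np.cumsum(jumps)

    # e.g. a 30 s sequence at 54 frames/s
    velocity_signal, positions = jump_sequence(n_frames=30 * 54, dist="gaussian")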
In the first type of experiment, 10 trials of a 30 s noise sequence, s(t), and 10 trials of
its reverse, -s(t), were interleaved. A standard GWN spatial pattern and velocity
modulation pattern were used for all cells, but for each cell, the stimulus was scaled
for the receptive field size and aligned to the axis of preferred motion. Nine cells
were tested with Gaussian noise at 81 Hz, 15 cells with binary noise at 81 Hz and
54 Hz, and 10 cells with uniform noise at 54 Hz.
In another experiment, a sinusoidal spatial pattern (rather than GWN) moved
according to a binary white-noise velocity signal. Trials were interleaved with all
combinations of four spatial frequencies at octave intervals and four relative jump
sizes: 1/4, 1/8, 1/16, and 1/32 of each spatial period. Typically 10 trials of length
3 s were run. Four cells were tested at 54 Hz and seven at 81 Hz.
3 ANALYSIS AND MODELING METHODS
We used the linear reconstruction methods introduced by Bialek et al. (1991)
and further analyzed by Gabbiani and Koch (1996) to compute an optimal linear
estimate of the stimulus, s(t), described above, based on the neuronal response, x(t).
A single neuronal response was defined as the spike train produced by s (t) minus
the spike train produced by -s(t). This overcomes the neuron's limited dynamic
range in response to anti-preferred direction motion (Bialek et al., 1991).
The linear filter, h(t), which when convolved with the response yields the minimum
mean square error estimate, s_est(t), of the stimulus can be described in terms of its
Fourier transform,

    H(ω) = R_sx(-ω) / R_xx(ω),        (1)

where R_sx(ω) is the Fourier transform of the cross-correlation r_sx(τ) of the stimulus and the resulting spike train and R_xx(ω) is the power spectrum, i.e., the Fourier transform of the auto-correlation, of the spike train (for details and references, see Bialek et al., 1991; Gabbiani and Koch, 1996). The noise, n(t), is defined as the
difference between the stimulus and the reconstruction,
    n(t) = s_est(t) - s(t),        (2)

and the SNR is defined as
    SNR(ω) = R_ss(ω) / R_nn(ω),        (3)

where R_ss(ω) is the Fourier power spectrum of the stimulus and R_nn(ω) is the power spectrum of the noise. If the stimulus amplitude distribution is Gaussian, then SNR(ω) can be integrated to give a lower bound on the rate of information
transmission in bits/s (Gabbiani and Koch, 1996).
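As a numerical sketch of this pipeline (Python; not the authors' analysis code, and the segment averaging, regularization constants, and sampling rate are illustrative choices), the function below estimates H(ω) from averaged cross- and power spectra, forms the reconstruction noise, and integrates log2 of the SNR for the lower bound, which is only valid under the Gaussian-stimulus assumption stated above:

    import numpy as np

    def reconstruct_and_bound(stimulus, response, fs, n_segments=30):
        """Optimal linear filter, reconstruction SNR(f), and a lower bound on the
        information rate (bits/s) from a stimulus s(t) and a spike-train response x(t),
        both sampled at fs and cut into n_segments for spectral averaging."""
        n = (len(stimulus) // n_segments) * n_segments
        s = stimulus[:n].reshape(n_segments, -1)
        x = response[:n].reshape(n_segments, -1)
        seg_len = s.shape[1]
        S, X = np.fft.rfft(s, axis=1), np.fft.rfft(x, axis=1)
        R_sx = np.mean(np.conj(X) * S, axis=0)          # averaged cross-spectrum
        R_xx = np.mean(np.abs(X) ** 2, axis=0)          # spike-train power spectrum
        R_ss = np.mean(np.abs(S) ** 2, axis=0)          # stimulus power spectrum
        H = R_sx / np.maximum(R_xx, 1e-12)              # Eqn. (1)
        s_est = np.fft.irfft(H * X, n=seg_len, axis=1)
        noise = s_est - s                                # Eqn. (2)
        R_nn = np.mean(np.abs(np.fft.rfft(noise, axis=1)) ** 2, axis=0)
        snr = R_ss / np.maximum(R_nn, 1e-12)             # Eqn. (3)
        df = fs / seg_len                                # frequency bin width in Hz
        info_rate = float(np.sum(np.log2(np.maximum(snr, 1.0))) * df)
        return H, snr, info_rate

For the 30 s records described here one might, for example, bin the response at 1 ms resolution (fs = 1000) and average over thirty 1 s segments; those particular choices are assumptions, not values taken from the text.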
The motion energy model consisted of opponent energy sub-units (Adelson and
Bergen, 1985) implemented with Gabor functions (Heeger, 1987; Grzywacz and
Yuille, 1990) in two spatial dimensions and time. The spatial frequency of the
Gabor function was set to match the spatial frequency of the stimulus, and the
temporal frequency was set to match that induced by a sequence of jumps equal to
the standard deviation (SD) of the amplitude distribution (which is the jump size in
the case of a binary distribution). We approximated causality by shifting the output
forward in time before computing the optimal linear filter. The model operated on
the same stimulus patterns and noise sequences that were used to generate stimuli
for the neurons. The time-varying response of the model was broken into two halfwave rectified signals which were interpreted as the firing probabilities of two units,
a neuron and an anti-neuron that preferred the opposite direction of motion. From
each unit, ten 30 s long spike trains were generated with inhomogeneous Poisson
statistics. These 20 model spike trains were used to reconstruct the velocity signal
in the same manner as the MT neuron output.
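The sub-unit can be sketched as follows (Python, reduced to one spatial dimension for brevity; the kernel construction, the single fixed spatial window, and the Poisson gain are simplifying assumptions rather than the exact model used here, though the Gabor SDs and frequencies quoted later in the text are used as defaults):

    import numpy as np

    def opponent_energy_rates(stimulus_xt, dx, dt, sf=0.625, tf=20.0, sigma_x=0.5, sigma_t=0.0125):
        """Half-wave rectified opponent motion energy from a space-time stimulus.
        stimulus_xt has shape (time, space); sf is in cyc/deg, tf in Hz, sigma_x in deg, sigma_t in s."""
        t = np.arange(0.0, 6.0 * sigma_t, dt)
        x = np.arange(-3.0 * sigma_x, 3.0 * sigma_x + dx, dx)
        T, X = np.meshgrid(t, x, indexing="ij")
        env = np.exp(-X**2 / (2 * sigma_x**2) - (T - 3 * sigma_t)**2 / (2 * sigma_t**2))

        def energy(direction):
            phase = 2.0 * np.pi * (sf * X - direction * tf * (T - 3 * sigma_t))
            even, odd = env * np.cos(phase), env * np.sin(phase)   # quadrature pair of space-time Gabors
            return correlate_xt(stimulus_xt, even) ** 2 + correlate_xt(stimulus_xt, odd) ** 2

        opponent = energy(+1.0) - energy(-1.0)                     # preferred minus null energy
        return np.maximum(opponent, 0.0), np.maximum(-opponent, 0.0)   # the "neuron" and "anti-neuron"

    def correlate_xt(stimulus_xt, kernel_xt):
        """Space-time correlation over one fixed spatial window, one output per time bin (causal in time).
        Assumes stimulus_xt has at least as many spatial samples as the kernel."""
        T_k, X_k = kernel_xt.shape
        padded = np.vstack([np.zeros((T_k - 1, stimulus_xt.shape[1])), stimulus_xt])
        return np.array([np.sum(padded[i:i + T_k, :X_k] * kernel_xt) for i in range(len(stimulus_xt))])

    def poisson_spikes(rate, dt, gain=50.0, rng=np.random.default_rng(0)):
        """Inhomogeneous Poisson spikes (at most one per bin) from a non-negative rate signal; gain is hypothetical."""
        return rng.random(len(rate)) < np.clip(gain * rate * dt, 0.0, 1.0)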
4 RESULTS
Stimulus reconstruction. Optimal linear reconstruction filters, h(t), were computed for 26 MT neurons from responses to 30 s sequences of white-noise motion. A
typical h(t), shown in Fig. 1A (large dots), was dominated by a single positive lobe, often preceded by a smaller negative lobe. It was thinner than the reverse correlation r_sx(τ) (Fig. 1A, small dots) from which it was derived due to the division by the low-pass power spectrum of the spikes (see Eqn. 1). Also, r_sx(τ) occasionally had a slower, trailing negative lobe but did not have the preceding negative lobe of h(t). On average, h(t) peaked at -69 ms (SD 17) and was 33 ms (SD 12) wide at half-height. The peak for r_sx(τ) occurred at the same time, but the width was 41 ms (SD 15), ranging from 30-90 ms. The point of half-rise on the right side of the peak was -53 ms (SD 9) for h(t) and -51 ms (SD 9) for r_sx(τ). For all plots,
vertical axes for velocity show normalized stimulus velocity, i.e., stimulus velocity
was scaled to have unity SD before all computations.
Fig. 1C (dots) shows the SNR for the reconstruction using the h(t) in panel A. For 8 cells tested with Gaussian velocity noise, the integral of the log of the SNR gives a lower bound for information transmission, which was 6.7 bits/s (SD 2.8), with a high value of 12.3 bits/s. Most of the information was carried below 10 Hz, and 90% of the information was carried below 18.4 Hz (SD 2.1). In Fig. 1D, the failure of the
reconstruction (dots) to capture higher frequencies in the stimulus (thick line) is
directly visible. Both h(t) and SNR(ω) were similar but on average slightly greater
in amplitude for tests using binary and uniform distributed noise. Gaussian noise
has many jumps at or near zero which may induce little or no response.
[Figure 1 plots: (A, B) filter amplitude vs. time relative to spike (ms); (C) SNR vs. frequency (Hz); (D) normalized velocity vs. time (ms).]
Figure 1: (A) Optimal linear filter h(t) (big dots) from Eqn. 1 and cross-correlation r_sx(τ) (small dots) for one MT neuron. (B) h(t) (thick line) and r_sx(τ) (thin line) for an opponent motion energy model. (C) SNR(ω) for the neuron (dots) and the model (line). (D) Reconstruction for the neuron (dots) and model (thin line) of the stimulus velocity (thick line). Velocity was normalized to unity SD. Curves for r_sx(τ) were scaled by 0.5. Note the different vertical scale in B.
An opponent motion energy model using Gabor functions was simulated with spatial SD 0.5 deg, spatial frequency 0.625 cyc/deg, temporal SD 12.5 ms, and temporal frequency 20 Hz. The model was tested with a Gaussian velocity stimulus with SD 32 deg/s. Because an arbitrary scaling of the spatial parameters in the model does not
affect the temporal properties of the information transmission, this was effectively
the same stimulus that yielded the neuronal data shown in Fig. 1A. Spike trains
were generated from the model at 20 Hz (matched to the neuron) and used to compute h(t) (Fig. 1B, thick line). The model h(t) was narrower than that for the MT
neuron, but was similar to h(t) for V1 neurons that have been tested (unpublished analysis). This simple model of a putative input sub-unit to MT transmitted 29 bits/s, more than the best MT neurons studied here. The SNR and the reconstruction for the model are shown in Fig. 1C,D (thin lines). The filter h(t) for
the model (Fig. 1B thick line) was more symmetric than that for the neuron due to
the symmetry of the Gabor function used in the model.
[Figure 2 plots: reverse correlation vs. time relative to spike (ms) for Neuron 1 (left) and Neuron 2 (right), at jump sizes 1/4 to 1/32 of the spatial period.]
Figure 2: The width of r_sx(τ) changes with temporal frequency, but not spatial frequency. Data from two neurons are shown, one on the left, one on the right. Top: r_sx(τ) is shown for binary velocity stimuli with jump sizes 1/4, 1/8, 1/16,
and 1/32 (thick to thin lines) of the spatial period (10 trials, 3 s/trial). The left side
of the peak shifts leftward as jump size decreases. See text for statistics. Bottom:
The relative jump size, thus temporal frequency, was constant for the four cases in
each panel (1/32 on the left, 1/16 on the right). The peaks do not shift left or right
as spatial frequency and jump size change inversely. Thicker lines represent larger
jumps and lower spatial frequencies.
Changes in r_sx(τ). We tested 11 neurons with a set of binary white-noise motion stimuli that varied in spatial frequency and jump size. The spatial patterns were sinusoidal gratings. The peaks in r_sx(τ) and h(t) were wider when smaller jumps
(slower velocities) were used to move the same spatial pattern. Fig. 2 shows data
for two neurons plotted for constant spatial frequency (top) and constant effective
temporal frequency, or contrast frequency (bottom). Jump sizes were 1/4, 1/8,
1/16, and 1/32 (thick to thin lines, top panels) of the period of the spatial pattern.
(Note, a 1/2 period jump would cause an ambiguous motion.) Relative jump size
was constant in the bottom panels, but both the spatial period and the velocity
increased in octaves from thin to thick lines. One of the plots in the upper panel
also appears in the lower panel for each neuron. For 26 conditions in 11 MT neurons
(up to 4 spatial frequencies per neuron) the left and right half-rise points of the peak of r_sx(τ) shifted leftward by 19 ms (SD 12) and 4.5 ms (SD 4.0), respectively, as
jump size decreased. The width, therefore, increased by 14 ms (SD 12). These
changes were statistically significant (p < 0.001, t-test). In fact, the left half-rise
point moved leftward in all 26 cases, and in no case did the width at half-height
decrease. On the other hand, there was no significant change in the peak width
or half-rise times when temporal frequency was constant, as demonstrated in the
lower panels of Fig. 2.
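The half-height statistics above can be extracted from a reverse correlation with a few lines of array code; the sketch below (illustrative Python, not the authors' analysis code) finds the half-rise times on each side of the main peak by linear interpolation:

    import numpy as np

    def half_rise_points(r_sx, lags_ms):
        """Left and right half-height crossing times (ms) of the main peak of r_sx(tau),
        and the resulting width at half-height."""
        peak = int(np.argmax(r_sx))
        half = r_sx[peak] / 2.0

        def crossing(indices):
            # Scan away from the peak for the first drop below half height, then interpolate.
            for i, j in zip(indices[:-1], indices[1:]):
                if r_sx[j] < half <= r_sx[i]:
                    frac = (r_sx[i] - half) / (r_sx[i] - r_sx[j])
                    return lags_ms[i] + frac * (lags_ms[j] - lags_ms[i])
            return float(lags_ms[indices[-1]])

        left = crossing(np.arange(peak, -1, -1))
        right = crossing(np.arange(peak, len(r_sx)))
        return left, right, right - left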
5 DISCUSSION
From other experiments using frame-based displays to present moving stimuli to
MT cells, we know that roughly half of the cells can modulate to a 60 Hz signal
in the preferred direction and that some provide reliable bursts of spikes on each
frame but do not respond to null direction motion. Therefore, one might expect that
these cells could easily be made to transmit nearly 60 bits/s by moving the stimulus
randomly in either the preferred or null direction on each video frame. However,
the stimuli that we employed here did not result in such high frequency modulation,
nor did our best lower bound estimate of information transmission for an MT cell,
12 bits/s, approach the 64 bits/s capacity of the motion sensitive HI neuron in the
fly (Bialek et al., 1991). In recordings from seven V1 neurons (not shown here),
two directional complex cells responded to the velocity noise with high temporal
precision and fired a burst of spikes on almost every preferred motion frame and no
spikes on null motion frames. At 53 frames/s, these cells transmitted over 40 bits/s.
We hope that further investigation will reveal whether the lack of high frequency
modulation in our MT experiments was due to statistical variation between animals,
the structure of the stimulus, or possibly to anesthesia.
In spite of finding less high frequency bandwidth than expected, we were able to
document consistent changes, namely narrowing, of the temporal integration window of MT neurons as temporal frequency increased. Similar changes in the time
constant of motion processing have been reported in the fly visual system, where
it appears that neither velocity nor temporal frequency alone can account for all
changes (de Ruyter et al., 1986; Borst & Egelhaaf, 1987). The narrowing of r_sx(τ)
with higher temporal frequency does not occur in our simple motion energy model,
which lacks adaptive mechanisms, but it could occur in a model which integrated
signals from many motion energy units having distributed temporal frequency tuning, even without other sources of adaptation.
We were not able to assess whether changes in the integration window developed
quickly at the beginning of individual trials, but an analysis not described here at
least indicates that there was very little change in the position and width of r_sx(τ)
and h(t) after the first few seconds during the 30 s trials.
Acknowledgements
This work was funded by the Howard Hughes Medical Institute. We thank Fabrizio Gabbiani and Christof Koch for helpful discussion, Lawrence P. O'Keefe for
assistance with electrophysiology, and David Tanzer for assistance with software.
References
Adelson EH, Bergen JR (1985) Spatiotemporal energy models for the perception of
motion. J Opt Soc Am A 2:284- 299.
Bair W, Koch C (1996) Temporal precision of spike trains in extrastriate cortex of
the behaving macaque monkey. Neural Comp 8:1185-1202.
Bialek W, Rieke F, de Ruyter van Steveninck RR, Warland D (1991) Reading a
neural code. Science 252:1854- 1857.
Borst A, Egelhaaf M (1987) Temporal modulation of luminance adapts time constant of fly movement detectors. BioI Cybern 56:209-215.
Emerson RC, Bergen JR, Adelson EH (1992) Directionally selective complex cells
and the computation of motion energy in cat visual cortex. Vision Res 32:203-218.
Gabbiani F, Koch C (1996) Coding of time-varying signals in spike trains of
integrate-and-fire neurons with random threshold. Neural Comp 8:44-66.
Grzywacz NM, Yuille AL (1990) A model for the estimate of local image velocity
by cells in the visual cortex. Proc R Soc Lond B 239:129-161.
Heeger DJ (1987) Model for the extraction of image flow. J Opt Soc Am A 4:1455-1471.
Levitt JB, Kiper DC, Movshon JA (1994) Receptive fields and functional architecture of macaque V2. J .Neurophys. 71:2517-2542.
Nowlan SJ, Sejnowski TJ (1994) Filter selection model for motion segmentation
and velocity integration. J Opt Soc Am A 11:3177-3200.
Qian N, Andersen RA, Adelson EH (1994) Transparent motion perception as detection of unbalanced motion signals. 3. Modeling. J Neurosci 14:7381-7392.
de Ruyter van Steveninck R, Zaagman WH, Mastebroek HAK (1986) Adaptation
of transient responses of a movement-sensitive neuron in the visual-system of
the blowfly Calliphora erythrocephala. Biol Cybern 54:223-236.
On-line Policy Improvement using
Monte-Carlo Search
Gerald Tesauro
IBM T. J. Watson Research Center
P. O. Box 704
Yorktown Heights, NY 10598
Gregory R. Galperin
MIT AI Lab
545 Technology Square
Cambridge, MA 02139
Abstract
We present a Monte-Carlo simulation algorithm for real-time policy
improvement of an adaptive controller. In the Monte-Carlo simulation, the long-term expected reward of each possible action is
statistically measured, using the initial policy to make decisions in
each step of the simulation. The action maximizing the measured
expected reward is then taken, resulting in an improved policy. Our
algorithm is easily parallelizable and has been implemented on the
IBM SP1 and SP2 parallel-RISC supercomputers.
We have obtained promising initial results in applying this algorithm to the domain of backgammon. Results are reported for a
wide variety of initial policies, ranging from a random policy to
TD-Gammon, an extremely strong multi-layer neural network. In
each case, the Monte-Carlo algorithm gives a substantial reduction,
by as much as a factor of 5 or more, in the error rate of the base
players. The algorithm is also potentially useful in many other
adaptive control applications in which it is possible to simulate the
environment.
1 INTRODUCTION
Policy iteration, a widely used algorithm for solving problems in adaptive control, consists of repeatedly iterating the following policy improvement computation
(Bertsekas, 1995): (1) First, a value function is computed that represents the longterm expected reward that would be obtained by following an initial policy. (This
may be done in several ways, such as with the standard dynamic programming algorithm.) (2) An improved policy is then defined which is greedy with respect to
that value function. Policy iteration is known to have rapid and robust convergence
properties, and for Markov tasks with lookup-table state-space representations, it
is guaranteed to converge to the optimal policy.
In typical uses of policy iteration, the policy improvement step is an extensive
off-line procedure. For example, in dynamic programming, one performs a sweep
through all states in the state space. Reinforcement learning provides another approach to policy improvement; recently, several authors have investigated using RL
in conjunction with nonlinear function approximators to represent the value functions and/or policies (Tesauro, 1992; Crites and Barto, 1996; Zhang and Dietterich,
1996). These studies are based on following actual state-space trajectories rather
than sweeps through the full state space, but are still too slow to compute improved
policies in real time. Such function approximators typically need extensive off-line
training on many trajectories before they achieve acceptable performance levels.
In contrast, we propose an on-line algorithm for computing an improved policy in
real time. We use Monte-Carlo search to estimate Vp(z, a), the expected value of
performing action a in state z and subsequently executing policy P in all successor
states. Here, P is some given arbitrary policy, as defined by a "base controller"
(we do not care how P is defined or was derived; we only need access to its policy
decisions). In the Monte-Carlo search, many simulated trajectories starting from
(z, a) are generated following P, and the expected long-term reward is estimated
by averaging the results from each of the trajectories. (N ote that Monte-Carlo
sampling is needed only for non-deterministic tasks, because in a deterministic
task, only one trajectory starting from (z, a) would need to be examined.) Having estimated Vp(z, a), the improved policy pI at state z is defined to be the
action which produced the best estimated value in the Monte-Carlo simulation, i.e.,
P'(z) = argmax_a V_P(z, a).
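A minimal sketch of this decision rule in Python is given below; it is illustrative only, and simulate_step, legal_actions, base_policy, reward and is_terminal are hypothetical stand-ins for the task simulator and the base controller P.

    def monte_carlo_decision(state, legal_actions, base_policy, simulate_step,
                             reward, is_terminal, n_trials=1000):
        # Pick the action whose Monte-Carlo estimate of V_P(state, action) is largest.
        best_action, best_value = None, float("-inf")
        for action in legal_actions(state):
            total = 0.0
            for _ in range(n_trials):
                s = simulate_step(state, action)          # stochastic successor of (state, action)
                while not is_terminal(s):
                    s = simulate_step(s, base_policy(s))  # follow the base policy P
                total += reward(s)                        # long-term outcome of this trajectory
            value = total / n_trials
            if value > best_value:
                best_action, best_value = action, value
        return best_action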
1.1 EFFICIENT IMPLEMENTATION
The proposed Monte-Carlo algorithm could be very CPU-intensive, depending on
the number of initial actions that need to be simulated, the number of time steps
per trial needed to obtain a meaningful long-term reward, the amount of CPU per
time step needed to make a decision with the base controller, and the total number
of trials needed to make a Monte-Carlo decision. The last factor depends on both
the variance in expected reward per trial, and on how close the values of competing
candidate actions are.
We propose two methods to address the potentially large CPU requirements of this
approach. First, the power of parallelism can be exploited very effectively. The
algorithm is easily parallelized with high efficiency: the individual Monte-Carlo
trials can be performed independently, and the combining of results from different
trials is a simple averaging operation. Hence there is relatively little communication
between processors required in a parallel implementation.
The second technique is to continually monitor the accumulated Monte-Carlo statistics during the simulation, and to prune away both candidate actions that are
sufficiently unlikely (outside some user-specified confidence bound) to be selected
as the best action, as well as candidates whose values are sufficiently close to the
value of the current best estimate that they are considered equivalent (i.e., choosing either would not make a significant difference). This technique requires more
communication in a parallel implementation, but offers potentially large savings in
the number of trials needed to make a decision.
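One possible realization of this pruning is sketched below, assuming roughly Gaussian trial outcomes; it is not the authors' exact criterion, and the z threshold is an arbitrary illustrative choice.

    import math

    def prune_candidates(stats, z=3.0):
        # stats maps each candidate action to (n_trials, mean, sum_of_squared_deviations).
        # Keep only candidates whose upper confidence bound still reaches the lower
        # bound of the current leader; the rest are unlikely to end up best.
        bounds = {}
        for a, (n, mean, ssd) in stats.items():
            sem = math.sqrt(ssd / max(n - 1, 1) / n)   # standard error of the mean
            bounds[a] = (mean - z * sem, mean + z * sem)
        best_lower = max(lo for lo, _ in bounds.values())
        return [a for a, (lo, hi) in bounds.items() if hi >= best_lower]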
2 APPLICATION TO BACKGAMMON
We have initially applied the Monte-Carlo algorithm to making move decisions in
the game of backgammon. This is an absorbing Markov process with perfect state-
space information, and one has a perfect model of the nondeterminism in the system,
as well as the mapping from actions to resulting states.
In backgammon parlance, the expected value of a position is known as the "equity"
of the position, and estimating the equity by Monte-Carlo sampling is known as
performing a "rollout." This involves playing the position out to completion many
times with different random dice sequences, using a fixed policy P to make move
decisions for both sides. The sequences are terminated at the end of the game (when
one side has borne off all 15 checkers), and at that time a signed outcome value
(called "points") is recorded. The outcome value is positive if one side wins and
negative if the other side wins, and the magnitude of the value can be either 1, 2, or
3, depending on whether the win was normal, a gammon, or a backgammon. With
normal human play, games typically last on the order of 50-60 time steps. Hence
if one is using the Monte-Carlo player to play out actual games, the Monte-Carlo
trials will on average start out somewhere in the middle of a game, and take about
25-30 time steps to reach completion.
In backgammon there are on average about 20 legal moves to consider in a typical
decision. The candidate plays frequently differ in expected value by on the order of
.01. Thus in order to resolve the best play by Monte-Carlo sampling, one would need
on the order of 10K or more trials per candidate, or a total of hundreds of thousands
of Monte-Carlo trials to make one move decision. With extensive statistical pruning
as discussed previously, this can be reduced to several tens of thousands of trials.
Multiplying this by 25-30 decisions per trial with the base player, we find that
about a million base-player decisions have to be made in order to make one Monte-Carlo decision. With typical human tournament players taking about 10 seconds per move, we need to parallelize to the point that we can achieve at least 100K base-player decisions per second.
Our Monte-Carlo simulations were performed on the IBM SP1 and SP2 parallel-RISC supercomputers at IBM Watson and at Argonne National Laboratories. Each
SP node is equivalent to a fast RSj6000, with floating-point capability on the order
of 100 Mflops. Typical runs were on configurations of 16-32 SP nodes, with parallel
speedup efficiencies on the order of 90%.
We have used a variety of base players in our Monte-Carlo simulations, with widely
varying playing abilities and CPU requirements. The weakest (and fastest) of these
is a purely random player. We have also used a few single-layer networks (i.e., no
hidden units) with simple encodings of the board state, that were trained by backpropagation on an expert data set (Tesauro, 1989). These simple networks also make
fast move decisions, and are much stronger than a random player, but in human
terms are only at a beginner-to-intermediate level. Finally, we used some multi-layer
nets with a rich input representation, encoding both the raw board state and many
hand-crafted features, trained on self-play using the TD(λ) algorithm (Sutton, 1988;
Tesauro, 1992). Such networks play at an advanced level, but are too slow to make
Monte-Carlo decisions in real time based on full rollouts to completion. Results for
all these players are presented in the following two sections.
2.1 RESULTS FOR SINGLE-LAYER NETWORKS
We measured the game-playing strength of three single-layer base players, and of
the corresponding Monte-Carlo players, by playing several thousand games against
a common benchmark opponent. The benchmark opponent was TD-Gammon 2.1
(Tesauro, 1995), playing on its most basic playing level (I-ply search, i.e., no lookahead). Table 1 shows the results. Lin-1 is a single-layer neural net with only the
raw board description (number of White and Black checkers at each location) as
Network   Base player   Monte-Carlo player   Monte-Carlo CPU
Lin-1     -0.52 ppg     -0.01 ppg            5 sec/move
Lin-2     -0.65 ppg     -0.02 ppg            5 sec/move
Lin-3     -0.32 ppg     +0.04 ppg            10 sec/move
Table 1: Performance of three simple linear evaluators, for both initial base players
and corresponding Monte-Carlo players. Performance is measured in terms of expected points per game (ppg) vs. TD-Gammon 2.1 1-ply. Positive numbers indicate that the player here is better than TD-Gammon. Base player stats are the results
of 30K trials (std. dev. about .005), and Monte-Carlo stats are the results of 5K
trials (std. dev. about .02). CPU times are for the Monte-Carlo player running on
32 SP 1 nodes.
input. Lin-2 uses the same network structure and weights as Lin-l, plus a significant amount of random noise was added to the evaluation function, in order to
deliberately weaken its playing ability. These networks were highly optimized for
speed, and are capable of making a move decision in about 0.2 msec on a single SP1
node. Lin-3 uses the same raw board input as the other two players, plus it has a
few additional hand-crafted features related to the probability of a checker being
hit; there is no noise added. This network is a significantly stronger player, but is
about twice as slow in making move decisions.
We can see in Table 1 that the Monte-Carlo technique produces dramatic improvement in playing ability for these weak initial players. As base players, Lin-1 should
be regarded as a bad intermediate player, while Lin-2 is substantially worse and is
probably about equal to a human beginner. Both of these networks get trounced
by TD-Gammon, which on its 1-ply level plays at a strong advanced level. Yet the resulting Monte-Carlo players from these networks appear to play about equal to TD-Gammon 1-ply. Lin-3 is a significantly stronger player, and the resulting Monte-Carlo player appears to be clearly better than TD-Gammon 1-ply. It is estimated to be about equivalent to TD-Gammon on its 2-ply level, which plays at a strong
expert level.
The Monte-Carlo benchmarks reported in Table 1 involved substantial amounts of
CPU time. At 10 seconds per move decision, and 25 mOve decisions per game,
playing 5000 games against TD-Gammon required about 350 hours of 32-node SP
machine time. We have also developed an alternative testing procedure, which
is much less expensive in CPU time, but still seems to give a reasonably accurate
measure of performance strength. We measure the average equity loss of the MonteCarlo player on a suite of test positions. We have a collection of about 800 test
positions, in which every legal play has been extensively rolled out by TD-Gammon 2.1 1-ply. We then use the TD-Gammon rollout data to grade the quality of a given
player's move decisions.
Test set results for the three linear evaluators, and for a random evaluator, are
displayed in Table 2. It is interesting to note for comparison that the TD-Gammon 1-ply base player scores 0.0120 on this test set measure, comparable to the Lin-1 Monte-Carlo player, while the TD-Gammon 2-ply base player scores 0.00843, comparable to the Lin-3 Monte-Carlo player. These results are exactly in line with what
we measured in Table 1 using full-game benchmarking, and thus indicate that the
test-set methodology is in fact reasonably accurate. We also note that in each case,
there is a huge error reduction of potentially a factor of 4 or more in using the
Monte-Carlo technique. In fact, the rollouts summarized in Table 2 were done using fairly aggressive statistical pruning; we expect that rolling out decisions more
Evaluator   Base loss   Monte-Carlo loss   Ratio
Random      0.330       0.131              2.5
Lin-1       0.040       0.0124             3.2
Lin-2       0.0665      0.0175             3.8
Lin-3       0.0291      0.00749            3.9
Table 2: Average equity loss per move decision on an 800-position test set, for both
initial base players and corresponding Monte-Carlo players. Units are ppg; smaller
loss values are better. Also computed is ratio of base player loss to Monte-Carlo
loss.
extensively would give error reduction ratios closer to factor of 5, albeit at a cost
of increased CPU time.
2.2 RESULTS FOR MULTI-LAYER NETWORKS
Using large multi-layer networks to do full rollouts is not feasible for real-time
move decisions, since the large networks are at least a factor of 100 slower than the
linear evaluators described previously. We have therefore investigated an alternative
Monte-Carlo algorithm, using so-called "truncated rollouts." In this technique trials
are not played out to completion, but instead only a few steps in the simulation
are taken, and the neural net's equity estimate of the final position reached is used
instead of the actual outcome. The truncated rollout algorithm requires much less
CPU time, due to two factors: First, there are potentially many fewer steps per
trial. Second, there is much less variance per trial, since only a few random steps
are taken and a real-valued estimate is recorded, rather than many random steps
and an integer final outcome. These two factors combine to give at least an order
of magnitude speed-up compared to full rollouts, while still giving a large error
reduction relative to the base player.
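A sketch of such a truncated rollout estimate is given below; value_estimate stands in for the network's real-valued equity estimate, and the other helpers are hypothetical.

    def truncated_rollout_value(state, action, base_policy, simulate_step,
                                value_estimate, n_steps=7, n_trials=200):
        # Play only n_steps random steps with the base policy, then substitute the
        # evaluator's equity estimate of the reached position for the true outcome.
        total = 0.0
        for _ in range(n_trials):
            s = simulate_step(state, action)
            for _ in range(n_steps):
                s = simulate_step(s, base_policy(s))
            total += value_estimate(s)   # real-valued estimate, not an integer outcome
        return total / n_trials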
Table 3 shows truncated rollout results for two multi-layer networks: TD-Gammon
2.1 1-ply, which has 80 hidden units, and a substantially smaller network with
the same input features but only 10 hidden units. The first line of data for each
network reflects very extensive rollouts and shows quite large error reduction ratios,
although the CPU times are somewhat slower than acceptable for real-time play.
(Also we should be somewhat suspicious of the 80 hidden unit result, since this was
the same network that generated the data being used to grade the Monte-Carlo
players.) The second line of data shows what happens when the rollout trials are
cut off more aggressively. This yields significantly faster run-times, at the price of
only slightly worse move decisions.
The quality of play of the truncated rollout players shown in Table 3 is substantially
better than TD-Gammon 1-ply or 2-ply, and it is also substantially better than
the full-rollout Monte-Carlo players described in the previous section. In fact, we
estimate that the world's best human players would score in the range of 0.005 to
0.006 on this test set, so the truncated rollout players may actually be exhibiting
superhuman playing ability, in reasonable amounts of SP machine time.
3 DISCUSSION
On-line search may provide a useful methodology for overcoming some of the limitations of training nonlinear function approximators on difficult control tasks. The
idea of using search to improve in real time the performance of a heuristic controller
Hidden Units   Base loss   Truncated Monte-Carlo loss      Ratio   M-C CPU
10             0.0152      0.00318 (11-step, thorough)     4.8     25 sec/move
                           0.00433 (11-step, optimistic)   3.5     9 sec/move
80             0.0120      0.00181 (7-step, thorough)      6.6     65 sec/move
                           0.00269 (7-step, optimistic)    4.5     18 sec/move
Table 3: Truncated rollout results for two multi-layer networks, with number of
hidden units and rollout steps as indicated. Average equity loss per move decision on
an 800-position test set, for both initial base players and corresponding Monte-Carlo
players. Again, units are ppg, and smaller loss values are better. Also computed is
ratio of base player loss to Monte-Carlo loss. CPU times are for the Monte-Carlo
player running on 32 SP1 nodes.
is an old one, going back at least to (Shannon, 1950). Full-width search algorithms
have been extensively studied since the time of Shannon, and have produced tremendous success in computer games such as chess, checkers and Othello. Their main
drawback is that the required CPU time increases exponentially with the depth of the search, i.e., T ~ B^D, where B is the effective branching factor and D is the
search depth. In contrast, Monte-Carlo search provides a tractable alternative for
doing very deep searches, since the CPU time for a full Monte-Carlo decision only
scales as T ~ N · B · D, where N is the number of trials in the simulation.
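The difference can be made concrete with made-up numbers (illustrative only, not measurements from the backgammon system):

    # Cost of one decision, counted in base-controller evaluations.
    B, D, N = 20, 6, 10_000        # branching factor, depth, Monte-Carlo trials
    full_width = B ** D            # ~ B^D   = 64,000,000 evaluations
    monte_carlo = N * B * D        # ~ N*B*D =  1,200,000 evaluations
    print(full_width, monte_carlo)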
In the backgammon application, for a wide range of initial policies, our on-line
Monte-Carlo algorithm, which basically implements a single step of policy iteration,
was found to give very substantial error reductions. Potentially 80% or more of the
base player's equity loss can be eliminated, depending on how extensive the Monte-Carlo trials are. The magnitude of the observed improvement is surprising to us:
while it is known theoretically that each step of policy iteration produces a strict
improvement, there are no guarantees on how much improvement one can expect.
We have also noted a rough trend in the data: as one increases the strength of the
base player, the ratio of error reduction due to the Monte-Carlo technique appears
to increase. This could reflect superlinear convergence properties of policy iteration.
In cases where the base player employs an evaluator that is able to estimate expected
outcome, the truncated rollout algorithm appears to offer favorable tradeoffs relative
to doing full rollouts to completion. While the quality of Monte-Carlo decisions is
not as good using truncated rollouts (presumably because the neural net's estimates
are biased), the degradation in quality is fairly small in at least some cases, and
is compensated by a great reduction in CPU time. This allows more sophisticated
(and thus slower) base players to be used, resulting in decisions which appear to be
both better and faster.
The Monte-Carlo backgammon program as implemented on the SP offers the potential to achieve real-time move decision performance that exceeds human capabilities.
In future work, we plan to augment the program with a similar Monte-Carlo algorithm for making doubling decisions. It is quite possible that such a program would
be by far the world's best backgammon player.
Beyond the backgammon application, we conjecture that on-line Monte-Carlo search
may prove to be useful in many other applications of reinforcement learning and
adaptive control. The main requirement is that it should be possible to simulate
the environment in which the controller operates. Since basically all of the recent
successful applications of reinforcement learning have been based on training in
simulators, this doesn't seem to be an undue burden. Thus, for example, Monte-
Carlo search may well improve decision-making in the domains of elevator dispatch
(Crites and Barto, 1996) and job-shop scheduling (Zhang and Dietterich, 1996).
We are additionally investigating two techniques for training a controller based
on the Monte-Carlo estimates. First, one could train each candidate position on
its computed rollout equity, yielding a procedure similar in spirit to TD(1). We
expect this to converge to the same policy as other TD(λ) approaches, perhaps
more efficiently due to the decreased variance in the target values as well as the
easily parallelizable nature of the algorithm. Alternately, the base position - the
initial position from which the candidate moves are being made - could be trained
with the best equity value from among all the candidates (corresponding to the
move chosen by the rollout player). In contrast, TD(λ) effectively trains the base
position with the equity of the move chosen by the base controller. Because the
improved choice of move achieved by the rollout player yields an expectation closer
to the true (optimal) value, we expect the learned policy to differ from, and possibly
be closer to optimal than, the original policy.
Acknowledgments
We thank Argonne National Laboratories for providing SPI machine time used to
perform some of the experiments reported here. Gregory Galperin acknowledges
support under Navy-ONR grant N00014-96-1-0311.
References
D. P. Bertsekas, Dynamic Programming and Optimal Control. Athena Scientific,
Belmont, MA (1995).
R. H. Crites and A. G. Barto, "Improving elevator performance using reinforcement
learning." In: D. Touretzky et al., eds., Advances in Neural Information Processing
Systems 8, 1017-1023, MIT Press (1996).
C. E. Shannon, "Programming a computer for playing chess." Philosophical Magazine 41, 265-275 (1950).
R. S. Sutton, "Learning to predict by the methods of temporal differences." Machine
Learning 3, 9-44 (1988).
G. Tesauro, "Connectionist learning of expert preferences by comparison training."
In: D. Touretzky, ed., Advances in Neural Information Processing Systems 1, 99-106, Morgan Kaufmann (1989).
G. Tesauro, "Practical issues in temporal difference learning." Machine Learning
8, 257-277 (1992).
G. Tesauro, "Temporal difference learning and TD-Gammon." Comm. of the ACM,
38:3, 58-67 (1995).
W. Zhang and T. G. Dietterich, "High-performance job-shop scheduling with a
time-delay TD(λ) network." In: D. Touretzky et al., eds., Advances in Neural
Information Processing Systems 8, 1024-1030, MIT Press (1996).
Size of multilayer networks for exact
learning: analytic approach
Andre Elisseeff
Dept Mathematiques et Informatique
Ecole Normale Superieure de Lyon
46 allee d'Italie
F69364 Lyon cedex 07, FRANCE
Helene Paugam-Moisy
LIP, URA 1398 CNRS
Ecole Normale Superieure de Lyon
46 allee d'Italie
F69364 Lyon cedex 07, FRANCE
Abstract
This article presents a new result about the size of a multilayer
neural network computing real outputs for exact learning of a finite
set of real samples. The architecture of the network is feedforward,
with one hidden layer and several outputs. Starting from a fixed
training set, we consider the network as a function of its weights.
We derive, for a wide family of transfer functions, a lower and an
upper bound on the number of hidden units for exact learning,
given the size of the dataset and the dimensions of the input and
output spaces.
1 RELATED WORKS
The context of our work is rather similar to the well-known results of Baum et al. [1,
2, 3, 5, 10], but we consider both real inputs and outputs, instead of the dichotomies
usually addressed. We are interested in learning exactly all the examples of a fixed
database, hence our work is different from stating that multilayer networks are
universal approximators [6, 8, 9]. Since we consider real outputs and not only
dichotomies, it is not straightforward to compare our results to the recent works
about the VC-dimension of multilayer networks [11, 12, 13]. Our study is more
closely related to several works of Sontag [14, 15], but with different hypotheses on
the transfer functions of the units. Finally, our approach is based on geometrical
considerations and is close to the model of Coetzee and Stonick [4].
First we define the model of network and the notations and second we develop
our analytic approach and prove the fundamental theorem. In the last section, we
discuss our point of view and propose some practical consequences of the result.
2 THE NETWORK AS A FUNCTION OF ITS WEIGHTS
General concepts on neural networks are presented in matrix and vector notations,
in a geometrical perspective. All vectors are written in bold and considered as
column vectors, whereas matrices are denoted with upper-case script.
2.1 THE NETWORK ARCHITECTURE AND NOTATIONS
Consider a multilayer network with N/ input units, N H hidden units and N s output
units. The inputs and outputs are real-valued. The hidden units compute a nonlinear function f which will be specified later on. The output units are assumed to
be linear. A learning set of N_p examples is given and fixed. For all p ∈ {1..N_p}, the
pth example is defined by its input vector d p E iRNI and the corresponding desired
output vector tp E iRNs. The learning set can be represented as an input matrix,
with both row and column notations, as follows
Similarly, the target matrix is T = [t_1, ..., t_Np]^T, with independent row vectors.

2.2 THE NETWORK AS A FUNCTION OF ITS WEIGHTS

For all h ∈ {1..N_H}, w^1_h = (w^1_{h1}, ..., w^1_{hN_I})^T ∈ R^{N_I} is the vector of the weights between all the input units and the h-th hidden unit. The input weight matrix W^1 is defined as W^1 = [w^1_1, ..., w^1_{N_H}]. Similarly, a vector w^2_s = (w^2_{s1}, ..., w^2_{sN_H})^T ∈ R^{N_H} represents the weights between all the hidden units and the s-th output unit, for all s ∈ {1..N_S}. Thus the output weight matrix W^2 is defined as W^2 = [w^2_1, ..., w^2_{N_S}].
For an input matrix V, the network computes an output matrix
Z(V) = [z(d_1), ..., z(d_Np)]^T,
where each output vector z(d_p) must be equal to the target t_p for exact learning.
The network computation can be detailed as follows, for all s ∈ {1..N_S}:

    z_s(d_p) = \sum_{h=1}^{N_H} w^2_{sh} f( \sum_{i=1}^{N_I} d_{pi} w^1_{hi} ) = \sum_{h=1}^{N_H} w^2_{sh} f(d_p^T w^1_h)

Hence, for the whole learning set, the s-th output component is

    z_s(V) = \sum_{h=1}^{N_H} w^2_{sh} [f(d_1^T w^1_h), ..., f(d_Np^T w^1_h)]^T = \sum_{h=1}^{N_H} w^2_{sh} F(V w^1_h)        (1)
In equation (1), F is a vector operator which transforms an n-vector v into an n-vector F(v) according to the relation [F(v)]_i = f([v]_i), i ∈ {1..n}. The same notation F will be used for the matrix operator. Finally, the expression of the output matrix can be deduced from equation (1) as follows:

    Z(V) = [F(V w^1_1), ..., F(V w^1_{N_H})] [w^2_1, ..., w^2_{N_S}] = F(V W^1) W^2        (2)
From equation (2), the network output matrix appears as a simple function of
the input matrix and the network weights. Unlike Coetzee and Stonick, we will
consider that the input matrix V is not a variable of the problem. Thus we express
the network output matrix Z(V) as a function of its weights. Let g be this function:

    g : R^{N_I N_H + N_H N_S} → R^{N_p N_S},    W = (W^1, W^2) ↦ F(V W^1) W^2
The g function clearly depends on the input matrix and could have been denoted by g_V, but this index will be dropped for clarity.
3 FUNDAMENTAL RESULT
3.1 PROPERTY OF FUNCTION g
Learning is said to be exact on V if and only if there exists a network such that its output matrix Z(V) is equal to the target matrix T. If g is a diffeomorphic function from R^{N_I N_H + N_H N_S} onto R^{N_p N_S}, then the network can learn any target in R^{N_p N_S} exactly. We prove that it is sufficient for the network function g to be a local diffeomorphism. Suppose there exist a set of weights X, an open subset U ⊂ R^{N_I N_H + N_H N_S} including X and an open subset V ⊂ R^{N_p N_S} including g(X) such that g is diffeomorphic from U to V. Since V is an open neighborhood of g(X), there exist a real λ and a point y in V such that T = λ(y − g(X)). Since g is diffeomorphic from U to V, there exists a set of weights Y in U such that y = g(Y), hence T = λ(g(Y) − g(X)). The output units of the network compute a linear transfer function, hence the linear combination of g(X) and g(Y) can be integrated in the output weights and a network with twice N_I N_H + N_H N_S weights can learn (V, T) exactly (see Figure 1).
Figure 1: A network for exact learning of a target T (unique output for clarity)
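A small numpy sketch of this construction is given below; it is illustrative only, uses tanh as the transfer function, and assumes the two weight sets X and Y are already known.

    import numpy as np

    def net(D, W1, W2, f=np.tanh):
        # Single-hidden-layer network applied to the whole input matrix D (Np x NI).
        return f(D @ W1) @ W2                       # output matrix, shape (Np, NS)

    def doubled_net(D, X, Y, lam, f=np.tanh):
        # Network with 2*NH hidden units computing lam * (g(Y) - g(X)):
        # the two hidden layers are stacked side by side and the scaling lam and
        # the minus sign are folded into the (linear) output weights, as in Figure 1.
        (W1x, W2x), (W1y, W2y) = X, Y
        W1 = np.hstack([W1y, W1x])                  # (NI, 2*NH)
        W2 = np.vstack([lam * W2y, -lam * W2x])     # (2*NH, NS)
        return net(D, W1, W2, f)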
For g a local diffeomorphism, it is sufficient to find a set of weights X such that the Jacobian of g at X is non-zero and to apply the theorem of local inversion. This analysis is developed in the next sections and requires some assumptions on the transfer function f of the hidden units. A function which verifies such an hypothesis (1l) will be called a 1l-function and is defined below.
3.2 DEFINITION AND THEOREM
Definition 1 Consider a function f : R → R which is C^1(R) (i.e. with continuous derivative) and which has finite limits in −∞ and +∞. Such a function is called a 1l-function iff it verifies the following property:

    (1l)    for all a ∈ R with |a| > 1,    lim_{x→±∞} | f'(ax) / f'(x) | = 0
From this hypothesis on the transfer function of all the hidden units, the fundamental result can be stated as follows
Theorem 1 Exact learning of a set of N_p examples, in general position, from R^{N_I} to R^{N_S}, can be realized by a network with linear output units and a transfer function which is a 1l-function, if the size N_H of its hidden layer verifies the following bounds:

    Lower bound: N_H = ⌈N_p N_S / (N_I + N_S)⌉ hidden units are necessary
    Upper bound: N_H = 2 ⌈N_p N_S / (N_I + N_S)⌉ hidden units are sufficient
The proof of the lower bound is straightforward, since a condition for g to be
diffeomorphic from R^{N_I N_H + N_H N_S} onto R^{N_p N_S} is the equality of its input and output space dimensions: N_I N_H + N_H N_S = N_p N_S.
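For reference, a tiny helper evaluating both bounds of Theorem 1 (the example numbers are our own, purely illustrative):

    import math

    def hidden_unit_bounds(n_examples, n_inputs, n_outputs):
        # Lower (necessary) and upper (sufficient) bounds on N_H from Theorem 1.
        lower = math.ceil(n_examples * n_outputs / (n_inputs + n_outputs))
        return lower, 2 * lower

    print(hidden_unit_bounds(100, 10, 2))   # 100 examples, 10 inputs, 2 outputs -> (17, 34)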
3.3 SKETCH OF THE PROOF FOR THE UPPER BOUND
The g function is an expression of the network as a function of its weights, for a given input matrix: g(W^1, W^2) = F(V W^1) W^2, and g can be decomposed according to its vectorial components on the learning set (which are themselves vectors of size N_S): for all p ∈ {1..N_p}, g_p(W^1, W^2) = [z_1(d_p), ..., z_{N_S}(d_p)]^T.
The derivatives of g with respect to the input weight matrix W^1 are, for all i ∈ {1..N_I} and all h ∈ {1..N_H},

    ∂g_p / ∂w^1_{hi} = [ w^2_{1h} f'(d_p^T w^1_h) d_{pi}, ..., w^2_{N_S h} f'(d_p^T w^1_h) d_{pi} ]^T

For the output weight matrix W^2, the derivatives of g are, for all h ∈ {1..N_H} and all s ∈ {1..N_S},

    ∂g_p / ∂w^2_{sh} = [ 0, ..., 0, f(d_p^T w^1_h), 0, ..., 0 ]^T    (s−1 zeros before the nonzero entry, N_S − s after)

The Jacobian matrix MJ(g) of g, which has N_I N_H + N_H N_S columns and N_S N_p rows, is thus composed of a block-diagonal part (derivatives w.r.t. W^2) and several other blocks (derivatives w.r.t. W^1). Hence the Jacobian J(g) can be rewritten J(g) = | J_1, J_2, ..., J_{N_H} |, after permutations of rows and columns, and using the Hadamard (⊙) and Kronecker (⊗) product notations, each J_h being equal to

    (3)    J_h = [ F(V w^1_h) ⊗ I_{N_S} , [ F'(V w^1_h) ⊙ δ_1, ..., F'(V w^1_h) ⊙ δ_{N_I} ] ⊗ [ w^2_{1h}, ..., w^2_{N_S h} ] ]

where δ_i denotes the i-th column of the input matrix V and I_{N_S} is the identity matrix in dimension N_S.
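The rank claim behind the proof can also be checked numerically on a toy instance; the finite-difference sketch below (with tanh, which is a 1l-function) is only an illustration, not part of the proof.

    import numpy as np

    def g_flat(theta, D, NH, NS, f=np.tanh):
        # Network output F(D W1) W2 flattened to a vector; weights packed in theta.
        NI = D.shape[1]
        W1 = theta[:NI * NH].reshape(NI, NH)
        W2 = theta[NI * NH:].reshape(NH, NS)
        return (f(D @ W1) @ W2).ravel()

    def jacobian_rank(D, NH, NS, eps=1e-6, seed=0):
        # Numerical Jacobian of g at a random weight point, and its rank.
        rng = np.random.default_rng(seed)
        NI = D.shape[1]
        theta = rng.standard_normal(NI * NH + NH * NS)
        base = g_flat(theta, D, NH, NS)
        J = np.empty((base.size, theta.size))
        for k in range(theta.size):
            t = theta.copy()
            t[k] += eps
            J[:, k] = (g_flat(t, D, NH, NS) - base) / eps
        return np.linalg.matrix_rank(J)

    # Np = 6 examples, NI = 3 inputs, NS = 1 output -> upper bound NH = 2*ceil(6/4) = 4
    D = np.random.default_rng(1).standard_normal((6, 3))
    print(jacobian_rank(D, NH=4, NS=1))   # expect 6 = Np*NS, i.e. full row rank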
Our purpose is to prove that there exists a point X = (W^1, W^2) such that the
Jacobian J(g) is non-zero at X, i.e. such that the column vectors of the Jacobian
matrix MJ(g) are linearly independent at X. The proof can be divided in two steps.
First we address the case of a single output unit. Afterwards, this proof can be used
to extend the result to several output units. Since the complete development of both
proofs require a lot of calculations, we only present their sketches below. More details
can be found in [7] .
3.3.1 Case of a single output unit
The proof is based on a linear arrangement of the projections of the column vectors of J_h onto a subspace. This subspace is orthogonal to all the J_i for i < h. We build a vector w^1_h and a scalar w^2_{sh} such that the projected column vectors are an independent family, hence they are independent of the J_i for i < h. Such a construction is recursively applied until h = N_H. We then derive vectors w^1_1, ..., w^1_{N_H} and w^2 such that J(g) is non-zero. The assumption on 1l-functions is essential for proving that the projected column vectors of J_h are independent.
3.3.2 Case of multiple output units
In order to extend the result from a single output to s output units, the usual idea
consists in considering as many subnetworks as the number of output units. From
this point of view, the bound on the hidden units would be N_H = 2 ⌈N_p N_S / (N_I + 1)⌉, which
differs from the result stated in theorem 1. A new direct proof can be developed
(see [7]) and get a better bound: the denominator is increased to N/ + N s .
4 DISCUSSION
The definition of a 1l-function includes both sigmoids and gaussian functions which
are commonly used for multilayer perceptrons and RBF networks, but is not valid
for threshold functions . Figure 2 shows the difference between a sigmoid, which
is a 1l-function, and a saturation which is not a 1l-function. Figures (a) and (b)
represent the span of the output space by the network when the weights are varying ,
i.e. the image of g . For clarity, the network is reduced to 1 hidden unit , 1 input
unit, 1 output unit and 2 input patterns. For a 1l-function, a ball can be extracted
from the output space R^2, onto which the g function is a diffeomorphism. For the saturation, the image of g is reduced to two lines, hence g cannot be onto on a ball of R^2. The assumption on the activation function is thus necessary to prove that the Jacobian is non-zero.
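A quick numerical check of property (1l) for the hyperbolic tangent (values are illustrative):

    import numpy as np

    # For tanh, f'(x) = sech(x)^2; for |a| > 1 the ratio f'(ax)/f'(x) collapses to zero.
    fprime = lambda x: 1.0 / np.cosh(x) ** 2
    a = 2.0
    for x in [2.0, 5.0, 10.0]:
        print(x, fprime(a * x) / fprime(x))
    # ratios of order 2e-2, 5e-5, 2e-9 -- consistent with the limit 0 required by (1l)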
Our bound on the number of hidden units is very similar to Baum's results for
dichotomies and functions from real inputs to binary outputs [1] . Hence the present
result can be seen as an extension of Baum's results to the case of real outputs,
and for a wide family of transfer functions , different from the threshold functions
addressed by Baum and Haussler in [2]. An early result on sigmoid networks has
been stated by Sontag [14]: for a single output and at least two input units, the
number of examples must be twice the number of hidden units. Our upper bound
on the number of hidden units is strictly lower than that (as soon as the number of
input units is more than two). A counterpart of considering real data is that our
results bear little relation to the VC-dimension point of view.
(a) : A saturation function
(b) : A sigmoid function
Figure 2: Positions of output vectors, for given data, when varying network weights
5 CONCLUSION
In this paper, we show that a number of hidden units N_H = 2 ⌈N_p N_S / (N_I + N_S)⌉ is sufficient for a network of 1l-functions to exactly learn a given set of N_p examples in general position. We now discuss some of the practical consequences of this result.
According to this formula, the size of the hidden layer required for exact learning may grow very high if the size of the learning set is large. However, without
a priori knowledge on the degree of redundancy in the learning set, exact learning
is not the right goal in practical cases. Exact learning usually implies overfitting,
especially if the examples are very noisy. Nevertheless, a right point of view could
be to previously reduce the dimension and the size of the learning set by feature
extraction or data analysis as pre-processing. Afterwards, our theoretical result
could be a precious indication for scaling a network to perform exact learning on
this representative learning set, with a good compromise between, bias and variance.
Our bound is more optimistic than the rule-of-thumb N p = lOw derived from the
theory of PAC-learning. In our architecture, the number of weights is w = 2NpNs.
However the proof is not constructive enough to be derived as a learning algorithm,
especially the existence of g(Y) in the neighborhood of g(X) where 9 is a local
diffeomorphism (cf. figure 1). From this construction we can only conclude that
N_H = ⌈N_p N_S / (N_I + N_S)⌉ is necessary and N_H = 2 ⌈N_p N_S / (N_I + N_S)⌉ is sufficient to realize exact learning of N_p examples, from R^{N_I} to R^{N_S}.
The opportunity of using multilayer networks as auto-associative networks and for
data compression can be discussed in the light of these results. Assume that N_S = N_I
and the expression of the number of hidden units is reduced to N H = N p or at
least N_H = N_p / 2. Since N_p ≥ N_I + N_S, the number of hidden units must verify N_H ≥ N_I. Therefore, an architecture of "diabolo" network seems to be precluded
for exact learning of auto-associations. A consequence may be that exact retrieval
from data compression is hopeless by using internal representations of a hidden
layer smaller than the data dimension.
Acknowledgements
This work was supported by European Esprit III Project nO 8556, NeuroCOLT
Working Group. We thank C.S. Poon and J.V. Shah for fruitful discussions.
References
[1] E. B. Baum. On the capabilities of multilayer perceptrons. J. of Complexity,
4:193-215, 1988.
[2] E. B. Baum and D. Haussler. What size net gives valid generalization? Neural
Computation, 1:151- 160, 1989.
[3] E. K. Blum and L. K. Li. Approximation theory and feedforward networks.
Neural Networks, 4(4) :511-516, 1991.
[4] F. M. Coetzee and V. L. Stonick. Topology and geometry of single hidden layer
network, least squares weight solutions. Neural Computation, 7:672-705, 1995.
[5] M. Cosnard, P. Koiran, and H. Paugam-Moisy. Bounds on the number of
units for computing arbitrary dichotomies by multilayer perceptrons. J. of
Complexity, 10:57-63, 1994.
[6] G. Cybenko. Approximation by superpositions of a sigmoidal function. Math.
Control, Signal Systems, 2:303-314, October 1988.
[7] A. Elisseeff and H. Paugam-Moisy. Size of multilayer networks for exact learning: analytic approach. Rapport de recherche 96-16, LIP, July 1996.
[8] K. Funahashi. On the approximate realization of continuous mappings by
neural networks. Neural Networks, 2(3):183- 192, 1989.
[9] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks
are universal approximators. Neural Networks, 2(5):359-366, 1989.
[10] S.-C. Huang and Y.-F. Huang. Bounds on the number of hidden neurones in
multilayer perceptrons. IEEE Trans . Neural Networks, 2:47- 55, 1991.
[11] M. Karpinski and A. Macintyre. Polynomial bounds for vc dimension of sigmoidal neural networks . In 27th ACM Symposium on Theory of Computing,
pages 200-208, 1995.
[12] P. Koiran and E. D. Sontag. Neural networks with quadratic vc dimension. In
Neural Information Processing Systems (NIPS *95), 1995. to appear.
[13] W . Maass. Bounds for the computational power and learning complexity of
analog neural networks. In 25th ACM Symposium on Theory of Computing,
pages 335-344, 1993.
[14] E. D. Sontag. Feedforward nets for interpolation and classification. J. Compo
Syst. Sci., 45:20-48, 1992.
[15] E. D. Sontag. Shattering all sets of k points in "general position" requires (k1)/2 parameters. Technical Report Report 96-01, Rutgers Center for Systems
and Control (SYCON), February 1996.
Spatial Decorrelation in Orientation
Tuned Cortical Cells
Alexander Dimitrov
Department of Mathematics
University of Chicago
Chicago, IL 60637
a-dimitrov@uchicago.edu
Jack D. Cowan
Department of Mathematics
University of Chicago
Chicago, IL 60637
cowan@math.uchicago.edu
Abstract
In this paper we propose a model for the lateral connectivity of
orientation-selective cells in the visual cortex based on information-theoretic considerations. We study the properties of the input signal to the visual cortex and find new statistical structures which
have not been processed in the retino-geniculate pathway. Applying
the idea that the system optimizes the representation of incoming
signals, we derive the lateral connectivity that will achieve this for
a set of local orientation-selective patches, as well as the complete
spatial structure of a layer of such patches. We compare the results
with various physiological measurements.
1 Introduction
In recent years much work has been done on how the structure of the visual system reflects properties of the visual environment (Atick and Redlich 1992; Attneave
1954; Barlow 1989). Based on the statistics of natural scenes compiled and studied
by Field (1987) and Ruderman and Bialek (1993), work was done by Atick and
Redlich (1992) on the assumption that one of the tasks of early vision is to reduce the redundancy of input signals, the results of which agree qualitatively with
numerous physiological and psychophysical experiments. Their ideas were further
strengthened by research suggesting the possibility that such structures develop via
simple correlation-based learning mechanisms (Atick and Redlich 1993; Dong 1994).
As suggested by Atick and Li (1994), further higher-order redundancy reduction
of the luminosity field in the visual processing system is unlikely, since it gives
little benefit in information compression. In this paper we apply similar ideas to a
different input signal which is readily available to the system and whose statistical
properties are lost in the analysis of the luminosity signal. We note that after the
application of the retinal "mexican hat" filter the most obvious salient features
that are left in images are sharp changes in luminosity, for which the filter is not
optimal, i.e. local edges. Such edges have correlations which are very different
from the luminosity autocorrelation of natural images (Field 1987), and have zero
probability measure in visual scenes, so they are lost in the ensemble averages used
to compute the autocorrelation function of natural images. We know that this
signal is projected to a set of direction-sensitive units in VI for each distinct retinal
position, thereby introducing new redundancy in the signal. Thus the necessity for
compression and use of factorial codes arises once again.
Since local edges are defined by sharp changes in the luminosity field, we can use
a derivative operation to pick up the pertinent structure. Indeed, if we look at the
gradient of the luminosity as a vector field, its magnitude at a point is proportional
to the change of luminosity, so that a large magnitude signals the possible presence of
a discontinuity in the luminosity profile. Moreover, in two dimensions, the direction
of the gradient vector is perpendicular to the direction of the possible local edge,
whose presence is given by the magnitude. These properties define a one-to-one
correspondence between large gradients and local edges.
The structure of the network we use reflects what is known about the structure of
V!. We select as our system a layer of direction sensitive cells which are laterally
connected to one another, each receiving input from the previous layer. We assume
that each unit receives as input the directional derivative of the luminosity signal
along the preferred visuotopic axis of the cell. This implies that locally the input to
a cell is proportional to the cosine of the angle between the unit's preferred direction
and the local gradient (edge). Thus each unit receives a broadly tuned signal, with
HWHH of approximately 60°. With this feed-forward structure, the idea that the
system is trying to decorrelate its inputs suggests a way to calculate the lateral
connections that will perform this task. This calculation, and a further study of the
statistical properties of the input is the topic of the paper.
2 Mathematical Model

Let G(x) = (G_1(x), G_2(x)) be the gradient of luminosity at x. Assume that there is a set of N detectors with activity O_i at x, each with a preferred direction n_i. Let the input from the previous layer to each detector be the directional derivative along its preferred direction:

V_i(x) = |Grad(L(x)) · n_i| = |d L(x) / d n_i|.     (1)
There are long range correlations in the inputs to the network due both to the
statistical structure of the natural images and the structure of the input. The
simplest of them are captured in the two-point correlation matrix R_ij(x_1, x_2) = <V_i(x_1) V_j(x_2)>, where the averaging is done across images. Then R is a block matrix, with an N x N matrix at each spatial position (x_1, x_2).
We formulate the problem in terms of a recurrent kernel W, so that

O = W * O + V.     (2)

The biological interpretation of this is that V is the effective input to V1 from the LGN and W specifies the lateral connectivity in V1. This equation describes the steady state of the linear dynamical system Ȯ = -O + W * O + V. The above recurrent system has a solution, for O not an eigenfunction of W, of the form O = (δ - W)^{-1} * V = K * V. This suggests that there is an equivalent feed-forward system with a transfer function K = (δ - W)^{-1}, and we can consider only such systems.
The corresponding feed-forward system is a linear system that acts on the input V(x) to produce an output O(x) = (K · V)(x) ≡ ∫ K(x, y) · V(y) dy. If we use Barlow's redundancy reduction hypothesis (Barlow 1989), this filter should decorrelate the output signal. This is achieved by requiring that

δ(x_1 - x_2) ∝ <O(x_1) ⊗ O(x_2)> = <(K · V)(x_1) ⊗ (K · V)(x_2)>,  i.e.  δ(x_1 - x_2) ∝ K · R · K^T.     (3)

The aim then is to solve (3) for K. Obviously, this is equivalent to K^T · K ∝ R^{-1} (assuming K and R are non-singular), which has a solution K ∝ R^{-1/2}, unique up to a unitary transformation. The corresponding recurrent filter is then

W = δ - ρ R^{1/2}.     (4)
This expression suggests an immediate benefit in the use of lateral kernels by the
system. As (4) shows, the filter does not now require inverting the correlation matrix
and thus is more stable than a feed-forward filter. This also helps preserve the local
structure of the autocorrelator in the optimal filter, while, because of the inversion
process, a feed-forward system will in general produce non-local, non-topographic
solutions.
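As a purely numerical illustration of this equivalence (our own sketch in Python/NumPy, not part of the original model; the matrix R below is a random symmetric positive-definite stand-in for a measured correlation matrix, and ρ is an arbitrary scale factor), one can build K ∝ R^{-1/2} and the recurrent kernel W = δ - ρ R^{1/2} explicitly and check that K decorrelates and that (δ - W)^{-1} reproduces K:

import numpy as np

rng = np.random.default_rng(0)

# Stand-in correlation matrix: random symmetric positive-definite, N x N.
N = 8
A = rng.normal(size=(N, N))
R = A @ A.T + N * np.eye(N)

# Matrix square roots via eigendecomposition (R is symmetric).
evals, evecs = np.linalg.eigh(R)
R_sqrt = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
R_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

rho = 0.1                      # arbitrary positive scale factor
K = R_inv_sqrt / rho           # feed-forward decorrelating filter, K ~ R^{-1/2}
W = np.eye(N) - rho * R_sqrt   # equivalent recurrent kernel, W = I - rho R^{1/2}

# Decorrelation check: K R K^T is proportional to the identity.
print(np.allclose(K @ R @ K.T, np.eye(N) / rho**2))
# Equivalence check: (I - W)^{-1} reproduces K.
print(np.allclose(np.linalg.inv(np.eye(N) - W), K))

Both checks print True; the point is only that the recurrent form needs R^{1/2} rather than an explicit inverse of R.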
To obtain a realistic connectivity structure, we need to explicitly include the effects of noise on the system. The system is then described by O_1 = V + N_1 + M * W * (O_1 + N_2), where N_1 is the input noise and N_2 is the noise generated by individual units in the recurrently connected layer. Similarly to a feed-forward system (Atick and Redlich 1992), we can modify the decorrelation kernel W derived from (2) to M * W. The form of the correction M, which minimizes the effects of noise on the system, is obtained by minimizing the distance between the states of the two systems. If we define χ²(M) = <|O - O_1|²> as the distance function, the solution to ∂χ²(M)/∂M = 0 will give us the appropriate kernel. A solution to this problem is

M * W = δ - (R + N_1² + N_2²) * (ρ R^{1/2} + N_2²)^{-1}.     (5)

We see that it has the correct asymptotics as N_1, N_2 approach zero. The filter behaves well for large N_2, turning mostly into a low-pass filter with large attenuation. It cannot handle large N_1 well and diverges to -∞ proportionally to N_1².
3 Results

3.1 Local Optimal Linear Filter
As a first calculation with this model, consider its implications for the connectivity
between units in a single hypercolumn. This allows for a very simple application
of the theory and does not require any knowledge of the input signal under very
general assumptions.
We assume that direction selective cells receive as input from the previous layer the
projection of the gradient onto their preferred direction. Thus they act as directional
derivatives, so that their response to a signal with the luminosity profile L(x) and no input from other lateral units is V_i(x) = |Grad(L(x)) · n_i| = |d L(x) / d n_i|.

With this assumption the outputs of the edge detectors are correlated. Define a (local) correlation matrix R_ij = <V_i V_j>. By assumption (1), V_k = |a Cos(α - α_k)|, where a and α are random, independent variables denoting the magnitude and direction of the local gradient, and α_k is the preferred angle of the detector. Assuming spatially isotropic local structure for natural scenes, we can calculate the average of R by integrating over a uniform probability measure in α. Then

R_ij = A <|Cos(α - α_i)| |Cos(α - α_j)|>_α ,     (6)

where A = <a²> can be factored because of the assumption of statistical independence. By the homogeneity assumption, R_ij is a function of the relative angle |α_i - α_j| only. This allows us to easily calculate the integral in (6) from its Fourier series. Indeed, in Fourier space R is just the square of the power spectrum of the underlying signal, i.e., cos(α) on [0, π]. Thus we obtain the form of R analytically.
Knowing the local correlations, we can find a recurrent linear filter which decorrelates the outputs after it is applied. This filter is W = δ - ρ R^{1/2} (Sec. 2), unique up to a unitary transformation.
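The angular average in (6) is easy to evaluate numerically. The following short sketch (ours, not code from the paper; N, ρ and the discretisation of α are arbitrary choices, and A is set to 1) builds R_ij for a set of orientation detectors and the corresponding no-noise recurrent filter:

import numpy as np

N = 6                                     # number of orientation detectors
angles = np.arange(N) * np.pi / N         # preferred directions alpha_k
alpha = np.linspace(0.0, 2 * np.pi, 10000, endpoint=False)  # gradient direction

A = 1.0                                   # stands for <a^2>, set to 1 here
resp = np.abs(np.cos(alpha[None, :] - angles[:, None]))      # |cos(alpha - alpha_k)|
R = A * resp @ resp.T / alpha.size        # R_ij = A <|cos||cos|>_alpha

# Recurrent decorrelating filter W = I - rho R^{1/2} (no-noise case).
evals, evecs = np.linalg.eigh(R)
R_sqrt = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
rho = 1.0
W = np.eye(N) - rho * R_sqrt

# W depends only on the relative angle, so every row is a shifted copy of the first.
print(np.round(W[0], 3))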
Figure 1: Local recurrent filter in the presence of noise. The connection strength W depends only on the relative angle θ between units.
If we include noise in the calculation according to (5), we obtain a filter which
depends on the signal to noise ratio of the input level. We model the noise process
here as a set of independent noise processes for each unit, with (N_1)_i being the input noise and (N_2)_i the output noise for unit i. All noise processes are assumed statistically independent. The result for S/N_2 ≈ 3 is shown in Fig. 1. We observe the broadening of the central connections, caused by the need to average local results in order to overcome the noise. It was calculated at a very low N_1 level, since, as
mentioned in Section 2, the filter is unstable with respect to input noise.
With this filter we can directly compare calculations obtained from applying it to a
specific input signal, with physiological measurements of the orientation selectivity
of cells in the cortex. The results of such comparisons are presented in Fig.2, in
which we plot the activity of orientation selective cells in arbitrary units vs stimulus
angle in degrees. We see very good matches with experimental results of Celebrini,
Thorpe, Trotter, and Imbert (1993), Schiller, Finlay, and Volman (1976) and Orban
(1984) . We expect some discrepancies, such as in Figures 2.D and 2.F, which can
be attributed to the threshold nature of real neural units. We see that we can use
the model to classify physiologically distinct cells by the value of the N2 parameter
that describes them. Indeed, since this parameter models the intrinsic noise of a
neural unit, we expect it to differ across populations.
Figure 2: Comparison with experimental data. The activity of orientation selective cells
in arbitrary units is plotted against stimulus angle in degrees. Experimental points are
denoted with circles, calculated result with a solid line. The variation in the forms of
the tuning curves could be accounted for by selecting different noise levels in our noise
model. A - data from cell CAJ4 in Celebrini et al. and fit for N1 = 0.1, N2 = 0.2. B - data from cell CAK2 in Celebrini et al. and fit for N1 = 0.35, N2 = 0.1. C - data from a complex cell from Orban and fit for N1 = 0.3, N2 = 0.45. D - data from a simple cell from Orban and fit for N1 = 1.0, N2 = 0.45. E - data from a simple cell in Schiller et al. and fit for N1 = 0.06, N2 = 0.001. F - data from a simple cell in Schiller et al. and fit for N1 = 15.0, N2 = 0.01.
3.2 Non-Local Optimal Filter
We can perform a similar analysis of the non-local structure of natural images to
design a non-local optimal filter. This time we have a set of detectors V_k(x) = |a(x) Cos(α(x) - kπ/N)| and a correlation function R_ij(x, y) = <V_i(x) V_j(y)>, averaged over natural scenes. We assume that the function is spatially translation invariant and can be represented as R_ij(x, y) = R_ij(x - y). The averaging was done over a set of about 100 different pictures, with 10-20 samples of 256² pixels taken from each picture.
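To make the averaging concrete, here is a minimal sketch (ours; the 64 x 64 random patch and the shift dx are placeholders, and a real estimate would average over many patches from many pictures) of how the detector responses and one spatial block of R_ij can be computed with NumPy:

import numpy as np

def detector_responses(patch, n_orient=8):
    """Responses V_k(x) = |a(x) cos(alpha(x) - k pi / N)| for one image patch."""
    gy, gx = np.gradient(patch.astype(float))
    a = np.hypot(gx, gy)                  # gradient magnitude a(x)
    alpha = np.arctan2(gy, gx)            # gradient direction alpha(x)
    ks = np.arange(n_orient) * np.pi / n_orient
    return np.abs(a[None] * np.cos(alpha[None] - ks[:, None, None]))

rng = np.random.default_rng(1)
patch = rng.normal(size=(64, 64))         # stand-in for a natural-image sample

V = detector_responses(patch)             # shape (n_orient, 64, 64)

# One block of the translation-invariant correlation R_ij(dx): responses at each
# position correlated with those dx pixels to the right, averaged over the patch.
dx = 3
n_pos = V.shape[1] * (V.shape[2] - dx)
R_block = np.einsum('iyx,jyx->ij', V[:, :, :-dx], V[:, :, dx:]) / n_pos
print(R_block.shape)                      # (8, 8)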
The structure of the correlation matrix depends both on the autocorrelator of the
gradient field and the structure of the detectors, which are correlated. Obviously
the fact that the output units compute |a(x) Cos(α(x) - kπ/N)| creates many local
correlations between neighboring units. Any non-local structure in the detector set
is due to a similar structure, present in the gradient field autocorrelator.
The structure of the translation-invariant correlation matrix R(x) is shown in Fig. 3A. This can be interpreted as the correlation between the input to the center hypercolumn and the input to the rest of the hypercolumns. The result of the complete
model (5) can be seen in Fig.3B. Since the filter is also assumed to be translation
invariant, the pictures can be interpreted as the connectivity of the center hypercolumn with the rest of the network. This is seen to be concentrated near the diagonal ,
Figure 3: A. The autocorrelation function of a set with 8 detectors. Dark represents high correlation, light low correlation. The sets are indexed by the preferred angles α_i, α_j in units of π/8, and each R_ij has spatial structure, which is represented as a 32 x 32 square. B. The lateral connectivity for the central horizontal selective unit with neighboring horizontal (1) and π/4 (2) selective units. Note the anisotropic connectivity and the rotation of the connectivity axis in the second picture.
and weak in the two adjacent bands, which represent connections to edge detectors
with a perpendicular preferred direction. The noise minimizing filter is a low pass
filter, as expected, and thus decreases the high frequency component of the power
spectrum of the respective decorrelating filter.
4 Conclusions and Discussion
We have shown that properties of orientation selective cells in the visual cortex can
be partially described by some very simple linear systems analysis. Using this we
obtain results which are in very good agreement with physiological and anatomical
data of single-cell recordings and imaging. We can use the parameters of the model
to classify functionally and structurally differing cells in the visual cortex.
We achieved this by using a recurrent network as the underlying model. This was
chosen for several reasons. One is that we tried to give the model biological plausibility and recurrency is well established on the cortical level. Another related
heuristic argument is that although there exists a feed-forward network with equivalent properties, as shown in Section 2, such a network will require an additional
layer of cells, while the recurrent model allows both for feed-forward processing (the
input to our model) as well as manipulation of the output of that (the decorrelation
procedure in our model). Finally, while a feed-forward network needs large weights
to amplify the signal, a recurrent network is able to achieve very high gains on the
input signal with relatively small weights by utilizing special architecture. As can
be seen from our equivalence model, K = (δ - W)^{-1}, so if W is so constructed as to have eigenvalues close to 1, it will produce enormous amplification.
Our work is based on previous suggestions relating the structure of the visual environment to the structure of the visual pathway. It was thought before (Atick and
Li 1994) that this particular relation can describe only early visual pathways, but
is insufficient to account for the structure of the striate cortex. We show here that
redundancy reduction is still sufficient to describe many of the complexities of the
visual cortex, thus strengthening the possibility that this is a basic building principle for the visual system and one should anticipate its appearance in later regions
of the latter.
What is even more intriguing is the possibility that this method can account for
the structure of other sensory pathways and cortices. We know e.g. that the somatosensory pathway and cortex are similar to the visual ones, because of the similar
environments that they encounter (luminosity, edges and textures have analogies in
somesthesia). Similar analogies may be expected for the auditory pathway.
We expect even better results if we consider a more realistic non-linear model for
the neural units. In fact this improves tremendously the information-processing
abilities of a bounded system, since it captures higher order correlations in the
signal and allows for true minimization of the mutual information in the system ,
rather than just decorrelating. Very promising results in this direction have been
recently described by Bell and Sejnowski (1996) and Lin and Cowan (1997) and we
intend to consider the implications for our model.
Acknowledgements
Supported in part by Grant #96-24 from the James S. McDonnell Foundation.
References
Atick, J. J. and Z. Li (1994). Towards a theory of the striate cortex. Neural
Computation 6, 127-146.
Atick, J. J. and N. N. Redlich (1992). What does the retina know about natural
scenes? Neural Computation 4, 196-210.
Atick, J . J. and N. N. Redlich (1993). Convergent algorithm for sensory receptive
field developement. Neural Computation 5, 45-60.
Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review 61, 183-193.
Barlow, H. B. (1989). Unsupervised learning. Neural Computation 1,295-311.
Bell, A. T. and T. J. Sejnowski (1996). The "independent components" of natural
scenes are edge filters. Vision Research (submitted).
Celebrini, S., S. Thorpe, Y. Trotter, and M. Imbert (1993). Dynamics of orientation coding in area VI of the awake primate. Visual Neuroscience 10,
811-825.
Dong, D. (1994). Associative decorrelation dynamics: a theory of self-organization and optimization in feedback networks. Volume 7 of Advances in
Neural Information Processing Systems, pp. 925-932. The MIT Press.
Field, D. J. (1987). Relations between the statistics of natural images and the
response properties of cortical cells. J. Opt. Soc. Am. 4, 2379-2394.
Lin, J . K. and J. D. Cowan (1997). Faithful representation of separable input
distributions. Neural Computation, (to appear).
Orban, G. A. (1984). Neuronal Operations in the Visual Cortex. Springer-Verlag,
Berlin.
Ruderman , D. L. and W. Bialek (1993). Statistics of natural images: Scaling in
the woods. In J. D. Cowan, G. Tesauro, and J. Alspector (Eds.), Advances
in Neural Information Processing Systems, Volume 6. Morgan Kaufman, San
Mateo, CA.
Schiller, P., B. Finlay, and S. Volman (1976). Quantitative studies of single-cell properties in monkey striate cortex. II. Orientation specificity and ocular
dominance. J. Neuroph. 39(6), 1320-1333.
334 | 1,305 | Rapid Visual Processing using Spike Asynchrony
Simon J. Thorpe & Jacques Gautrais
Centre de Recherche Cerveau & Cognition
F-31062 Toulouse
France
email thorpe@cerco.ups-tlse.fr
Abstract
We have investigated the possibility that rapid processing in the visual
system could be achieved by using the order of firing in different
neurones as a code, rather than more conventional firing rate schemes.
Using SPIKENET, a neural net simulator based on integrate-and-fire
neurones and in which neurones in the input layer function as analogto-delay converters, we have modeled the initial stages of visual
processing. Initial results are extremely promising. Even with activity
in retinal output cells limited to one spike per neuron per image
(effectively ruling out any form of rate coding), sophisticated processing
based on asynchronous activation was nonetheless possible.
1. INTRODUCTION
We recently demonstrated that the human visual system can process previously unseen
natural images in under 150 ms (Thorpe et aI, 1996). Such data, together with previous
studies on processing speeds in the primate visual system (see Thorpe & Imbert, 1989)
put severe constraints on models of visual processing. For example, temporal lobe
neurones respond selectively to faces only 80-100 ms after stimulus onset (Oram &
Perrett, 1992; Rolls & Tovee, 1994). To reach the temporal lobe in this time,
information from the retina has to pass through roughly ten processing stages (see Fig. 1). If one takes into account the surprisingly slow conduction velocities of intracortical axons (< 1 m.s-1, see Nowak & Bullier, 1997) it appears that the computation time within any cortical stage will be as little as 5-10 ms. Given that most cortical neurones will be firing below 100 spikes.s-1, it is difficult to escape the conclusion that processing can be achieved with only one spike per neuron.
[Figure 1 depicts the pathway Retina → LGN → V1 → V2 → V4 → PIT → AIT, with approximate response latencies increasing from roughly 30-50 ms to 80-100 ms along the hierarchy.]
Figure 1 : Approximate latencies for neurones in different stages of the visual primate
visual system (see Thorpe & Imbert, 1989; Nowak & Bullier, 1997).
Such constraints pose major problems for conventional firing rate codes since at least two
spikes are needed to estimate a neuron's instantaneous firing rate. While it is possible to
use the number of spikes generated by a population of cells to code analog values, this
turns out to be expensive, since to code n analog values, one needs n-1 neurones.
Furthermore, the roughly Poisson nature of spike generation would also seriously limit
the amount of information that can be transmitted. Even at 100 spikes.s-1, there is a
roughly 35% chance that the neuron will generate no spike at all within a particular 10
ms window, again forcing the system to use large numbers of redundant cells.
An alternative is to use information encoded in the temporal pattern of firing produced in
response to transient stimuli (Mainen & Sejnowski, 1995). In particular, one can treat
neurones not as analog to frequency converters (as is normally the case) but rather as
analog to delay converters (Thorpe, 1990, 1994). The idea is very simple and uses the fact
that the time taken for an integrate-and-fire neuron to reach threshold depends on input
strength. Thus, in response to an intensity profile, the 6 neurones in figure 2 will tend to
fire in a particular order - the most strongly activated cells firing first. Since each neuron
fires one and only one spike, the firing rates of the cells contain no information, but there
is information in the order in which the cells fire (see also Hopfield, 1995).
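A toy version of this analog-to-delay conversion (our own sketch, not code from SPIKENET; the intensities and the membrane time constant are invented numbers) shows how an intensity profile is turned into a firing order by simple integrate-and-fire dynamics:

import numpy as np

def first_spike_latency(inputs, threshold=1.0, tau=10.0):
    # Time for a leaky integrator dV/dt = (-V + I)/tau, starting at V = 0,
    # to reach threshold; inputs weaker than the threshold never fire.
    inputs = np.asarray(inputs, dtype=float)
    lat = np.full(inputs.shape, np.inf)
    ok = inputs > threshold
    lat[ok] = -tau * np.log(1.0 - threshold / inputs[ok])   # analytic crossing time
    return lat

# Six units A..F with different input intensities (cf. Figure 2).
intensities = {'A': 2.5, 'B': 3.0, 'C': 1.8, 'D': 1.2, 'E': 1.5, 'F': 2.0}
lat = first_spike_latency(list(intensities.values()))
order = [name for name, _ in sorted(zip(intensities, lat), key=lambda p: p[1])]
print(order)   # ['B', 'A', 'F', 'C', 'E', 'D'] -- most strongly driven fire first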
Figure 2 : An example of spike order
coding. Because of the intrinsic properties
of neurones the most strongly activated
neurones will fire first. The sequence
B>A>F>C>E>D is one of the 720 (i.e.
6!) possible orders in which the 6 neurones
can fire, each of which reflects a different
intensity profile. Note that such a code can
be used to send information very quickly.
To test the plausibility of using spike order rather than firing rate as a code, we have
developed a neural network simulator "SPIKENET" and used it to model the initial stages
of visual processing. Initial results are very encouraging and demonstrate that
sophisticated visual processing can indeed be achieved in a visual system in which only
one spike per neuron is available.
2. SPIKENET SIMULATIONS
SPIKENET has been developed in order to simulate the activity of large numbers of
integrate-and-fire neurones. The basic neuronal elements are simple, and involve only a
limited number of parameters, namely, an activation level, a threshold and a membrane
time constant. The basic propagation mechanism involves processing the list of neurones
that fired during the previous time step. For each spiking neuron, we add a synaptic
weight value to each of its targets, and, if the target neuron's activation level exceeds its
threshold, we add it to the list of spiking neurones for the next time step and reset its
activation level by subtracting the threshold value. When a target neuron is affected for
the first time on any particular time step, its activation level is recalculated to simulate an
exponential decay over time. One of the great advantages of this kind of "event-driven"
simulator is its computational efficiency - even very large networks of neurones can be
simulated because no processor time is wasted on inactive neurones.
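A bare-bones version of such an event-driven update (our sketch, not the actual SPIKENET code; it omits the exponential decay of the activation and uses toy weights) makes the bookkeeping explicit: only the neurones that spiked on the previous step are visited, weights are added to their targets, and neurones crossing threshold have the threshold subtracted and are queued for the next step.

import numpy as np

def propagate_step(spiking, weights, activation, threshold=1.0):
    # spiking    : indices of neurones that fired on the previous time step
    # weights    : dict  source index -> list of (target index, synaptic weight)
    # activation : 1-D array of activation levels, updated in place
    next_spiking = []
    for src in spiking:                       # only active neurones cost any work
        for tgt, w in weights.get(src, []):
            activation[tgt] += w
            if activation[tgt] >= threshold and tgt not in next_spiking:
                activation[tgt] -= threshold  # reset by subtracting the threshold
                next_spiking.append(tgt)
    return next_spiking

# Tiny example: neurone 0 drives 1 and 2; only neurone 1 receives enough to fire.
act = np.zeros(3)
w = {0: [(1, 1.2), (2, 0.4)]}
print(propagate_step([0], w, act))   # [1]
print(act)                           # roughly [0., 0.2, 0.4]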
2.1 ARCHITECTURE
As an initial test of the possibility of single spike processing, we simulated the
propagation of activity in a visual system architecture with three levels (see Figure 3).
Starting from a gray-scale image (180 x 214 pixels) we calculate the levels of activation
in two retinal maps, one corresponding to ON-center retinal ganglion cells, the other to
OFF-center cells. This essentially corresponds to convolving the image with two
Mexican-hat type operators. However, unlike more classic neural net models, these
activation levels are not used to determine a continuous output value for each neuron, nor
to calculate a firing rate. Instead, we treat the cells as analog-to-delay converters and
calculate at which time step each retinal unit will fire. Because of their receptive field
organization, cells which fire at the shortest latencies will correspond to regions in the
image where the local contrast is high. Note, however, that each retinal ganglion cell will
fire once and once only. While this is clearly not physiologically realistic (normally, cells
firing at a short latencies go on to fire further spikes at short intervals) our aim was to see
what sort of processing can be achieved in the absence of rate coding.
The ON- and OFF-center cells each make excitatory connections to a large number of
cortical maps in the second level of the network. Each map contains neurones with a
different pattern of afferent connections which results in orientation and spatial frequency
selectivity similar to that described for simple-type neurones in striate cortex. In these
simulations we used 32 different filters corresponding to 8 different orientations (each separated by 45°) and four different scales or spatial frequencies. This is functionally equivalent to having one single cortical map (equivalent to area V1) in which each point
in visual space corresponds to a hypercolumn containing a complete set of orientation and
spatial frequency tuned filters.
Units in the third layer receive weighted inputs from all the simple units corresponding to
a particular region of space with the same orientation preference and thus roughly
correspond to complex cells in area V1.
[Figure: input image of 180 x 214 pixels; Layer 1 - ON- and OFF-center cells (about 77,000 units); Layer 3 - orientation and spatial frequency tuned complex cells (about 205,000 units).]
Figure 3 : Architecture used in the present simulations
One unusual feature of the propagation process used in SPIKENET is that the postsynaptic effect of activating a synapse is not fixed, but depends on how many inputs have
already been activated. Thus, the earliest firing cells produce a maximal post-synaptic
effect (100%), but those which fire later produce less and less response. Specifically, the
sensitivity of the post-synaptic neuron decreases by a fixed percentage each time one of its
inputs fires. The phenomenon is somewhat similar to the sorts of activity-dependent
synaptic depression described recently by Markram & Tsodyks (1996) and others, but
differs in that the depression affects all the inputs to a particular neuron. The net result is
to make the post-synaptic cell sensitive to the order in which its inputs are activated.
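In effect each afferent's contribution is scaled by mod^r, where r is the rank at which it fires and mod is the fixed attenuation factor. A hedged sketch (ours; the weights and the value mod = 0.9 are arbitrary, not figures from the paper):

import numpy as np

def rank_order_drive(weights, firing_order, mod=0.9):
    # Total input to a cell whose sensitivity shrinks by a factor `mod`
    # each time one of its afferents fires.
    sensitivity = mod ** np.arange(len(firing_order))
    return float(np.sum(weights[np.asarray(firing_order)] * sensitivity))

w = np.array([0.5, 0.3, 0.2])
# The same three inputs delivered in two different orders:
print(rank_order_drive(w, [0, 1, 2]))   # strongest input first -> largest drive (0.932)
print(rank_order_drive(w, [2, 1, 0]))   # strongest input last  -> smaller drive (0.875)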
2.2 SIMULATION RESULTS
When a new image is presented to the network, spikes are generated asynchronously in
the ON- and OFF-center cells of the retina in such a way that information about regions
of the image with high local contrast (i.e. where there are contours present) are sent to the
cortex first. Progressively, neurons in the second layer become more and more excited,
and, after a variable number of time steps, the first cells in the second layer will start to
reach threshold and fire. Note that, as in the first layer, the earliest firing units will be
those for whom the pattern of input activation best matches their receptive field structure.
[Figure: activity snapshots at 40, 45, 50 and 80 ms for Layer 1 ON-center cells, Layer 2 simple cells tuned to 45°, and Layer 3 complex cells tuned to 45°.]
Figure 4 : Development of activity in 3 of the maps
Figure 4 illustrates this process for just three maps. The top row shows the location of
units in the ON-center retinal map that have fired after various delays. After 40 msec, the
main outlines of the figure can be seen but progressively more details are seen after 45
and then 50 ms. Note that the representation used here uses pixel intensity to code the
order in which the cells have fired - bright white spots correspond to places in the image
where the cells fired earliest. In the final frame (taken at 80 ms) the vast majority of ONcenter cells have already fired and the resulting image is quite similar to a high spatial
frequency filtered version of the original image.
The middle row of images shows activity in one of the second level maps - the one
corresponding to medium spatial frequency components oriented at 45?. Note that in the
first time slice (40 ms) very few cells have fired, but that the proportion increases
progressively over the next 10 or so milliseconds. However, even at the end of the
propagation process, the proportion of cells that have actually fired remains low. Finally,
the lowest row shows activity in the corresponding third layer map - again corresponding
to contours oriented at 45?, but this time with less precise position specificity as a result
of the grouping process.
Figure 5 plots the total number of spikes occurring per millisecond in each of the three
layers during the first 100 ms following the onset of processing. It is perhaps not
surprising that the onset of firing occurs later for layers 2 and 3. However, there is a huge
amount of overlap in the onset latencies of cells in the three layers, and indeed, it is
doubtful whether there would be any systematic differences in mean onset latency between
the three layers.
[Figure: spikes per ms (0-1250) as a function of time (0-100 ms) for ON-centre cells, simple cells and complex cells.]
Figure 5 : Amount of activity measured in spikes/ms for the three layers of neurones as a
function of time
But perhaps one of the most striking features of these simulations is the way in which
the onset latency of cells can be seen to vary with the stimulus. The small number of
cells in each layer which fire early are in fact very special because only the most
optimally activated cells will fire at such short latencies. The implications of this effect
for visual processing are far reaching because it means that the earliest information
arriving at later processing stages will be particularly informative because the cells
involved are very unambiguous. Interestingly, such changes in onset latency have been
observed experimentally in neurones in area V1 of the awake primate in response to
changes in orientation. In these experiments it was shown that shifting the orientation of
a grating away from a neuron's preferred orientation could result in changes in not only
the firing rate of the cell, but also increases in onset latency of as much as 20-30 ms
(Celebrini, Thorpe, Trotter & Imbert, 1993).
3. CONCLUSIONS
A number of points can be made on the basis of these results. Perhaps the most
important is that visual processing can indeed be performed under conditions in which
spike frequency coding is effectively ruled out. Clearly, under normal conditions, neurones
in the visual system that respond to a visual input will almost invariably generate more
than one spike. However, as we have argued previously, processing in the visual system
is so fast that most cells will not have time to generate more than one spike before
processing in later stages has to be initiated. The present results indicate that the use of
temporal order coding may provide a key to understanding this remarkable efficiency.
The simulations presented here are clearly very limited, but we are currently looking at
spike propagation in more complex architectures that include extensive horizontal
connections between neurones in a particular layer as well as additional layers of
processing. As an example, we have recently developed an application capable of digit
recognition. SPIKENET is well suited for such large scale simulations because of the
event-driven nature of the propagation process. For instance, the propagation presented
here, which involved roughly 700 000 neurones and over 35 million connections, took
roughly 15 seconds on a 150 MHz PowerMac, and even faster simulations are possible
using parallel processing. With this is view we have developed a version of SPIKENET
that uses PVM (Parallel Virtual Machine) to run on a cluster of workstations.
References
Celebrini S., Thorpe S., Trotter Y. & Imbert M. (1993). Dynamics of orientation coding in area V1 of the awake primate. Visual Neuroscience 10, 811-825.
Hopfield J. J. (1995). Pattern recognition computation using action potential timing for
stimulus representation . Nature , 376, 33-36.
Mainen Z. F. & Sejnowski T. J. (1995). Reliability of spike timing in neocortical
neurons Science, 268, 1503-6.
Markram H. & Tsodyks M . (1996) Redistribution of synaptic efficacy between
neocortical pyramidal neurons. Nature, 382,807-810
Nowak L.G. & Bullier J (1997) The timing of information transfer in the visual system.
In Kaas J., Rocklund K. & Peters A. (eds). Extrastriate Cortex in Primates (in press).
Plenum Press.
Oram M. W. & Perrett D. I. (1992). Time course of neural responses discriminating
different views of the face and head Journal of Neurophysiology, 68, 70-84.
Rolls E. T. & Tovee M. J. (1994). Processing speed in the cerebral cortex and the
neurophysiology of visual masking Proc R Soc Lond B Bioi Sci, 257,9-15.
Thorpe S., Fize D. & Marlot C. (1996). Speed of processing in the human visual system
Nature, 381, 520-522.
Thorpe S. J. (1990). Spike arrival times: A highly efficient coding scheme for neural
networks. In R. Eckmiller, G. Hartman & G. Hauske (Eds.), Parallel processing in neural
systems (pp. 91-94). North-Holland: Elsevier. Reprinted in H. Gutfreund & G. Toulouse
(1994) , Biology and computation: A physicist's choice. Singapour: World Scientific.
Thorpe S. J. & Imbert M. (1989). Biological constraints on connectionist models. In R.
Pfeifer, Z. Schreter, F. Fogelman-Soulie & L. Steels (Eds.), Connectionism in
Perspective. (pp. 63-92). Amsterdam: Elsevier.
335 | 1,306 | Practical confidence and prediction
intervals
Tom Heskes
RWCP Novel Functions SNN Laboratory; University of Nijmegen
Geert Grooteplein 21, 6525 EZ Nijmegen, The Netherlands
tom@mbfys.kun.nl
Abstract
We propose a new method to compute prediction intervals. Especially for small data sets the width of a prediction interval does not
only depend on the variance of the target distribution, but also on
the accuracy of our estimator of the mean of the target, i.e., on the
width of the confidence interval. The confidence interval follows
from the variation in an ensemble of neural networks, each of them
trained and stopped on bootstrap replicates of the original data set.
A second improvement is the use of the residuals on validation patterns instead of on training patterns for estimation of the variance
of the target distribution. As illustrated on a synthetic example,
our method is better than existing methods with regard to extrapolation and interpolation in data regimes with a limited amount of
data, and yields prediction intervals which actual confidence levels
are closer to the desired confidence levels.
1 STATISTICAL INTERVALS
In this paper we will consider feedforward neural networks for regression tasks:
estimating an underlying mathematical function between input and output variables
based on a finite number of data points possibly corrupted by noise. We are given a set of P_data pairs {x^μ, t^μ} which are assumed to be generated according to
t(x) = f(x) + ξ(x),     (1)

where ξ(x) denotes noise with zero mean. Straightforwardly trained on such a regression task, the output of a network o(x) given a new input vector x can be
RWCP: Real World Computing Partnership; SNN: Foundation for Neural Networks.
interpreted as an estimate of the regression f(x), i.e., of the mean of the target distribution given input x. Sometimes this is all we are interested in: a reliable estimate of the regression f(x). In many applications, however, it is important to quantify the accuracy of our statements. For regression problems we can distinguish two different aspects: the accuracy of our estimate of the true regression and the accuracy of our estimate with respect to the observed output. Confidence intervals deal with the first aspect, i.e., consider the distribution of the quantity f(x) - o(x), prediction intervals with the latter, i.e., treat the quantity t(x) - o(x). We see from

t(x) - o(x) = [f(x) - o(x)] + ξ(x),     (2)

that a prediction interval necessarily encloses the corresponding confidence interval.
In [7] a method somewhat similar to ours is introduced to estimate both the mean
and the variance of the target probability distribution. It is based on the assumption
that there is a sufficiently large data set, i.e., that there is no risk of overfitting and
that the neural network finds the correct regression. In practical applications with
limited data sets such assumptions are too strict. In this paper we will propose
a new method which estimates the inaccuracy of the estimator through bootstrap
resampling and corrects for the tendency to overfit by considering the residuals on
validation patterns rather than those on training patterns.
2 BOOTSTRAPPING AND EARLY STOPPING
Bootstrapping [3] is based on the idea that the available data set is nothing but
a particular realization of some unknown probability distribution . Instead of sampling over the "true" probability distribution , which is obviously impossible, one
defines an empirical distribution . With so-called naive bootstrapping the empirical
distribution is a sum of delta peaks on the available data points, each with probability content l/Pdata . A bootstrap sample is a collection of Pdata patterns drawn with
replacement from this empirical probability distribution. This bootstrap sample is
nothing but our training set and all patterns that do not occur in the training set
are by definition part of the validation set . For large Pdata, the probability that a
pattern becomes part of the validation set is (1 - l/Pdata)Pdata ~ lie ~ 0.37.
When training a neural network on a particular bootstrap sample, the weights are
adjusted in order to minimize the error on the training data. Training is stopped
when the error on the validation data starts to increase. This so-called early stopping procedure is a popular strategy to prevent overfitting in neural networks and
can be viewed as an alternative to regularization techniques such as weight decay.
In this context bootstrapping is just a procedure to generate subdivisions in training
and validation set similar to k-fold cross-validation or subsampling.
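A minimal sketch of this naive bootstrap split (ours, not the author's code) in Python; for large data sets the validation fraction comes out close to 1/e, as stated above:

import numpy as np

def bootstrap_split(p_data, rng):
    # One bootstrap training sample (drawn with replacement); the patterns
    # that were never drawn form the corresponding validation set.
    train_idx = rng.integers(0, p_data, size=p_data)
    val_idx = np.setdiff1d(np.arange(p_data), train_idx)
    return train_idx, val_idx

rng = np.random.default_rng(0)
train_idx, val_idx = bootstrap_split(1000, rng)
print(len(val_idx) / 1000)   # close to 1/e ~ 0.37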
On each of the n_run bootstrap replicates we train and stop a single neural network. The output of network i on input vector x^μ is written o_i(x^μ) ≡ o_i^μ. As "the" estimate of our ensemble of networks for the regression f(x) we take the average output¹

m(x) ≡ (1/n_run) Σ_{i=1}^{n_run} o_i(x).
¹This is a so-called "bagged" estimator [2]. In [5] it is shown that a proper balancing of the network outputs can yield even better results.
3 CONFIDENCE INTERVALS
Confidence intervals provide a way to quantify our confidence in the estimate m(x) of the regression f(x), i.e., we have to consider the probability distribution P(f(x)|m(x)) that the true regression is f(x) given our estimate m(x). Our line of reasoning goes as follows (see also [8]).
We assume that our ensemble of neural networks yields a more or less unbiased estimate for f(x), i.e., the distribution P(f(x)|m(x)) is centered around m(x). The
truth is that neural networks are biased estimators. For example, neural networks
trained on a finite number of examples will always have a tendency (as almost
any other model) to oversmooth a sharp peak in the data. This introduces a bias,
which, to arrive at asymptotically correct confidence intervals, should be taken
into account. However, if it were possible to compute such a bias correction,
one should do it in the first place to arrive at a better estimator. Our working
hypothesis here is that the bias component of the confidence intervals is negligible
in comparison with the variance component.
There do exist methods that claim to give confidence intervals that are "second-order correct", i.e., up to and including terms of order 1/P_data^{3/2} (see e.g. the discussion after [3]). Since we do not know how to handle the bias component anyway, such precise confidence intervals, which require a tremendous amount of bootstrap samples, are too ambitious for our purposes. First-order correct intervals up to and including terms of order 1/P_data are always symmetric and can be derived by assuming a Gaussian distribution P(f(x)|m(x)).
The variance of this distribution can be estimated from the variance in the outputs of the n_run networks:

σ²(x) ≈ (1/(n_run - 1)) Σ_{i=1}^{n_run} [o_i(x) - m(x)]².     (3)
This is the crux of the bootstrap method (see e.g. [3]). Since the distribution P(f(x)|m(x)) is a Gaussian, so is the "inverse" distribution P(m(x)|f(x)) to find the regression m(x) by randomly drawing data sets consisting of P_data data points according to the prescription (1). Not knowing the true distribution of inputs and corresponding targets², the best we can do is to define the empirical distribution as explained before and estimate P(m(x)|f(x)) from the distribution P(o(x)|m(x)). This then yields the estimate (3).
So, following this bootstrap procedure we arrive at the confidence intervals

m(x) - c_confidence σ(x) ≤ f(x) ≤ m(x) + c_confidence σ(x),

where c_confidence depends on the desired confidence level 1 - α. The factors c_confidence can be taken from a table with the percentage points of the Student's t-distribution with number of degrees of freedom equal to the number of bootstrap runs n_run. A more direct alternative is to choose c_confidence such that for no more than 100α% of all n_run × P_data network predictions |o_i^μ - m^μ| ≥ c_confidence σ^μ.
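In code the confidence interval follows directly from the spread of the ensemble. A sketch (ours; the list `networks` of callables is a stand-in for the n_run bootstrap-trained and early-stopped networks, and SciPy is used only for the Student-t factor):

import numpy as np
from scipy import stats

def confidence_interval(x, networks, alpha=0.32):
    outputs = np.stack([net(x) for net in networks])   # shape (n_run, n_points)
    n_run = outputs.shape[0]
    m = outputs.mean(axis=0)                           # bagged estimate m(x)
    sigma = outputs.std(axis=0, ddof=1)                # square root of eq. (3)
    c = stats.t.ppf(1.0 - alpha / 2.0, df=n_run)       # Student-t percentage point
    return m - c * sigma, m + c * sigma

# Toy ensemble: "networks" that differ only by a small random offset.
rng = np.random.default_rng(0)
networks = [lambda x, b=rng.normal(scale=0.05): np.sin(x) + b for _ in range(25)]
lo, hi = confidence_interval(np.linspace(-1.0, 1.0, 5), networks)
print(np.round(hi - lo, 3))   # widths of the confidence intervals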
²In this paper we assume that both the inputs and the outputs are stochastic. For the
case of deterministic input variables other bootstrapping techniques (see e.g. [4]) are more
appropriate, since the statistical intervals resulting from naive bootstrapping may be too
conservative.
4 PREDICTION INTERVALS
Confidence intervals deal with the accuracy of our prediction of the regression, i.e., of the mean of the target probability distribution. Prediction intervals consider the accuracy with which we can predict the targets themselves, i.e., they are based on estimates of the distribution P(t(x)|m(x)). We propose the following method.
The two noise components f(x) - m(x) and ξ(x) in (2) are independent. The variance of the first component has been estimated in our bootstrap procedure to arrive at confidence intervals. The remaining task is to estimate the noise inherent to the regression problem. We assume that this noise is more or less Gaussian such that it again suffices to compute its variance, which may however depend on the input x. In mathematical symbols,

χ²(x) ≡ <ξ²(x)>.
Of course, we are interested in prediction intervals for new points x for which we do not know the targets t. Suppose that we had left aside a set of test patterns {x^μ, t^μ} that we had never used for training nor for validating our neural networks. Then we could try and estimate a model χ²(x) to fit the remaining residuals

[t^μ - m(x^μ)]² - σ²(x^μ),     (4)

using minus the loglikelihood as the error measure:

E = Σ_μ { ([t^μ - m(x^μ)]² - σ²(x^μ)) / (2 χ²(x^μ)) + (1/2) log χ²(x^μ) }.     (5)
Of course, leaving out these test patterns is a waste of data and luckily our bootstrap procedure offers an alternative. Each pattern is in about 37% of all bootstrap runs not part of the training set. Let us write q_i^μ = 1 if pattern μ is in the validation set of run i and q_i^μ = 0 otherwise. If we, for each pattern μ, use the average

m_validation(x^μ) = Σ_{i=1}^{n_run} q_i^μ o_i^μ / Σ_{i=1}^{n_run} q_i^μ

instead of the average m(x^μ), we get as close as possible to an unbiased estimate for the residual on independent test patterns as we can, without wasting any training data. So, summarizing, we suggest to find a function χ(x) that minimizes the error (5), yet not by leaving out test patterns, which would be a waste of data, nor by straightforwardly using the training data, which would underestimate the error, but by exploiting the information about the residuals on the validation patterns.
Once we have found the function χ(x), we can compute for any x both the mean m(x) and the deviation s(x), which are combined in the prediction interval

m(x) - c_prediction s(x) ≤ t(x) ≤ m(x) + c_prediction s(x).

Again, the factor c_prediction can be found in a Student's t-table or chosen such that for no more than 100α% of all P_data patterns |t^μ - m_validation(x^μ)| ≥ c_prediction s(x^μ).
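Putting the pieces together, the following sketch (ours; the network outputs, targets and the mask q are random placeholders, χ²(x) is crudely taken to be a single constant instead of the fitted model, and the two variance contributions are assumed to add, s²(x) = σ²(x) + χ²(x)) shows how the validation-based residuals and the resulting prediction intervals are computed:

import numpy as np

rng = np.random.default_rng(0)

# Placeholders: outputs o_i(x^mu) of n_run networks on p_data training patterns,
# targets t^mu, and the mask q[i, mu] = 1 if pattern mu was in run i's validation set.
n_run, p_data = 25, 50
outputs = rng.normal(size=(n_run, p_data))
targets = rng.normal(size=p_data)
q = (rng.random((n_run, p_data)) < 0.37).astype(float)
q[:, q.sum(axis=0) == 0] = 1.0                       # ensure every pattern is covered

# Validation-based ensemble mean and estimator variance per pattern.
m_val = (q * outputs).sum(axis=0) / q.sum(axis=0)
sigma2 = outputs.var(axis=0, ddof=1)

# Residuals that the model chi^2(x) should be fitted to (cf. eq. (4));
# here the crudest possible model: a single constant.
residuals = (targets - m_val) ** 2 - sigma2
chi2 = max(residuals.mean(), 0.0)

# Prediction interval with s^2(x) = sigma^2(x) + chi^2(x).
c_pred = 1.0                                         # pick from a t-table or by coverage
s = np.sqrt(sigma2 + chi2)
lower, upper = m_val - c_pred * s, m_val + c_pred * s
print(lower[:3], upper[:3])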
The function χ²(x) may be modelled by a separate neural network, similar to the
method proposed in [7] with an exponential instead of a linear transfer function for
the output unit to ensure that the variance is always positive.
Figure 1: Prediction intervals for a synthetic problem. (a) Training set (crosses),
true regression (solid line), and network prediction (dashed line). (b) Validation
residuals (crosses), training residuals (circles), true variance (solid line), estimated
variance based on validation residuals (dashed line) and based on training residuals
(dash-dotted line). (c) Width of standard error bars for the more advanced method
(dashed line), the simpler procedure (dash-dotted line) and what it should be (solid
line). (d) Prediction intervals (solid line) , network prediction (dashed line), and
1000 test points (dots) .
5 ILLUSTRATION
We consider a synthetic problem similar to the one used in [7]. With this example we will demonstrate the desirability to incorporate the inaccuracy of the regression estimator in the prediction intervals. Inputs x are drawn from the interval [-1, 1] with probability density p(x) = |x|, i.e., more examples are drawn at the boundary than in the middle. Targets t are generated according to

t = sin(πx) cos(5πx/4) + ξ(x),   with   <ξ²(x)> = 0.005 + 0.005 [1 + sin(πx)]².
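The synthetic data set can be reproduced in a few lines (our sketch; the density p(x) = |x| is sampled by inverse-transform sampling):

import numpy as np

def make_data(p_data, rng):
    # Inputs with density p(x) = |x| on [-1, 1] and heteroscedastic targets.
    u = rng.uniform(-1.0, 1.0, size=p_data)
    x = np.sign(u) * np.sqrt(np.abs(u))              # inverse-transform sampling
    var = 0.005 + 0.005 * (1.0 + np.sin(np.pi * x)) ** 2
    t = np.sin(np.pi * x) * np.cos(5 * np.pi * x / 4)
    t = t + rng.normal(size=p_data) * np.sqrt(var)
    return x, t

rng = np.random.default_rng(0)
x, t = make_data(50, rng)
print(x.min(), x.max(), t.shape)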
The regression is the solid line in Figure 1(a), the variance of the target distribution the solid line in Figure 1(b). Following this prescription we obtain a training set of P_data = 50 data points [the crosses in Figure 1(a)] on which we train an ensemble of n_run = 25 networks, each having 8 hidden units with tanh transfer function and one linear output unit. The average network output m(x) is the dashed line
in Figure 1(a) and (d). In the following we compare two methods to arrive at prediction intervals: the more advanced method described in Section 4, i.e., taking into account the uncertainty of the estimator and correcting for the tendency to
overfit on the training data, and a simpler procedure similar to [7] which disregards
both effects.
We compute the (squared) "validation residuals" (m_validation^μ - t^μ)² [crosses in Figure 1(b)], based on runs in which pattern μ was part of the validation set, and the "training residuals" (m_train^μ - t^μ)² (circles), based on runs in which pattern μ was part of the training set. The validation residuals are most of the time somewhat
larger than the training residuals.
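A sketch of this bookkeeping; the array layout (per-run predictions plus a boolean mask marking which patterns each bootstrap replicate trained on) is an assumed interface, not code from the paper.

```python
import numpy as np

def split_residuals(preds, in_train, t):
    # preds: (n_run, P) outputs of every network on all patterns; in_train: (n_run, P)
    # True where the pattern was in that run's bootstrap replicate; t: (P,) targets.
    # Assumes every pattern is left out of at least one run and trained on in another.
    preds = np.asarray(preds, dtype=float)
    in_train = np.asarray(in_train, dtype=bool)
    t = np.asarray(t, dtype=float)
    m_val = np.array([preds[~in_train[:, mu], mu].mean() for mu in range(len(t))])
    m_train = np.array([preds[in_train[:, mu], mu].mean() for mu in range(len(t))])
    return (m_val - t) ** 2, (m_train - t) ** 2
```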
For our more advanced method we subtract the uncertainty of our model from
the validation residuals as in (4). The other procedure simply keeps the training
residuals to estimate the variance of the target distribution. It is obvious that the
distribution of residuals in Figure 1(b) does not allow for a complex model. Here
we take a feedforward network with one hidden unit.
The parameters {v_0, v_1, v_2, v_3} are found through minimization of the error (5).
Both for the advanced method (dashed line) and for the simpler procedure (dash-dotted line) the variance of the target distribution is estimated to be a step function.
The former, being based on the validation residuals minus the uncertainty of the
estimator, is slightly more conservative than the latter, being based on the training
residuals. Both estimates are pretty far from the truth (solid line), especially for
0 < x < 0.5, yet considering such a limited amount of noisy residuals we can hardly
expect anything better.
Figure 1(c) considers the width of standard error bars, i.e., of prediction intervals
for error level α ≈ 0.32. For the simpler procedure the width of the prediction
interval [dash-dotted line in Figure 1(c)] follows directly from the estimate of the
variance of the target distribution. Our more advanced method adds the uncertainty
of the estimator to arrive at the dashed line. The correct width of the prediction
interval, i.e., the width that would include 68% of all targets for a particular input,
is given by the solid line. The prediction intervals obtained through the more
advanced procedure are displayed in Figure l(d) together with a set of 1000 test
points visualizing the probability distribution of inputs and corresponding targets.
The method proposed in Section 4 has several advantages. The prediction intervals
of the advanced method include 65% of the test points in Figure 1(d), pretty close
to the desired confidence level of 68%. The simpler procedure is too liberal with
an actual confidence level of only 58%. This difference is mainly due to the use of
validation residuals instead of training residuals. Incorporation of the uncertainty
of the estimator is important in regions of input space with just a few training data.
In this example the density of training data affects both extrapolation and interpolation. For |x| > 1 the prediction intervals obtained with the advanced method
become wider and wider whereas those obtained through the simpler procedure remain more or less constant. The bump in the prediction interval (dashed line) near
the origin is a result of the relatively large variance in the network predictions in
this region. It shows that our method also incorporates the effect that the density
of training data has on the accuracy of interpolation.
6 CONCLUSION AND DISCUSSION
We have presented a novel method to compute prediction intervals for applications
with a limited amount of data. The uncertainty of the estimator itself has been
taken into account by the computation of the confidence intervals. This explains
the qualitative improvement over existing methods in regimes with a low density of
training data. Usage of the residuals on validation instead of on training patterns
yields prediction intervals with a better coverage. The price we have to pay is in
the computation time: we have to train an ensemble of networks on about 20 to 50
different bootstrap replicates [3, 8]. There are other good reasons for resampling:
averaging over networks improves the generalization performance and early stopping
is a natural strategy to prevent overfitting. It would be interesting to see how our
''frequentist'' method compares with Bayesian alternatives (see e.g. [1, 6]).
Prediction intervals can also be used for the detection of outliers. With regard to
the training set it is straightforward to point out the targets that are not enclosed
by a prediction interval of error level, say, α = 0.05. A wide prediction interval for a
new test pattern indicates that this test pattern lies in a region of input space with
a low density of training data making any prediction completely unreliable.
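A sketch of that outlier test (names are illustrative).

```python
import numpy as np

def flag_outliers(t, m, s, c_prediction):
    # Targets falling outside the prediction interval m(x) +/- c_prediction * s(x);
    # with error level alpha = 0.05 these are the candidate outliers discussed here.
    return np.abs(np.asarray(t) - np.asarray(m)) > c_prediction * np.asarray(s)
```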
A weak point in our method is the assumption of unbiasedness in the computation of
the confidence intervals. This assumption makes the confidence intervals in general
too liberal. However, as discussed in [8], such bootstrap methods tend to perform
better than other alternatives based on the computation of the Hessian matrix,
partly because they incorporate the variability due to the random initialization.
Furthermore, when we model the prediction interval as a function of the input x we
will, to some extent, repair this deficiency. But still, incorporating even a somewhat
inaccurate confidence interval ensures that we can never severely overestimate our
accuracy in regions of input space where we have never been before.
References
[1] C. Bishop and C. Qazaz. Regression with input-dependent noise: a Bayesian
treatment. These proceedings, 1997.
[2] L. Breiman. Bagging predictors. Machine Learning, 24: 123-140, 1996.
[3] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall,
London, 1993.
[4] W. HardIe. Applied Nonparametric Regression. Cambridge University Press,
1991.
[5] T. Heskes. Balancing between bagging and bumping. These proceedings, 1997.
[6] D. MacKay. A practical Bayesian framework for backpropagation. Neural Computation, 4:448-472, 1992.
[7] D. Nix and A. Weigend. Estimating the mean and variance of the target probability distribution. In Proceedings of the IJCNN '94, pages 55-60. IEEE, 1994.
[8] R. Tibshirani. A comparison of some error estimates for neural network models.
Neural Computation, 8:152-163,1996.
336 | 1,307 | Noisy Spiking Neurons with Temporal
Coding have more Computational Power
than Sigmoidal Neurons
Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universitaet Graz, Klosterwiesgasse 32/2
A-8010 Graz, Austria, e-mail: maass@igi.tu-graz.ac.at
Abstract
We exhibit a novel way of simulating sigmoidal neural nets by networks of noisy spiking neurons in temporal coding. Furthermore
it is shown that networks of noisy spiking neurons with temporal
coding have a strictly larger computational power than sigmoidal
neural nets with the same number of units.
1
Introduction and Definitions
We consider a formal model SNN for a spiking neuron network that is basically
a reformulation of the spike response model (and of the leaky integrate and fire
model) without using δ-functions (see [Maass, 1996a] or [Maass, 1996b] for further
background).
An SNN consists of a finite set V of spiking neurons, a set E ⊆ V × V of synapses, a
weight w_{u,v} ≥ 0 and a response function ε_{u,v} : R⁺ → R for each synapse ⟨u, v⟩ ∈ E
(where R⁺ := {x ∈ R : x ≥ 0}), and a threshold function Θ_v : R⁺ → R⁺ for each
neuron v ∈ V.
If F_u ⊆ R⁺ is the set of firing times of a neuron u, then the potential at the trigger
zone of neuron v at time t is given by

    P_v(t) :=  Σ_{u : ⟨u,v⟩ ∈ E}  Σ_{s ∈ F_u : s < t}  w_{u,v} · ε_{u,v}(t - s) .
In a noise-free model a neuron v fires at time t as soon as P_v(t) reaches Θ_v(t - t'),
where t' is the time of the most recent firing of v. One says then that neuron v
sends out an "action potential" or "spike" at time t.
For some specified subset V_in ⊆ V of input neurons one assumes that the firing times
("spike trains") F_u for neurons u ∈ V_in are not defined by the preceding convention,
but are given from the outside. The firing times F_v for all other neurons v ∈ V are
determined by the previously described rule, and the output of the network is given
in the form of the spike trains F_v for a specified set of output neurons V_out ⊆ V.
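As an illustration of these definitions, a small sketch of the potential P_v(t); the dictionary-based interface, the concrete response-function shape and all names are assumptions rather than part of the model.

```python
from typing import Callable, Dict, List, Tuple

def potential(t: float,
              spikes: Dict[str, List[float]],
              synapses: Dict[Tuple[str, str], Tuple[float, Callable[[float], float]]],
              v: str) -> float:
    # P_v(t): sum over synapses (u, v) and firing times s in F_u with s < t
    # of w_uv * eps_uv(t - s).
    p = 0.0
    for (u, post), (w, eps) in synapses.items():
        if post != v:
            continue
        for s in spikes.get(u, []):
            if s < t:
                p += w * eps(t - s)
    return p

def toy_epsp(delay: float = 1.0, rise: float = 2.0, fall: float = 6.0) -> Callable[[float], float]:
    # A crude EPSP shape (zero before the delay, linear rise, then linear decay);
    # purely illustrative, not the specific response functions assumed later on.
    def eps(x: float) -> float:
        if x <= delay:
            return 0.0
        if x <= delay + rise:
            return (x - delay) / rise
        return max(0.0, 1.0 - (x - delay - rise) / fall)
    return eps

# Example: one presynaptic neuron u firing at times 0 and 3 ms, exciting v.
print(potential(5.0, {"u": [0.0, 3.0]}, {("u", "v"): (0.8, toy_epsp())}, "v"))
```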
[Figure 1: sketches of EPSP, IPSP and threshold-function shapes; see the caption below.]
Figure 1: Typical shapes of response functions ε_{u,v} (EPSP and IPSP) and threshold
functions Θ_v for biological neurons.
We will assume in our subsequent constructions that all response functions ε_{u,v}
and threshold functions Θ_v in an SNN are "stereotyped", i.e. that the response
functions differ apart from their "sign" (EPSP or IPSP) only in their delay d_{u,v}
(where d_{u,v} := inf {t ≥ 0 : ε_{u,v}(t) ≠ 0}), and that the threshold functions Θ_v
only differ by an additive constant (i.e. for all u and v there exists a constant c_{u,v}
such that Θ_u(t) = Θ_v(t) + c_{u,v} for all t ≥ 0). We refer to a term of the form
w_{u,v} · ε_{u,v}(t - s) as an excitatory respectively inhibitory postsynaptic potential
(abbreviated: EPSP respectively IPSP).
Since biological neurons do not always fire in a reliable manner one also considers
the related model of noisy spiking neurons, where P_v(t) is replaced by P_v^noisy(t) :=
P_v(t) + α_v(t) and Θ_v(t - t') is replaced by Θ_v^noisy(t - t') := Θ_v(t - t') + β_v(t - t').
α_v(t) and β_v(t - t') are allowed to be arbitrary functions with bounded absolute
value (hence they can also represent "systematic noise").
Furthermore one allows that the current value of the difference D(t) := P_v^noisy(t) - Θ_v^noisy(t - t') does not determine directly the firing time of neuron v, but only its
current firing probability. We assume that the firing probability approaches 1 if
D → ∞, and 0 if D → -∞. We refer to spiking neurons with these two types of
noise as "noisy spiking neurons".
We will explore in this article the power of analog computations with noisy spiking
neurons, and we refer to [Maass, 1996a] for results about digital computations in
this model. Details of the results in this article appear in [Maass, 1996b] and
[Maass, 1997].
2
Fast Simulation of Sigmoidal Neural Nets with Noisy
Spiking Neurons in Temporal Coding
So far one has only considered simulations of sigmoidal neural nets by spiking neurons where each analog variable in the sigmoidal neural net is represented by the
firing rate of a spiking neuron. However this "firing rate interpretation" is inconsistent with a number of empirical results about computations in biological neural
systems. For example [Thorpe & Imbert, 1989] have demonstrated that visual pattern analysis and pattern classification can be carried out by humans in just 150 ms,
in spite of the fact that it involves a minimum of 10 synaptic stages from the retina
to the temporal lobe. [de Ruyter van Steveninck & Bialek, 1988] have found that
a blowfly can produce flight torques within 30 ms of a visual stimulus by a neural
system with several synaptic stages. However the firing rates of neurons involved
in all these computations are usually below 100 Hz, and interspike intervals tend
to be quite irregular. Hence one cannot interpret these analog computations with
spiking neurons on the basis of an encoding of analog variables by firing rates.
On the other hand experimental evidence has accumulated during the last few years
which indicates that many biological neural systems use the timing of action potentials to encode information (see e.g. [Bialek & Rieke, 1992], [Bair & Koch, 1996]).
We will now describe a new way of simulating sigmoidal neural nets by networks of
spiking neurons that is based on temporal coding. The key mechanism for this alternative simulation is based on the well known fact that EPSP's and IPSP's are able
to shift the firing time of a spiking neuron. This mechanism can be demonstrated
very clearly in our formal model if one assumes that EPSP's rise (and IPSP's fall)
linearly during a certain initial time period. Hence we assume in the following that
there exists some constant Δ > 0 such that each response function ε_{u,v}(x) is of the
form α_{u,v} · (x - d_{u,v}) with α_{u,v} ∈ {-1, 1} for x ∈ [d_{u,v}, d_{u,v} + Δ], and ε_{u,v}(x) = 0
for x ∈ [0, d_{u,v}].
Consider a spiking neuron v that receives postsynaptic potentials from n presynaptic
neurons a_1, ..., a_n. For simplicity we assume that interspike intervals are so large
that the firing time t_v of neuron v depends just on a single firing time t_{a_i} of each
neuron a_i, and Θ_v has returned to its "resting value" Θ_v(0) before v fires again.
Then if the next firing of v occurs at a time when the postsynaptic potentials
described by w_{a_i,v} · ε_{a_i,v}(t - t_{a_i}) are all in their initial linear phase, its firing time
t_v is determined in the noise-free model for w_i := w_{a_i,v} · α_{a_i,v} by the equation
Σ_{i=1}^n w_i · (t_v - t_{a_i} - d_{a_i,v}) = Θ_v(0), or equivalently

    t_v = ( Θ_v(0) + Σ_{i=1}^n w_i (t_{a_i} + d_{a_i,v}) ) / Σ_{i=1}^n w_i .     (1)
This equation reveals the somewhat surprising fact that (for a certain range of
their parameters) spiking neurons can compute a weighted sum in terms of firing
times, i.e. temporal coding. One should also note that in the case where all delays
d_{a_i,v} have the same value, the "weights" w_i of this weighted sum are encoded in
the "strengths" w_{a_i,v} of the synapses and their "sign" α_{a_i,v}, as in the "firing rate
interpretation". Finally according to (1) the coefficients of the presynaptic firing
times t_{a_i} are automatically normalized, which appears to be of biological interest.
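A quick numerical illustration of equation (1); the numbers are arbitrary and merely assume that the resulting threshold crossing falls within the linear rise of every EPSP.

```python
import numpy as np

def firing_time(weights, t_in, delays, theta0):
    # Solve sum_i w_i * (t_v - t_i - d_i) = Theta_v(0) for t_v, as in equation (1).
    w = np.asarray(weights, dtype=float)
    return (theta0 + np.sum(w * (np.asarray(t_in) + np.asarray(delays)))) / np.sum(w)

# Three presynaptic neurons with illustrative weights, firing times and delays.
print(firing_time([0.5, 1.0, 1.5], [2.0, 2.3, 2.1], [1.0, 1.0, 1.0], theta0=1.2))
```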
In the simplest scheme for temporal coding (which is closely related to that in
[Hopfield, 1995]) an analog variable x ∈ [0, 1] is encoded by the firing time T - γ·x
of a neuron, where T is assumed to be independent of x (in a biological context T
might be time-locked to the onset of a stimulus, or to some oscillation) and γ is
some constant that is determined in the proof of Theorem 2.1 (e.g. γ = Δ/2 in the
noise-free case). In contrast to [Hopfield, 1995] we assume that both the inputs and
the outputs of computations are encoded in this fashion. This has the advantage
that one can compose computational modules.
We will first focus in Theorem 2.1 on the simulation of sigmoidal neural nets that
employ the piecewise linear "linear saturated" activation function π : R → [0, 1]
defined by π(y) = 0 if y < 0, π(y) = y if 0 ≤ y ≤ 1, and π(y) = 1 if y > 1.
The Theorem 3.1 in the next section will imply that one can simulate with spiking neurons also sigmoidal neural nets that employ arbitrary continuous activation
functions. Apart from the previously mentioned assumptions we will assume for
the proofs of Theorem 2.1 and 3.1 that any EPSP satisfies ε_{u,v}(x) = 0 for all sufficiently large x, and ε_{u,v}(x) ≥ ε_{u,v}(d_{u,v} + Δ) for all x ∈ [d_{u,v} + Δ, d_{u,v} + Δ + γ].
We assume that each IPSP is continuous, and has value 0 except for some interval
of R. Furthermore we assume for each EPSP and IPSP that |ε_{u,v}(x)| grows at
most linearly during the interval [d_{u,v} + Δ, d_{u,v} + Δ + γ]. In addition we assume
that Θ_v(x) = Θ_v(0) for sufficiently large x, and that Θ_v(x) is sufficiently large for
0 < x ≤ γ.
Theorem 2.1 For any given ε, δ > 0 one can simulate any given feedforward sigmoidal neural net N with activation function π by a network N_{N,ε,δ} of noisy spiking
neurons in temporal coding. More precisely, for any network input x_1, ..., x_m ∈
[0, 1] the output of N_{N,ε,δ} differs with probability ≥ 1 - δ by at most ε from that
of N. Furthermore the computation time of N_{N,ε,δ} depends neither on the number
of gates in N nor on the parameters ε, δ, but only on the number of layers of the
sigmoidal neural network N.
We refer to [Maass, 1997] for details of the somewhat complicated proof. One
employs the mechanism described by (1) to simulate through the firing time of a
spiking neuron v a sigmoidal gate with activation function π for those gate-inputs
where π operates in its linearly rising range. With the help of an auxiliary spiking
neuron that fires at time T one can avoid the automatic "normalization" of the
weights w_i that is provided by (1), and thereby compute a weighted sum with
arbitrary given weights. In order to simulate in temporal coding the behaviour of
the gate in the input range where π is "saturated" (i.e. constant), it suffices to
employ some auxiliary spiking neurons which make sure that v fires exactly once
during the relevant time window (and not shortly before that).
Since inputs and outputs of the resulting modules for each single gate of N are all
given in temporal coding, one can compose these modules to simulate the multilayer sigmoidal neural net N. With a bit of additional work one can ensure that
this construction also works with noisy spiking neurons.
?
3
Universal Approximation Property of Networks of Noisy
Spiking Neurons with Temporal Coding
It is known [Leshno et al., 1993] that feedforward sigmoidal neural nets whose gates
employ the activation function π can approximate with a single hidden layer for
any n, k ∈ N any given continuous function F : [0, 1]ⁿ → [0, 1]^k within any ε > 0
with regard to the L_∞-norm (i.e. uniform convergence). Hence we can derive the
following result from Theorem 2.1:
Theorem 3.1 Any given continuous function F : [0, 1]ⁿ → [0, 1]^k can be approximated within any given ε > 0 with arbitrarily high reliability in temporal coding by
a network of noisy spiking neurons (SNN) with a single hidden layer (and hence
within 15 ms for biologically realistic values of their time-constants).
?
Because of its generality this Theorem implies the same result also for more general
schemes of coding analog variables by the firing times of neurons, besides the particular one that we have considered so far. In fact it implies that the same result
holds for any other coding scheme C that is "continuously related" to the previously considered one in the sense that the transformation between firing times that
encode an analog variable x in the here considered coding scheme and in the coding
scheme C can be described by uniformly continuous functions in both directions.
4
Spiking Neurons have more Computational Power than
Sigmoidal Neurons
We consider the "element distinctness function" EDn
by
I,
EDn(Sl, ... ,Sn) =
{ 0,
arbitrary,
if s i = si for some i
if
lSi -ail
~
=I=-
j
1 for all i,j with i =l=-j
else.
If one encodes the value of input variable s_i by a firing of input neuron a_i at time
T_in - c·s_i, then for sufficiently large values of the constant c > 0 a single noisy
spiking neuron v can compute ED_n with arbitrarily high reliability. This holds for
any reasonable type of response functions, e.g. the ones shown in Fig. 1. The binary
output of this computation is assumed to be encoded by the firing/non-firing of v.
Hair-trigger situations are avoided since no assumptions have to be made about the
firing or non-firing of v if EPSP's arrive with a temporal distance between 0 and c.
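The coincidence-detection idea behind this construction can be illustrated schematically; the triangular EPSP, the constants and the threshold value below are assumptions, and the toy is not the actual noisy-spiking-neuron argument.

```python
import numpy as np

def ed_n_toy(s, c=10.0, width=2.0):
    # Inputs s_i are encoded as spike times T_in - c*s_i; a unit summing identical
    # triangular EPSPs of total width `width` exceeds a threshold placed between
    # one and two EPSP peaks only if two spikes (nearly) coincide.
    s = np.asarray(s, dtype=float)
    t_in = c * s.max() + width
    spike_times = t_in - c * s

    def eps(x):
        return np.maximum(0.0, 1.0 - np.abs(x - width / 2.0) / (width / 2.0))

    grid = np.linspace(spike_times.min() - width, spike_times.max() + width, 2000)
    pot = sum(eps(grid - ts) for ts in spike_times)
    return int(pot.max() > 1.5)

print(ed_n_toy([0.3, 0.7, 0.3]))   # two inputs equal            -> 1
print(ed_n_toy([0.0, 1.0, 2.5]))   # pairwise distances >= 1     -> 0
```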
On the other hand the following result shows that a fairly large sigmoidal neural
net is needed to compute the same function. Its proof provides the first application
for Sontag's recent results about a new type of "dimension" d of a neural network
N , where d is chosen maximal so that every subset of d inputs is shattered by N .
Furthermore it expands a method due to [Koiran, 1995] for llsing the VC-dimension
to prove lower bounds on network size.
Theorem 4.1 Any sigmoidal neural net N that computes ED_n has at least (n - 4)/2 - 1
hidden units.
Proof: Let N be an arbitrary sigmoidal neural net with k gates that computes
ED_n. Consider any set S ⊆ R⁺ of size n - 1. Let λ > 0 be sufficiently large so that
the numbers in λ·S have pairwise distance ≥ 2. Let A be a set of n - 1 numbers
> max(λ·S) + 2 with pairwise distance ≥ 2.
By assumption N can decide for n arbitrary inputs from λ·S ∪ A whether they
are all different. Let N_λ be a variation of N where all weights on edges from the
first input variable are multiplied with λ. Then N_λ can compute any function from
S into {0, 1} after one has assigned a suitable fixed set of n - 1 pairwise different
numbers from λ·S ∪ A to the last n - 1 input variables.
Thus if one considers as programmable parameters of N the factor λ in the weights
on edges from the first input variable and the ≤ k thresholds of gates that are
connected to some of the other n - 1 input variables, then N shatters S with these
k + 1 programmable parameters.
Since S ⊆ R⁺ of size n - 1 was chosen arbitrarily, we can now apply the result from
[Sontag, 1996], which yields an upper bound of 2w + 1 for the maximal number d
such that every set of d different inputs can be shattered by a sigmoidal neural net
with w programmable parameters (note that this parameter d is in general much
smaller than the VC-dimension of the neural net). For w := k + 1 this implies in
our case that n - 1 ≤ 2(k + 1) + 1, hence k ≥ (n - 4)/2. Thus N has at least
(n - 4)/2 computation nodes, and therefore at least (n - 4)/2 - 1 hidden units. One
should point out that due to the generality of Sontag's result this lower bound is
valid for all common activation functions of sigmoidal gates, and even if N employs
heaviside gates besides sigmoidal gates.
?
Theorem 4.1 yields a lower bound of 4997 for the number of hidden units in any
sigmoidal neural net that computes EDn for n = 10 000 , where 10 000 is a common
estimate for the number of inputs (i.e. synapses) of a biological neuron.
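The quoted figure follows directly from Theorem 4.1:

```python
def hidden_unit_lower_bound(n):
    # Theorem 4.1: at least (n - 4)/2 - 1 hidden units are needed to compute ED_n.
    return (n - 4) // 2 - 1

print(hidden_unit_lower_bound(10_000))   # -> 4997
```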
Finally we would like to point out that to the best of our knowledge Theorem 4.1
provides the largest known lower bound for any concrete function with n inputs on
a sigmoidal neural net. The largest previously known lower bound for sigmoidal
neural nets was Ω(n^{1/4}), due to [Koiran, 1995].
5
Conclusions
Theorems 2.1 and 3.1 provide a model for analog computations in networks of spiking
neurons that is consistent with experimental results on the maximal computation
speed of biological neural systems. As explained after Theorem 3.1, this result holds
for a large variety of possible schemes for encoding analog variables by firing times.
These theoretical results hold rigorously only for a rather small time window of
length γ for temporal coding. However a closer inspection of the construction
shows that the actual shape of EPSP's and IPSP's in biological neurons provides
an automatic adjustment of extreme values of the inputs t_{a_i} towards their average,
which allows them to carry out rather similar computations for a substantially larger
window size. It also appears to be of interest from the biological point of view that
the synaptic weights play for temporal coding in our construction basically the same
role as for rate coding, and hence the same network is in principle able to compute
closely related analog functions in both coding schemes.
We have focused in our constructions on feedforward nets, but our method can for
example also be used to simulate a Hopfield net with graded response by a network
of noisy spiking neurons in temporal coding. A stable state of the Hopfield net
corresponds then to a firing pattern of the simulating SNN where all neurons fire
at the same frequency, with the "pattern" of the stable state encoded in their phase
differences.
The theoretical results in this article may also provide additional goals and directions for a new computer technology based on artificial spiking neurons.
Acknowledgement
I would like to thank David Haussler, Pascal Koiran, and Eduardo Sontag for helpful
communications.
References
[Bair & Koch, 1996] W. Bair, C. Koch, "Temporal precision of spike trains in
extrastriate cortex of the behaving macaque monkey", Neural Computation,
vol. 8, pp 1185-1202, 1996.
[Bialek & Rieke, 1992] W. Bialek, and F. Rieke, "Reliability and information transmission in spiking neurons", Trends in Neuroscience, vol. 15, pp 428-434,1992.
[Hopfield, 1995] J. J. Hopfield, "Pattern recognition computation using action potential timing for stimulus representations", Nature, vol. 376, pp 33-36, 1995.
[Koiran, 1995] P. Koiran, "VC-dimension in circuit complexity", Proc. of the 11th
IEEE Conference on Computational Complexity, pp 81-85, 1996.
[Leshno et al., 1993] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, "Multilayer
feed forward networks with a nonpolynomial activation function can approximate any function", Neural Networks, vol. 6, pp 861-867, 1993.
[Maass, 1996a] W. Maass, "On the computational power of noisy spiking neurons",
Advances in Neural Information Processing Systems, vol. 8, pp 211-217, MIT
Press, Cambridge, 1996.
[Maass, 1996b] W. Maass, "Networks of spiking neurons: the third generation of
neural network models", FTP-host: archive.cis.ohio-state.edu, FTP-filename:
/pub/neuroprose/maass.third-generation.ps.Z, Neural Networks, to appear.
[Maass, 1997] W. Maass, "Fast sigmoidal networks via spiking neurons", to appear
in Neural Computation. FTP-host: archive.cis.ohio-state.edu FTP-filename:
/pub/neuroprose/maass.sigmoidal-spiking.ps.Z, Neural Computation, to appear in vol. 9, 1997.
[de Ruyter van Steveninck & Bialek, 1988] R. de Ruyter van Steveninck, and
W. Bialek, "Real-time performance of a movement sensitive neuron in the
blowfly visual system", Proc. Roy. Soc. B, vol. 234, pp 379-414, 1988.
[Sontag, 1996] E. D. Sontag, "Shattering all sets of k points in 'general position' requires (k - 1)/2 parameters", http://www.math.rutgers.edu/~sontag/ , follow
links to FTP archive.
[Thorpe & Imbert, 1989] S. T. Thorpe, and M. Imbert, "Biological constraints
on connectionist modelling", In: Connectionism in Perspective, R. Pfeifer,
Z. Schreter, F. Fogelman-Soulie, and L. Steels, eds., Elsevier, North-Holland,
1989.
337 | 1,308 | Softening Discrete Relaxation
Andrew M. Finch, Richard C. Wilson and Edwin R. Hancock
Department of Computer Science,
University of York, York, Y01 5DD, UK
Abstract
This paper describes a new framework for relational graph matching. The starting point is a recently reported Bayesian consistency
measure which gauges structural differences using Hamming distance. The main contributions of the work are threefold. Firstly,
we demonstrate how the discrete components of the cost function can be softened. The second contribution is to show how
the softened cost function can be used to locate matches using
continuous non-linear optimisation. Finally, we show how the resulting graph matching algorithm relates to the standard quadratic
assignment problem.
1
Introduction
Graph matching [6, 5, 7, 2, 3, 12, 11] is a topic of central importance in pattern
perception. The main computational issues are how to compare inexact relational
descriptions [7] and how to search efficiently for the best match [8]. These two issues
have recently stimulated interest in the connectionist literature [9, 6, 5, 10]. For
instance, Simic [9], Suganathan et al. [10] and Gold et al. [6, 5] have addressed the
issue of how to expressively measure relational distance. Both Gold and Rangarajan
[6] and Suganathan et al. [10] have shown how non-linear optimisation techniques
such as mean-field annealing [10] and graduated assignment [6] can be applied to
find optimal matches.
In a recent series of papers we have developed a Bayesian framework for relational
graph matching [2, 3, 11, 12]. The novelty resides in the fact that relational consistency is gauged by a probability distribution that uses Hamming distance to
measure structural differences between the graphs under match. This new framework has not only been used to match complex infra-red [3] and radar imagery
[11], it has also been used to successfully control a graph-edit process [12] of the
sort originally proposed by Sanfeliu and Fu [7]. The optimisation of this relational
consistency measure has hitherto been confined to the use of discrete update procedures [11, 2, 3]. Examples include discrete relaxation [7, 11], simulated annealing
[4, 3] and genetic search [2]. Our aim in this paper is to consider how the optimisation of the relational consistency measure can be realised by continuous means
[6, 10]. Specifically we consider how the matching process can be effected using a
non-linear technique similar to mean-field annealing [10] or graduated assignment
[6]. In order to achieve this goal we must transform our discrete cost function [11]
into a form suitable for optimisation by continuous techniques. The key idea is
to exploit the apparatus of statistical physics [13] to compute the effective Gibbs
potentials for our discrete relaxation process. The potentials are in-fact weighted
sums of Hamming distance enumerated over the consistent relations of the model
graph. The quantities of interest in the optimisation process are the derivatives
of the global energy function computed from the Gibbs potentials. In the case of
our weighted sum of Hamming distance, these derivatives take on a particularly
interesting form which provides an intuitive insight into the dynamics of the update
process. An experimental evaluation of the technique reveals not only that it is
successful in matching noise corrupted graphs, but that it significantly outperforms
the optimisation of the standard quadratic energy function.
2
Relational Consistency
Our overall goal in this paper is to formulate a non-linear optimisation technique
for matching relational graphs. We use the notation G = (V, E) to denote the
graphs under match, where V is the set of nodes and E is the set of edges. Our
aim in matching is to associate nodes in a graph G_D = (V_D, E_D) representing data
to be matched against those in a graph G_M = (V_M, E_M) representing an available
relational model. Formally, the matching is represented by a function f : V_D → V_M
from the nodes in the data graph G_D to those in the model graph G_M. We represent
the structure of the two graphs using a pair of connection matrices. The connection
matrix for the data graph consists of the binary array
    D_ab = { 1   if (a, b) ∈ E_D
             0   otherwise                                   (1)
while that for the model graph is
    M_αβ = { 1   if (α, β) ∈ E_M
             0   otherwise                                   (2)
The current state of match between the two graphs is represented by the function
f : V_D → V_M. In other words the statement f(a) = α means that the node a ∈ V_D
is matched to the node α ∈ V_M. The binary representation of the current state
of match is captured by a set of assignment variables which convey the following
meaning

    S_aα = { 1   if f(a) = α
             0   otherwise                                   (3)
The basic goal of the matching process is to optimise a consistency-measure which
gauges the structural similarity of the matched data graph and the model graph.
In a recent series of papers, Wilson and Hancock [11, 12] have shown how consistency of match can be modelled using a Bayesian framework. The basic idea is to
construct a probability distribution which models the effect of memoryless matching errors in generating departures from consistency between the data and model
graphs. Suppose that S_α = α ∪ {β | (α, β) ∈ E_M} represents the set of nodes that
form the immediate contextual neighbourhood of the node α in the model graph.
Further suppose that Γ_a = f(a) ∪ {f(b) | (a, b) ∈ E_D} represents the set of matches
assigned to the contextual neighbourhood of the node a ∈ V_D of the data graph.
Basic to Wilson and Hancock's modelling of relational consistency is to regard the
complete set of model-graph relations as mutually exclusive causes from which the
potentially corrupt matched model-graph relations arise. As a result, the probability of the matched configuration Γ_a can be expressed as a mixture distribution over
the corresponding space of model-graph configurations

    P(Γ_a) = Σ_{α ∈ V_M} P(Γ_a | S_α) P(S_α)                      (4)
The modelling of the match confusion probabilities P(Γ_a | S_α) draws on the assumption that the error process is independent of location. This allows P(Γ_a | S_α) to be
factorised over its component matches. Individual label errors are further assumed
to act with a memoryless probability P_e. With these ingredients the probability of
the matched neighbourhood Γ_a reduces to [11, 12]

    P(Γ_a) = (K_a / |V_M|) Σ_{α ∈ V_M} exp[-μ H(a, α)]             (5)

where K_a = (1 - P_e)^{|Γ_a|} and the exponential constant is related to the probability
of label errors, i.e. μ = ln((1 - P_e)/P_e). Consistency of match is gauged by the "Hamming
distance" H(a, α) between the matched relation Γ_a and the set of consistent neighbourhood structures S_α, ∀α ∈ V_M, from the model graph. According to our binary
representation of the matching process, the distance measure is computed using the
connectivity matrices and the assignment variables in the following manner

    H(a, α) = Σ_{b ∈ V_D} Σ_{β ∈ V_M} M_αβ D_ab (1 - S_bβ)          (6)
The probability distribution P(Γ_a) may be regarded as providing a natural way of
modelling departures from consistency at the neighbourhood level. Matching consistency is graded by Hamming distance and controlled hardening may be induced
by reducing the label-error probability P_e towards zero.
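In code, equation (6) is a direct double sum over the two connection matrices and the assignment matrix; the sketch below uses assumed array names.

```python
import numpy as np

def hamming(a, alpha, D, M, S):
    # Equation (6): H(a, alpha) = sum_b sum_beta M[alpha, beta] * D[a, b] * (1 - S[b, beta]),
    # with D and M the data- and model-graph connection matrices and S the
    # (possibly softened) assignment matrix.
    return float(np.sum(M[alpha][None, :] * D[a][:, None] * (1.0 - S)))
```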
3
The Effective Potential for Discrete Relaxation
We commence the development of our graduated assignment approach to discrete
relaxation by computing an effective Gibbs potential U(Γ_a) for the matching configuration Γ_a. In other words, we aim to replace the compound exponential probability
distribution appearing in equation (5) by the single Gibbs distribution

    P(Γ_a) ∝ exp[-μ U(Γ_a)]                                       (7)
Our route to the effective potential is provided by statistical physics. If we represent
P(Γ_a) by an equivalent Gibbs distribution with an identical partition function, then
the equilibrium configurational potential is related to the partial derivative of the
log-probability with respect to the coupling constant μ in the following manner [13]

    U(Γ_a) = - ∂ ln P(Γ_a) / ∂μ                                   (8)

Upon substituting for P(Γ_a) from equation (5)

    U(Γ_a) =  Σ_{α ∈ V_M} H(a, α) exp[-μ H(a, α)]  /  Σ_{α ∈ V_M} exp[-μ H(a, α)]      (9)
In other words the neighbourhood Gibbs potentials are simply weighted sums of
Hamming distance between the data and model graphs. In fact the local clique
potentials display an interesting barrier property. The potential is concentrated at
Hamming distance H ≈ 1/μ. Both very large and very small Hamming distances
contribute insignificantly to the energy function, i.e. lim_{H→0} H exp[-μH] = 0 and
lim_{H→∞} H exp[-μH] = 0.
With the neighbourhood matching potentials to hand, we construct a global
"matching-energy"

    E = Σ_{a ∈ V_D} U(Γ_a)                                        (10)

by summing the contributions over the nodes of the data graph.
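A sketch of equations (9) and (10), vectorised over the model-graph nodes (array names are assumptions).

```python
import numpy as np

def effective_potential(a, D, M, S, mu):
    # Equation (9): softmax-weighted average over alpha of the Hamming distances
    # H(a, alpha) = sum_{b, beta} M[alpha, beta] * D[a, b] * (1 - S[b, beta]).
    H = M @ (D[a] @ (1.0 - S))
    w = np.exp(-mu * H)
    return float(np.sum(H * w) / np.sum(w))

def global_energy(D, M, S, mu):
    # Equation (10): sum of the neighbourhood potentials over the data-graph nodes.
    return sum(effective_potential(a, D, M, S, mu) for a in range(D.shape[0]))
```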
4
Optimising the Global Cost Function
We are now in a position to develop a continuous update algorithm by softening the
discrete ingredients of our graph matching potential. The idea is to compute the
derivatives of the global energy given in equation (10) and to effect the softening
process using the soft-max idea of Bridle [1].
4.1
Softassign
The energy function represented by equations (9) and (10) is defined over the discrete matching variables Saa. The basic idea underpinning this paper is to realise a
continuous process for updating the assignment variables. The optimal step-size is
determined by computing the partial derivatives of the global matching energy with
respect to the assignment variables. We commence by computing the derivatives of
the contributing neighbourhood Gibbs potentials, i.e.

    ∂U(Γ_a)/∂S_bβ = - D_ab Σ_{α ∈ V_M} M_αβ ξ_aα [1 - μ(H(a, α) - U(Γ_a))]

where

    ξ_aα = exp[-μ H(a, α)]  /  Σ_{α' ∈ V_M} exp[-μ H(a, α')]        (11)
To further develop this result, we must compute the derivatives of the Hamming
distances. From equation (6) it follows that
    ∂H(a, α)/∂S_bβ = - M_αβ D_ab                                  (12)
It is now a straightforward matter to show that the derivative of the global matching
energy is equal to

    ∂E/∂S_bβ = - Σ_{a ∈ V_D} D_ab Σ_{α ∈ V_M} M_αβ ξ_aα [1 - μ(H(a, α) - U(Γ_a))]      (13)
We would like our continuous matching variables to remain constrained to lie within
the range [0, 1]. Rather than using a linear update rule, we exploit Bridle's soft-max
ansatz [1]. In doing this we arrive at an update process which has many features in
common with the well-known mean-field equations of statistical physics

    S_aα  ←  exp[-(1/T) ∂E/∂S_aα]  /  Σ_{α' ∈ V_M} exp[-(1/T) ∂E/∂S_aα']      (14)
The mathematical structure of this update process is important and deserves further
comment. The quantity ξ_aα defined in equation (11) naturally plays the role of a
matching probability. The first term appearing under the square bracket in equation
(13) can therefore be thought of as analogous to the optimal update direction for
the standard quadratic cost function [10, 6]; we will discuss this relationship in more
detail in Section 4.2. The second term modifies this principal update direction
by taking into account the weighted fluctuations in the Hamming distance about
the effective potential or average Hamming distance. If the average fluctuation
is zero, then there is no net modification to the update direction. When the net
fluctuation is non-zero, the direction of update is modified so as to compensate for
the movement of the mean-value of the effective potential. This corrective tracking
process provides an explicit mechanism for maintaining contact with the minimum
of the effective potential under rescaling effects induced by changes in the value of
the coupling constant p. Moreover, since the fluctuation term is itself proportional
to p, this has an insignificant effect for Pe ~ ~ but dominates the update process
when Pe -+ 0.
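A sketch of one sweep of this update. For simplicity the derivative of the energy of equations (9) and (10) is taken numerically rather than in closed form, which is slow but keeps the example self-contained; the temperature handling and all names are assumptions.

```python
import numpy as np

def energy(D, M, S, mu):
    # Global matching energy, equations (9) and (10).
    H = np.einsum('ab,xy,by->ax', D, M, 1.0 - S)   # H[a, alpha]
    w = np.exp(-mu * H)
    return float(np.sum(np.sum(H * w, axis=1) / np.sum(w, axis=1)))

def softmax_update(S, D, M, mu, T):
    # One sweep of the soft-max update of equation (14), with a numerical gradient.
    grad = np.zeros_like(S)
    eps = 1e-4
    e0 = energy(D, M, S, mu)
    for b in range(S.shape[0]):
        for beta in range(S.shape[1]):
            Sp = S.copy()
            Sp[b, beta] += eps
            grad[b, beta] = (energy(D, M, Sp, mu) - e0) / eps
    logits = -grad / T
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    expo = np.exp(logits)
    return expo / expo.sum(axis=1, keepdims=True)
```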
4.2
Quadratic Assignment Problem
Before we proceed to experiment with the new graph matching process, it is interesting to briefly review the standard quadratic formulation of the matching problem
investigated by Simic [9], Suganathan et al. [10] and Gold and Rangarajan [6]. The
common feature of these algorithms is to commence from the quadratic cost function
(15)
In this case the derivative of the global cost function is linear in the assignment
variables, i.e.
(16)
This step size is equivalent to that appearing in equation (14) provided that μ = 0,
i.e. P_e → 1/2. The update is realised by applying the soft-max ansatz of equation
(14). In the next section, we will provide some experimental comparison with the
resulting matching process. However, it is important to stress that the update process adopted here is very simplistic and leaves considerable scope for further refinement. For instance, Gold and Rangarajan [6] have exploited the doubly stochastic
properties of Sinkhorn matrices to ensure two-way symmetry in the matching process.
5 Experiments and Conclusions
Our main aim in this Section is to compare the non-linear update equations with
the optimisation of the quadratic matching criterion described in Section 4.2. The
data for our study is provided by synthetic Delaunay graphs. These graphs are
constructed by generating random dot patterns. Each random dot is used to seed
a Voronoi cell. The Delaunay triangulation is the region adjacency graph for the
Voronoi cells. In order to pose demanding tests of our matching technique, we have
added controlled amounts of corruption to the synthetic graphs. This is effected by
deleting and adding a specified fraction of the dots from the initial random patterns.
The associated Delaunay graph is therefore subject to structural corruption. We
measure the degree of corruption by the fraction of surviving nodes in the corrupted
Delaunay graph.
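A sketch of this graph-generation step; the use of scipy's Delaunay triangulation and the exact delete-then-add scheme are assumptions about details the text leaves open.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_adjacency(points):
    # Adjacency matrix of the Delaunay triangulation (the region adjacency graph
    # of the Voronoi cells seeded by the random dots).
    tri = Delaunay(points)
    A = np.zeros((len(points), len(points)), dtype=int)
    for simplex in tri.simplices:
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            A[a, b] = A[b, a] = 1
    return A

def corrupted_pair(n=50, corruption=0.2, seed=0):
    # Model graph from n random dots; data graph from the same dots with a fraction
    # deleted and an equal number of fresh dots added, then re-triangulated.
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n, 2))
    k = int(round(corruption * n))
    keep = rng.choice(n, size=n - k, replace=False)
    data_pts = np.vstack([pts[keep], rng.uniform(size=(k, 2))])
    return delaunay_adjacency(pts), delaunay_adjacency(data_pts), keep
```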
Our experimental protocol has been as follows . For a series of different corruption
levels, we have generated a sample of 100 random graphs. The graphs contain 50
nodes each. According to the specified corruption level , we have both added and
deleted a predefined fraction of nodes at random locations in the initial graphs so
as to maintain their overall size. For each graph we measure the quality of match
by computing the fraction of the surviving nodes for which the assignment variables
indicate the correct match. The value of the temperature T in the update process
has been controlled using a logarithmic annealing schedule of the form suggested
by Geman and Geman [4]. We initialise the assignment variables uniformly across
the set of matches by setting S_aα = 1/|V_M|, ∀a, α.
We have compared the results obtained with two different versions of the matching
algorithm. The first of these involves updating the softened assignment variables
by applying the non-linear update equation given in (14). The second matching
algorithm involves applying the same optimisation apparatus to the quadratic cost
function defined in equation (15) in a simplified form of the quadratic assignment
algorithm [6, 10].
Figure 1 shows the final fraction of correct matches for both algorithms. The data
curves show the correct matching fraction averaged over the graph samples as a
function of the corruption fraction. The main conclusions that can be drawn from
these plots is that the new matching technique described in this paper significantly
outperforms its conventional quadratic counterpart described in Section 4.2. The
main difference between the two techniques resides in the fact that our new method
relies on updating with derivatives of the energy function that are non-linear in the
assignment variables.
To conclude, our main contribution in this paper has been to demonstrate how
the discrete Bayesian relational consistency measure of Wilson and Hancock [11]
can be cast in a form that is amenable to continuous non-linear optimisation. We
have shown how the method relates to the standard quadratic assignment algorithm
extensively studied in the connectionist literature [6, 9, 10]. Moreover, an experimental analysis reveals that the method offers superior performance in terms of
noise control.
References
[1] Bridle J.S. "Training stochastic model recognition algorithms can lead to maximum
mutual information estimation of parameters" NIPS2, pp. 211-217, 1990.
444
A. M. Finch, R. C. Wilson and E. R. Hancock
~'
..-..
"",
.....
Quadratic Assignment - -
???~?.......~oftened Discrete Relaxation
c:
.....
0.8
til
i0
()
~
".
'13
u:
.
",
0
?..~.-- ..............."
'.'
0.6
?.. '.......,.~-------i;
0.4
\
iii
c:
u::
\
0.2
\
\
'.\
....
0
0
0.2
0.4
0.6
0.8
Fraction of Graph Corrupt
Figure 1: Experimental comparison: softened discrete relaxation (dotted curve);
matching using the quadratic cost function (solid curve).
[2] Cross A.D.J., RC.Wilson and E.R Hancock, "Genetic search for structural matching", Proceedings ECCV96, LNCS 1064, pp. 514-525, 1996.
[3] Cross A.D.J . and E .RHancock, "Relational matching with stochastic optimisation"
IEEE Computer Society International Symposium on Computer Vision, pp . 365-370,
1995.
[4] Geman S. and D. Geman, "Stochastic relaxation, Gibbs distributions and Bayesian
restoration of images," IEEE PAMI, PAMI-6 , pp.721- 741 , 1984.
[5] Gold S., A. Rangarajan and E. Mjolsness, "Learning with pre-knowledge: Clustering
with point and graph-matching distance measures", Neural Computation, 8, pp. 787-804, 1996.
[6] Gold S. and A. Rangarajan, "A graduated assignment algorithm for graph matching",
IEEE PAMI, 18, pp. 377-388, 1996.
[7] Sanfeliu A. and Fu K.S ., "A distance measure between attributed relational graphs
for pattern recognition", IEEE SMC, 13, pp 353-362, 1983.
[8] Shapiro L. and RM.Haralick, "Structural description and inexact matching", IEEE
PAM!, 3, pp 504-519, 1981.
[9] Simic P., "Constrained nets for graph matching and other quadratic assignment problems", Neural Computation, 3 , pp . 268- 281, 1991.
[10] Suganathan P.N., E .K. Teoh and D.P. Mital, "Pattern recognition by graph matching
using Potts MFT networks", Pattern Recognition, 28, pp. 997-1009, 1995.
[11] Wilson RC., Evans A.N. and Hancock E.R, "Relational matching by discrete relaxation", Image and Vision Computing, 13, pp. 411-421, 1995.
[12] Wilson RC and Hancock E.R, "Relational matching with dynamic graph structures" , Proceedings of the Fifth International Conference on Computer Vision, pp.
450-456, 1995.
[13] Yuille A., "Generalised deformable models, statistical physics and matching problems", Neural Computation, 2, pp. 1-24, 1990.
338 | 1,309 | Interpreting images by propagating
Bayesian beliefs
Yair Weiss
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology
E10-120, Cambridge, MA 02139, USA
yweiss@psyche.mit.edu
Abstract
A central theme of computational vision research has been the realization that reliable estimation of local scene properties requires
propagating measurements across the image. Many authors have
therefore suggested solving vision problems using architectures of
locally connected units updating their activity in parallel. Unfortunately, the convergence of traditional relaxation methods on such
architectures has proven to be excruciatingly slow and in general
they do not guarantee that the stable point will be a global minimum.
In this paper we show that an architecture in which Bayesian Beliefs about image properties are propagated between neighboring
units yields convergence times which are several orders of magnitude faster than traditional methods and avoids local minima. In
particular our architecture is non-iterative in the sense of Marr [5]:
at every time step, the local estimates at a given location are optimal given the information which has already been propagated to
that location. We illustrate the algorithm's performance on real
images and compare it to several existing methods.
1 Theory
The essence of our approach is shown in figure 1. Figure 1a shows the prototypical
ill-posed problem: interpolation of a function from sparse data. Figure 1b shows a
traditional relaxation approach to the problem: a dense array of units represents
the value of the interpolated function at discretely sampled points. The activity of a
unit is updated based on the local data (in those points where data is available) and
the activity of the neighboring points. As discussed below, the local update rule can
Figure 1: a. a prototypical ill-posed problem. b. Traditional relaxation approach: dense
array of units represent the value of the interpolated function. Units update their activity
based on local information and the activity of neighboring units. c. The Bayesian Belief
Propagation (BBP) approach. Units transmit probabilities and combine them according
to probability calculus in two non-interacting streams.
be defined such that the network converges to a state in which the activity of each
unit corresponds to the value of the globally optimal interpolating function. Figure
lc shows the Bayesian Belief Propagation (BBP) approach to the problem. As in
the traditional approach the function is represented by the activity of a dense array
of units. However the units transmit probabilities rather than single estimates to
their neighbors and combine the probabilities according to the probability calculus.
To formalize the above discussion, let y_k represent the activity of a unit at location k, and let y*_k be noisy samples from the true function. A typical interpolation problem would be to minimize:
J(Y) = Σ_k w_k (y_k − y*_k)² + λ Σ_i (y_i − y_{i+1})²        (1)
where we have defined w_k = 0 for grid points with no data, and w_k = 1 for points
with data. Since J is quadratic, any local update in the direction of the gradient
will converge to the optimal estimate. This yields updates of the sort:
y_k ← y_k + η_k [ λ( (y_{k−1} + y_{k+1})/2 − y_k ) + w_k ( y*_k − y_k ) ]        (2)
Relaxation algorithms differ in their choice of η: η = 1/(λ + w_k) corresponds to Gauss-Seidel relaxation and η = 1.9/(λ + w_k) corresponds to successive over-relaxation (SOR), which is the method of choice for such problems [10].
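The update in equation 2 is easy to prototype. The following sketch (an illustration only; the function name, parameter values and the reflecting boundary handling are assumptions, not taken from the paper) applies the Gauss-Seidel / SOR style sweep to a toy 1-D interpolation problem:

```python
import numpy as np

def relax_interpolate(y_star, w, lam=1.0, omega=1.0, n_iters=500):
    """In-place relaxation sweep for the 1-D interpolation cost J(Y):
    omega = 1.0 gives a Gauss-Seidel-style step, omega = 1.9 an
    SOR-style over-relaxed step."""
    y = y_star.astype(float).copy()
    N = len(y)
    for _ in range(n_iters):
        for k in range(N):
            left = y[k - 1] if k > 0 else y[k]    # reflecting boundary (assumption)
            right = y[k + 1] if k < N - 1 else y[k]
            grad = lam * ((left + right) / 2.0 - y[k]) + w[k] * (y_star[k] - y[k])
            y[k] += (omega / (lam + w[k])) * grad
    return y

# toy problem: two data points on a 20-sample grid, the rest is interpolated
y_star, w = np.zeros(20), np.zeros(20)
w[2], w[17] = 1.0, 1.0
y_star[2], y_star[17] = 0.0, 1.0
print(np.round(relax_interpolate(y_star, w, omega=1.9), 2))
```

With ω = 1 each sweep solves the local equation exactly, while ω near 1.9 over-relaxes and typically needs fewer sweeps, as noted above; either way the estimate only improves as slowly as information can diffuse across the grid.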
To derive a BBP update rule for this problem, note that minimizing J(Y) is equivalent to maximizing the posterior probability of Y given Y* assuming the following generative model:
y_{i+1} = y_i + ν        (3)
y*_i = w_i y_i + η        (4)
where ν ∼ N(0, σ_R), η ∼ N(0, σ_D). The ratio of σ_D to σ_R plays a role similar to that of λ in the original cost functional.
The advantage of considering the cost functional as a posterior is that it enables us to use the methods of Hidden Markov Models, Bayesian Belief Nets and Optimal Estimation to derive local update rules (cf. [6, 7, 1]). Denote the posterior by
P_i(u) ≡ P(y_i = u | Y*). The Markovian property allows us to factor P_i(u) into three terms: one depending on the local data, another depending on data to the left of i, and a third depending on data to the right of i. Thus:
P_i(u) = c α_i(u) L_i(u) β_i(u)        (5)
where α_i(u) = P(y_i = u | y*_{1..i−1}), β_i(u) = P(y_i = u | y*_{i+1..N}), L_i(u) = P(y*_i | y_i = u), and c denotes a normalizing constant. Now, denoting the conditional C_i(u, v) =
P(y_i = u | y_{i−1} = v), α_i(u) can be written in terms of α_{i−1}(v):
α_i(u) = (1/c) ∫ α_{i−1}(v) C_i(u, v) L_{i−1}(v) dv        (6)
where c denotes another normalizing constant. A symmetric equation can be written for β_i(u).
This suggests a propagation scheme where units represent the probabilities given in
the left hand side of equations 5-6 and updates are based on the right hand side, i.e.
on the activities of neighboring units. Specifically, for a Gaussian generating process
the probabilities can be represented by their mean and variance. Thus denote P_i ∼ N(μ_i, σ_i), and similarly α_i ∼ N(μ^α_i, σ^α_i) and β_i ∼ N(μ^β_i, σ^β_i). Performing the integration in 6 gives a Kalman-filter-like update for the parameters:
μ^α_i ← ( μ^α_{i−1}/σ^α_{i−1} + w_{i−1} y*_{i−1}/σ_D ) / ( 1/σ^α_{i−1} + w_{i−1}/σ_D )        (7-8)
σ^α_i ← σ_R + ( 1/σ^α_{i−1} + w_{i−1}/σ_D )^{−1}        (9)
(the update rules for the parameters of β are analogous)
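A minimal sketch of the forward (α) sweep implied by the updates above is given below, assuming Gaussian messages parameterised by a mean and a variance; the flat-prior initialisation at the left boundary and the function name alpha_pass are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def alpha_pass(y_star, w, sigma_R=0.1, sigma_D=1.0):
    """Forward (alpha) sweep: each unit stores the mean and variance of
    P(y_i | data to its left), following the generative model of
    equations 3-4.  The flat prior at the left boundary is an assumption."""
    N = len(y_star)
    mu = np.zeros(N)
    var = np.zeros(N)
    mu[0], var[0] = 0.0, 1e6
    for i in range(1, N):
        # fuse the previous message with the previous local observation ...
        prec = 1.0 / var[i - 1] + w[i - 1] / sigma_D
        fused = (mu[i - 1] / var[i - 1] + w[i - 1] * y_star[i - 1] / sigma_D) / prec
        # ... then propagate through the random-walk prior of equation 3
        mu[i] = fused
        var[i] = sigma_R + 1.0 / prec
    return mu, var
```

A mirror-image β sweep from the right and a pointwise fusion with the local likelihood would then give the posterior P_i, which is why a single pass in each direction suffices.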
So far we have considered continuous estimation problems but identical issues arise in labeling problems, where the task is to estimate a label L_k which can take on M discrete values. We will denote L_k(m) = 1 if the label takes on value m and zero otherwise. Typically one minimizes functionals of the form:
J(L) = Σ_k Σ_m V_k(m) L_k(m) − λ Σ_k Σ_m L_k(m) L_{k+1}(m)        (10)
Traditional relaxation labeling algorithms minimize this cost functional with updates of the form:
(11)
Again different relaxation labeling algorithms differ in their choice of f. A linear sum followed by a threshold gives the discrete Hopfield network updates, a linear sum followed by a "soft" threshold gives the continuous or mean-field Hopfield updates, and yet another form gives the relaxation labeling algorithm of Rosenfeld et al. (see [3] for a review of relaxation labeling methods).
To derive a BBP algorithm for this case one can again rewrite J as the posterior of a Markov generating process, and calculate P(L_k(m) = 1) for this process.¹ This gives the same expressions as in equations 5-6 with the integral replaced by a linear sum. Since the probabilities here are not Gaussian, the α_i, β_i, P_i will not be represented by their mean and variances, but rather by a vector of length M. Thus the update rule for α_i will be:
α_i(u) = (1/c) Σ_v α_{i−1}(v) C_i(u, v) L_{i−1}(v)        (12)
(and similarly for β.)
¹For certain special cases, knowing P(L_k(m) = 1) is not sufficient for choosing the sequence of labels that minimizes J. In those cases one should do belief revision rather than propagation [6].
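For the discrete case, the same forward sweep can be sketched with length-M message vectors; the particular compatibility matrix used below is an assumed stand-in rather than the one implied by equation 10, and the helper name is hypothetical:

```python
import numpy as np

def alpha_pass_discrete(V, lam=1.0):
    """Forward messages for the labeling case: each unit carries a
    length-M vector that is updated by a matrix-vector product and then
    renormalised.  The compatibility matrix C used here (exp(lam) for
    equal neighbouring labels, 1 otherwise) is an assumed stand-in."""
    N, M = V.shape
    C = np.ones((M, M)) + (np.exp(lam) - 1.0) * np.eye(M)
    L = np.exp(-V)                       # local evidence from the V_k(m) terms
    alpha = np.ones((N, M)) / M
    for k in range(1, N):
        alpha[k] = C @ (alpha[k - 1] * L[k - 1])
        alpha[k] /= alpha[k].sum()       # the normalizing constant c
    return alpha
```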
Figure 2: a. the first frame of a sequence. The hand is translated to the left. b. contour extracted using standard methods.
1.1 Convergence
Equations 5-6 are mathematical identities. Hence, it is possible to show [6] that
after N iterations the activity of units Pi will converge to the correct posteriors,
where N is the maximal distance between any two units in the architecture, and an
iteration refers to one update of all units. Furthermore, we have been able to show
that after n < N iterations, the activity of unit Pi is guaranteed to represent the
probability of the hidden state at location i given all data within distance n.
This guarantee is significant in the light of a distinction made by Marr (1982)
regarding local propagation rules. In a scheme where units only communicate with
their neighbors, there is an obvious limit on how fast the information can reach a
given unit: i.e. after n iterations the unit can only know about information within
distance n. Thus there is a minimal number of iterations required for all data to
reach all units. Marr distinguished between two types of iterations - those that are
needed to allow the information to reach the units, versus those that are used to
refine an estimate based on information that has already arrived. The significance
of the guarantee on Pi is that it shows that BBP only uses the first type of iteration
- iterations are used only to allow more information to reach the units. Once the
information has arrived, Pi represents the correct posterior given that information
and no further iterations are needed to refine the estimate. Moreover, we have been
able to show that propagations schemes that do not propagate probabilities (such
as those in equations 2) will in general not represent the optimal estimate given
information that has already arrived.
To summarize, both traditional relaxation updates as in equation 2 and BBP updates as in equations 7-9 give simple rules for updating a unit's activity based on
local data and activities of neighboring units. However, the fact that BBP updates
are based on the probability calculus guarantees that a unit's activity will be optimal
given information that has already arrived and gives rise to a qualitative difference
between the convergence of these two types of schemes. In the next section, we will
demonstrate this difference in image interpretation problems.
2 Results
Figure 2a shows the first frame of a sequence in which the hand is translated to the
left. Figure 2b shows the bounding contour of the hand extracted using standard
techniques.
2.1 Motion propagation along contours
Local measurements along the contour are insufficient to determine the motion.
Hildreth [2] suggested to overcome the local ambiguity by minimizing the following
Figure 3: a. Local estimate of velocity along the contour. b. Performance of SOR, gradient descent and BBP as a function of time. BBP converges orders of magnitude faster than SOR. c. Motion estimate of SOR after 500 iterations. d. Motion estimate of BBP after 3 iterations.
cost functional:
J(V) = Σ_k ( dx_k v_k + dt_k )² + λ Σ_k || v_{k+1} − v_k ||²        (13)
where dx, dt denote the spatial and temporal image derivatives and v_k denotes the
velocity at point k along the contour. This functional is analogous to the interpolation functional (eq. 1) and the derivation of the relaxation and BBP updates are
also analogous.
Figure 3a shows the estimate of motion based solely on local information. The
estimates are wrong due to the aperture problem. Figure 3b shows the performance
of three propagation schemes: gradient descent, SOR and BBP. Gradient descent
converges so slowly that the improvement in its estimate can not be discerned in the
plot. SOR converges much faster than gradient descent but still has significant error
after 500 iterations. BBP gets the correct estimate after 3 iterations! (Here and in
all subsequent plots an iteration refers to one update of all units in the network).
This is due to the fact that after 3 iterations, the estimate at location k is the
optimal one given data in the interval [k - 3, k + 3]. In this case, there is enough
data in every such interval along the contour to correctly estimate the motion.
Figure 3c shows the estimate produced by SOR after 500 iterations. Even with
simple visual inspection it is evident that the estimate is quite wrong. Figure 3d
shows the (correct) estimate produced by BBP after 3 iterations.
2.2 Direction of figure propagation
The extracted contour in figure 2 bounds a dark and a light region. Direction of
figure (DOF) (e.g. [9]) refers to which of these two regions is figure and which is
ground. A local cue for DOF is convexity - given three neighboring points along
the contour we prefer the DOF that makes the angle defined by those points acute
Figure 4: a. Local estimate of DOF along the contour. b. Performance of Hopfield, gradient descent, relaxation labeling and BBP as a function of time. BBP is the only method that converges to the global minimum. c. DOF estimate of Hopfield net after convergence. d. DOF estimate of BBP after convergence.
rather than obtuse. Figure 4a shows the results of using this local cue on the hand
contour. The local cue is not sufficient.
We can overcome the local ambiguity by minimizing a cost functional that takes into
account the DOF at neighboring points in addition to the local convexity. Denote
by L_k(m) the DOF at point k along the contour and define
J(L) = Σ_k Σ_m V_k(m) L_k(m) − λ Σ_k Σ_m L_k(m) L_{k+1}(m)        (14)
with V_k(m) determined by the acuteness of the angle at location k.
Figure 4b shows the performance of four propagation algorithms on this task: three
traditional relaxation labeling algorithms (MF Hopfield, Rosenfeld et al., constrained
gradient descent) and BBP. All three traditional algorithms converge to a local
minimum, while the BBP converges to the global minimum. Figure 4c shows the
local minimum reached by the Hopfield network and figure 4d shows the correct
solution reached by the BBP algorithm . Recall (section 1.1) that BBP is guaranteed
to converge to the correct posterior given all the data.
2.3
Extensions to 2D
In the previous two examples ambiguity was reduced by combining information from
other points on the same contour . There exist , however, cases when information
should be propagated to all points in the image. Unfortunately, such propagation
problems correspond to Markov Random Field (MRF) generative models, for which
calculation of the posterior cannot be done efficiently. However , Willsky and his
y. Weiss
914
colleagues [4] have recently shown that MRFs can be approximated with hierarchical
or multi-resolution models. In current work, we have been using the multi-resolution
generative model to derive local BBP rules. In this case, the Bayesian beliefs are
propagated between neighboring units in a pyramidal representation of the image.
Although this work is still in preliminary stages, we find encouraging results in
comparison with traditional 2D relaxation schemes.
3 Discussion
The update rules in equations 5-6 differ slightly from those derived by Pearl [6] in that the quantities α, β are conditional probabilities and hence are constantly normalized to sum to unity. Using Pearl's original algorithm for sequences as long as the ones we are considering will lead to messages that become vanishingly small. Likewise our update rules differ slightly from the forward-backward algorithm for HMMs [7] in that ours are based on the assumption that all states are equally likely a priori and hence the updates are symmetric in α and β. Finally, equation 9 can be seen as a variant of a Riccati equation [1].
In addition to these minor notational differences, the context in which we use the
update rules is different . While in HMMs and Kalman Filters, the updates are seen
as interim calculations toward calculating the posterior, we use these updates in a
parallel network of local units and are interested in how the estimates of units in
this network improve as a function of iteration. We have shown that an architecture
that propagates Bayesian beliefs according to the probability calculus yields orders
of magnitude improvements in convergence over traditional schemes that do not
propagate probabilities. Thus image interpretation provides an important example
of a task where it pays to be a Bayesian.
Acknowledgments
I thank E. Adelson, P. Dayan, J. Tenenbaum and G. Galperin for comments on versions of this manuscript; M. I. Jordan for stimulating discussions and for introducing me to Bayesian nets. Supported by a training grant from NIGMS.
References
[1] Arthur Gelb, editor. Applied Optimal Estimation. MIT Press, 1974.
[2] E. C. Hildreth. The Measurement of Visual Motion. MIT Press, 1983.
[3] S.Z. Li. Markov Random Field Modeling in Computer Vision. Springer-Verlag, 1995.
[4] Mark R. Luettgen, W . Clem Karl, and Allan S. Willsky. Efficient multiscale regularization with application to the computation of optical flow. IEEE Transactions on
image processing, 3(1):41-64, 1994.
[5] D. Marr. Vision. H. Freeman and Co., 1982.
[6] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible
Inference. Morgan Kaufmann, 1988.
[7] Lawrence Rabiner and Biing-Hwang Juang. Fundamentals of Speech recognition. PTR
Prentice Hall, 1993.
[8] A. Rosenfeld, R. Hummel, and S. Zucker. Scene labeling by relaxation operations.
IEEE Transactions on Systems, Man and Cybernetics, 6:420-433, 1976.
[9] P. Sajda and L. H. Finkel. Intermediate-level visual representations and the construction of surface perception. Journal of Cognitive Neuroscience, 1994.
[10] Gilbert Strang. Introduction to Applied Mathematics. Wellesley-Cambridge, 1986.
| 1309 |@word version:1 calculus:4 propagate:2 tr:2 denoting:1 ours:1 existing:1 current:1 yet:1 dx:2 written:2 subsequent:1 enables:1 plot:2 update:24 generative:3 cue:3 inspection:1 provides:1 location:6 successive:1 mathematical:1 along:8 become:1 qualitative:1 combine:2 allan:1 multi:2 brain:1 freeman:1 globally:1 td:2 encouraging:1 pf:1 considering:2 revision:1 moreover:1 minimizes:2 guarantee:4 temporal:1 every:2 wrong:2 unit:31 grant:1 local:25 limit:1 solely:1 interpolation:3 suggests:1 co:1 hmms:2 bi:2 acknowledgment:1 refers:3 get:1 cannot:1 prentice:1 context:1 gilbert:1 equivalent:1 maximizing:1 resolution:2 rule:11 array:3 marr:4 his:1 analogous:3 updated:1 transmit:2 construction:1 play:1 us:1 lod:1 velocity:2 approximated:1 recognition:1 updating:2 role:1 calculate:1 region:2 connected:1 cro:1 yk:10 anii:1 convexity:2 ui:1 solving:1 rewrite:1 translated:2 hopfield:6 represented:3 derivation:1 sajda:1 fast:1 labeling:8 choosing:1 dof:8 quite:1 posed:2 plausible:1 otherwise:1 rosenfeld:3 noisy:1 advantage:1 sequence:4 net:3 cai:1 maximal:1 vanishingly:1 neighboring:8 combining:1 realization:1 date:1 riccati:1 convergence:7 juang:1 generating:2 converges:6 illustrate:1 derive:4 depending:3 propagating:5 ij:1 minor:1 eq:1 differ:4 direction:3 correct:6 filter:2 sor:7 yweiss:1 preliminary:1 extension:1 biing:1 considered:1 ground:1 hall:1 lawrence:1 tjk:1 estimation:4 label:3 mit:3 gaussian:2 rather:4 finkel:1 derived:1 improvement:2 vk:1 notational:1 lri:1 sense:1 inference:1 mrfs:1 dayan:1 typically:1 hidden:2 interested:1 issue:1 ill:3 spatial:1 integration:1 special:1 constrained:1 field:3 once:1 f3:2 identical:1 represents:2 adelson:1 intelligent:1 replaced:1 hummel:1 message:1 light:2 tj:5 integral:1 arthur:1 obtuse:1 minimal:1 soft:1 modeling:1 markovian:1 cost:5 introducing:1 fundamental:1 probabilistic:1 again:2 central:1 ambiguity:3 luettgen:1 slowly:1 cognitive:2 derivative:1 li:4 account:1 wk:6 ad:1 stream:1 wellesley:1 reached:2 sort:1 parallel:2 minimize:2 il:1 variance:2 kaufmann:1 efficiently:1 likewise:1 yield:3 correspond:1 rabiner:1 bayesian:12 produced:2 cybernetics:1 reach:4 colleague:1 obvious:1 propagated:4 sampled:1 judea:1 massachusetts:1 recall:1 formalize:1 i1p:1 manuscript:1 dt:1 wei:4 discerned:1 done:1 furthermore:1 stage:1 hand:6 multiscale:1 propagation:11 hildreth:2 hwang:1 usa:1 normalized:1 true:1 hence:3 regularization:1 symmetric:2 ll:1 essence:1 ptr:1 arrived:4 evident:1 demonstrate:1 motion:7 interpreting:4 reasoning:1 image:12 recently:1 functional:7 rl:1 discussed:1 interpretation:2 measurement:3 significant:2 cambridge:2 ai:3 grid:1 mathematics:1 similarly:2 stable:1 zucker:1 acute:1 surface:1 posterior:9 certain:1 verlag:1 yi:8 seen:2 morgan:1 minimum:6 converge:4 determine:1 ud:2 seidel:1 faster:3 calculation:2 long:1 equally:1 mrf:1 variant:1 vision:4 iteration:18 represent:5 addition:2 interval:2 pyramidal:1 comment:1 flow:1 jordan:1 intermediate:1 enough:1 architecture:6 f3i:3 regarding:1 knowing:1 expression:1 speech:1 ifor:1 dark:1 locally:1 tenenbaum:1 reduced:1 exist:1 neuroscience:1 correctly:1 discrete:2 four:1 threshold:2 backward:1 v1:3 relaxation:16 sum:4 angle:2 communicate:1 prefer:1 bound:1 pay:1 followed:2 guaranteed:2 quadratic:1 refine:2 discretely:1 activity:14 scene:2 ri:1 interpolated:2 performing:1 optical:1 interim:1 uf:1 according:3 across:1 slightly:2 ur:1 wi:1 unity:1 equation:10 needed:2 know:1 available:1 operation:1 hierarchical:1 distinguished:1 yair:1 original:2 denotes:3 cf:1 calculating:1 already:4 
quantity:1 traditional:11 gradient:7 distance:3 thank:1 me:1 toward:1 willsky:2 assuming:1 kalman:2 length:1 insufficient:1 ratio:1 minimizing:3 unfortunately:2 rise:1 lil:1 galperin:1 markov:4 descent:6 gelb:1 frame:2 interacting:1 required:1 distinction:1 pearl:3 able:2 suggested:2 below:1 perception:1 wiyi:1 summarize:1 reliable:1 belief:11 scheme:7 improve:1 technology:1 lk:3 review:1 strang:1 prototypical:2 proven:1 versus:1 sufficient:2 propagates:1 editor:1 pi:11 karl:1 supported:1 side:2 allow:2 institute:1 neighbor:2 sparse:1 overcome:2 avoids:1 contour:13 author:1 made:1 forward:1 far:1 transaction:2 functionals:1 aperture:1 global:3 un:1 iterative:1 continuous:2 interpolating:1 significance:1 dense:3 bounding:1 arise:1 slow:1 lc:1 theme:1 third:1 normalizing:2 ofthis:1 ci:2 magnitude:3 mf:1 nigms:1 likely:1 visual:3 springer:1 corresponds:3 dt1:1 constantly:1 extracted:3 ma:1 stimulating:1 yhl:1 conditional:2 identity:1 man:1 typical:1 specifically:1 determined:1 bbp:22 gauss:1 e10:1 mark:1 dept:1 |
339 | 131 | 384
MODELING SMALL OSCILLATING
BIOLOGICAL NETWORKS IN ANALOG VLSI
Sylvie Ryckebusch, James M. Bower, and Carver Mead
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
We have used analog VLSI technology to model a class of small oscillating biological neural circuits known as central pattern generators (CPG). These circuits generate rhythmic patterns of activity
which drive locomotor behaviour in the animal. We have designed,
fabricated, and tested a model neuron circuit which relies on many
of the same mechanisms as a biological central pattern generator
neuron, such as delays and internal feedback. We show that this
neuron can be used to build several small circuits based on known
biological CPG circuits, and that these circuits produce patterns of
output which are very similar to the observed biological patterns.
To date, researchers in applied neural networks have tended to focus on mammalian systems as the primary source of potentially useful biological information.
However, invertebrate systems may represent a source of ideas in many ways more
appropriate, given current levels of engineering sophistication in building neural-like
systems, and given the state of biological understanding of mammalian circuits. Invertebrate systems are based on orders of magnitude smaller numbers of neurons
than are mammalian systems. The networks we will consider here, for example,
are composed of about a dozen neurons, which is well within the demonstrated
capabilities of current hardware fabrication techniques. Furthermore, since much
more detailed structural information is available about these systems than for most
systems in higher animals, insights can be guided by real information rather than by
guesswork. Finally, even though they are constructed of small numbers of neurons,
these networks have numerous interesting and potentially even useful properties.
CENTRAL PATTERN GENERATORS
Of all the invertebrate neural networks currently being investigated by neurobiologists, the class of networks known as central pattern generators (CPGs) may
be especially worthy of attention. A CPG is responsible for generating oscillatory
neural activity that governs specific patterns of motor output, and can generate
its pattern of activity when isolated from its normal neuronal inputs. This property, which greatly facilitates experiments, has enabled biologists to describe several
CPGs in detail at the cellular and synaptic level. These networks have been found
in all animals, but have been extensively studied in invertebrates [Selverston, 1985].
We chose to model several small CPG networks using analog VLSI technology. Our
model differs from most computer simulation models of biological networks [Wilson
and Bower, in press] in that we did not attempt to model the details of the individual
ionic currents, nor did we attempt to model each known connection in the networks.
Rather, our aim was to determine the basic functionality of a set of CPG networks
by modeling them as the minimum set of connections required to reproduce output
qualitatively similar to that produced by the real network under certain conditions.
MODELING CPG NEURONS
The basic building block for our model is a general purpose CPG neuron circuit.
This circuit, shown in Figure 1, is our model for a typical neuron found in central
pattern generators, and contains some of the essential elements of real biological
neurons. Like real neurons, this model integrates current and uses positive feedback
to output a train of pulses, or action potentials, whose frequency depends on the
magnitude of the current input. The part of the circuit which generates these pulses
is shown in Figure 2a [Mead, 1989].
The second element in the CPG neuron circuit is the synapse. In Figure 1, each pair
of transistors functions as a synapse. The p-well transistors are excitatory synapses,
whereas the n-well transistors are inhibitory synapses. One of the transistors in the
pair sets the strength of the synapse, while the other transistor is the input of the
synapse. Each CPG neuron has four different synapses.
The third element of our model CPG neuron involves temporal delays. Delays are
an essential element in the function of CPGs, and biology has evolved many different
mechanisms to introduce delays into neural networks. The membrane capacitance
of the cell body, different rates of chemical reactions, and axonal transmission are
just a few of the mechanisms which have time constants associated with them. In
our model we have included synaptic delay as the principle source of delay in the
network. This is modeled as an RC delay, implemented by the follower-integrator
circuit shown in Figure 2b [Mead, 1989]. The time constant of the delay is a function
of the conductance of the amplifier, set by the bias G. A multiple time constant
delay line is formed by cascading several of these elements. Our neuron circuit uses
a delay line with three time constants. The synapses which are before the delay
element are slow synapses, whereas the undelayed synapses are fa.st synapses.
We fabricated the circuit shown in Figure 1 using CMOS, VLSI technology. Several
of these circuits were put on each chip, with all of the inputs and controls going
out to pads, so that these cells could be externally connected to form the network
of interest.
385
386
Ryckebusch, Bower, and Mead
slow
excitation
-t
Yout.
pulse length
G
Figure 1. The CPG neuron circuit.
Yout.
r
-111
0
I- Pulse
Length
~
(a)
-QJ(b)
Figure 2. (a). The neuron spike-generating circuit. (b). The follower-integrater
circuit. Each delay box 0 contains a delay line formed by three follower-integrater
circuits.
The Endogenous Bursting Neuron
One type of cell which has been found to play an important role in many oscillatory circuits is the endogenous bursting neuron. This type of cell has an intrinsic
oscillatory membrane potential, enabling it to produce bursts of action potentials
at rhythmic intervals. These cells have been shown to act both as external "pacemakers" which set the rhythm for the CPG, or as an integral part of a central
pattern generator. Figure 3a shows the output from a biological endogenous bursting neuron. Figure 3b demonstrates how we can configure our CPG neuron to be
an endogenous bursting neuron. The delay element in the cell must have three time
constants in order for this circuit to oscillate stably. Note that in the circuit, the
Modeling Small Oscillating Biological Networks
cell has internal negative feedback. Since real neurons don't actually make synaptic
connections onto themselves, this connection should be thought of as representing
an internal molecular or ionic mechanism which results in feedback within the cell.
Figure 3. (a). The output from the AB cell in the lobster stomatogastric ganglion
CPG [Eisen and Marder, 1982]. This cell is known to burst endogenously. (b).
The CPG neuron circuit configured as an endogenous bursting neuron and (c) the
output from this circuit.
Postinhibitory Rebound
A neuron configured to be an endogenous burster also exhibits another property
common to many neurons, including many CPG neurons. This property, illustrated
in Figures 4a and 4b, is known as postinhibitory rebound (PIR). Neurons with this
property display increased excitation for a certain period of time following the
release of an inhibitory influence. This property is a useful one for central pattern
generator neurons to have, because it enables patterns of oscillations to be reset
following the release of inhibition.
387
388
Ryckebusch, Bower and Mead
-
(a)
(b)
...
"
.I
..
?
'
LA
t
4
????
I
I
N'
........
(c)
Figure 4. (a) The output of a ganglion cell of the mudpuppy retina exhibiting
postinhibitory rebound [Miller and Dacheux, 19761. The bar under the trace indicates the duration of the inhibition. (b) To exhibit PIR in the CPG neuron circuit,
we inhibit ,the cell with the square pulse shown in (c). When the inhibition is
released, the circuit outputs a brief burst of pulses.
MODELING CENTRAL PATTERN GENERATORS
The Lobster Stomatogastric Ganglion
The stomatogastric ganglion is a CPG which controls the movement of the teeth
in the lobster's stomach. This network is relatively complex, and we have only
modeled the relationships between two of the neurons in the CPG (the PD and LP
cells) which have a kind of interaction found in many CPGs known as reciprocal
inhibition (Figure Sa). In this case, each cell inhibits the other, which produces a
pattern of output in which the cells fire alternatively (Figure Sb). Note that in the
absence of external input, a mechanism such as postinhibitory rebound must exist
in order for a cell to begin firing again once it has been released from inhibition.
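The alternation produced by reciprocal inhibition plus a recovery mechanism can be illustrated with a toy rate model (a conceptual sketch only, not a model of the fabricated circuit; all parameter values are made up for illustration):

```python
import numpy as np

def half_center(n_steps=600, dt=1.0, tau=20.0, tau_a=150.0, drive=1.0, w_inh=2.0):
    """Toy rate model of two mutually inhibiting cells with a slow
    adaptation variable standing in for the recovery/PIR mechanism."""
    r = np.array([0.6, 0.0])          # firing rates of the two cells
    a = np.zeros(2)                   # slow adaptation
    trace = []
    for _ in range(n_steps):
        inh = w_inh * r[::-1]         # each cell is inhibited by the other
        r += (dt / tau) * (-r + np.maximum(drive - inh - a, 0.0))
        a += (dt / tau_a) * (-a + 1.5 * r)
        trace.append(r.copy())
    return np.array(trace)            # the two columns alternate in antiphase
```

In this sketch the slow adaptation plays the role that PIR and synaptic fatigue play in the biological and silicon circuits: the active cell slowly weakens, the suppressed cell is released, and the two rates alternate.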
Figure 5. (a) Output from the PD and LP cells in the lobster stomatogastric
ganglion [Miller and Selverston, 1985]. (c) and (d) demonstrate reciprocal inhibition
with two CPG neuron circuits.
The Locust Flight CPG
A CPG has been shown to play an important role in producing the motor pattern
for flight in the locust [Robertson and Pearson, 1985]. Two of the cells in the CPG,
the 301 and 501 cells, fire bursts of action potentials as shown in Figure 6a. The
301 cell is active when the wings of the locust are elevated, whereas the 501 cell is
active when the wings are depressed. The phase relationship between the two cells
is very similar to the reciprocal inhibition pattern just discussed, but the circuit
that produces this pattern is quite different. The connections between these two
cells are shown in Figure 6b. The 301 cell makes a delayed excitatory connection
onto the 501 cell, and the 501 cell makes fast inhibitory contact with the 301 cell.
Therefore, the 301 cell begins to fire, and after some delay, the 501 cell is activated.
When the 501 cell begins to fire, it immediately shuts off the 301 cell. Since the 501
cell is no longer receiving excitatory input, it will eventually stop firing, releasing
the 301 cell from inhibition. The cycle then repeats. This same circuit has been
reproduced with our model in Figures 6c and 6d.
Figure 6. (a) The 301 and 501 cells in the locust flight CPG [Robertson and
Pearson, 1985]. (b) Simultaneous intracellular recordings of 301 and 501 during
flight. (c) The model circuit and (d) its output.
The Tritonia Swim CPG
One of the best studied central pattern generators is the CPG which controls the
swimming in the small marine mollusc Tritonia. This CPG was studied in great
detail by Peter Getting and his colleagues at the University of Iowa, and it is one
of the few biological neural networks for which most of the connections and the
synaptic parameters are known in detail. Tritonia swims by making alternating
dorsal and ventral flexions. The dorsal and ventral motor neurons are innervated
by the DSI and VSI cells, respectively. Shown in Figure 7a and 7b is a simplified
schematic diagram for the network and the corresponding output. The DSI and VSI
cells fire out of phase, which is consistent with the alternating nature of the animal's
swimming motion. The basic circuit consists of reciprocal inhibition between DSI
and VSI paralleled by delayed excitation via the C2 cell. The DSI and VSI cells
fire out of phase, and the DSI and C2 cells fire in phase. Swimming is initiated by
sensory stimuli which feed into DSI and cause it to begin to fire a burst of impulses.
DSI inhibits VSI, and at the same time excites C2. C2 has excitatory synapses
on VSI; however, the initial response of VSI neurons is delayed. VSI then fires,
during which there is inhibition by VSI of C2 and DSI. During this period, VSI no
longer receives excitatory input from C2, and hence the VSI firing rate declines;
DSI is therefore released from inhibition, and is ready to fire again to initiate a
new cycle. Figure 7c and 8 show the model circuit which is identical to the circuit
shown in Figure 7a, and the output from this circuit. Note that although the model
output closely resembles the biological data, there are small differences in the phase
relationships between the cells which can be accounted for by taking into account
other connections and delays in the circuit not currently incorporated in our model.
Figure 7. (a) Simplified schematic diagram of the Tritonia CPG (which actually has 14 cells) and (b) output from the three types of cells in the circuit. (c) The model circuit.
Figure 8. Output from the circuit shown in Figure 7c.
CONCLUSIONS
One may ask why it is interesting to model these systems in analog VLSI, or, for
that matter, why it is interesting to model invertebrate networks altogether. Analog
VLSI is a very nice medium for this type of modeling, because in addition to being
compact, it runs in real time, eliminating the need to wait hours to get the results of
a simulation. In addition, the electronic circuits rely on the same physical principles
as neural processes (including gain, delays, and feedback), allowing us to exploit the
inherent properties of the medium in which we work rather than having to explicitly
model them as in a digital simulation.
Like all models, we hope that this work will help us learn something about the
systems we are studying. But in addition, although invertebrate neural networks are
relatively simple and have small numbers of cells, the behaviours of these networks
and animals can be fairly complex. At the same time, their small size allows us to
understand how they are engineered in detail. Accordingly, modeling these networks
allows us to study a well engineered system at the component level-a level of
modeling not yet possible for more complex mammalian systems, for which detailed
structural information is scarce.
Acknowledgments
This work relies on information supplied by the hard work of many experimentalists.
We would especially like to acknowledge the effort and dedication of Peter Getting
who devoted 12 years to understanding the organization of the Tritonia network of
14 neurons. We also thank Hewlett-Packard for computing support, and DARPA
and MOSIS for chip fabrication. This work was sponsored by the Office of Naval
Research, the System Development Foundation, and the NSF (EET-8700064 to
J.B.).
References
Eisen, Judith S. and Marder, Eve (1982). Mechanisms underlying pattern generation in lobster stomatogastric ganglion as determined by selective inactivation of
identified neurons. III. Synaptic connections of electrically coupled pyloric neurons.
J. Neurophysiol. 48:1392-1415.
Getting, Peter A. and Dekin, Michael S. (1985). Tritonia swimming: A model
system for integration within rhythmic motor systems. In Allen I. Selverston (Ed.),
Model Neural Networks and Behavior, New York, NY: Plenum Press.
Mead, Carver A. (in press). Analog VLSI and Neural Systems. Reading, MA:
Addison-Wesley.
Miller, John P. and Selverston, Allen I. (1985). Neural Mechanisms for the production of the lobster pyloric motor pattern. In Allen I. Selverston (Ed.), Model Neural
Networks and Behavior, New York, NY: Plenum Press.
Miller, R. F. and Dacheux, R. F. (1976). Synaptic organization and ionic basis of
on and off channels in mudpuppy retina. J. Gen. Physiol. 67:639-690.
Robertson, R. M. and Pearson, K. G. (1985). Neural circuits in the flight system
of the locust. J. Neurophysiol. 53:110-128.
Selverston, Allen I. and Moulins, Maurice (1985). Oscillatory neural networks.
Ann. Rev. Physiol. 47:29-48.
Wilson, M. and Bower, J. M. (in press). Simulation oflarge scale neuronal networks.
In C. Koch and I. Segev (Eds.), Methods in Neuronal Modeling: From Synapses to
Networks, Cambridge, MA: MIT Press.
| 131 |@word eliminating:1 pulse:6 simulation:4 initial:1 contains:2 reaction:1 current:5 yet:1 follower:3 must:2 john:1 physiol:2 enables:1 motor:5 designed:1 sponsored:1 pacemaker:1 accordingly:1 reciprocal:4 marine:1 judith:1 cpg:28 rc:1 burst:5 constructed:1 c2:9 consists:1 introduce:1 behavior:2 themselves:1 nor:1 integrator:1 begin:4 underlying:1 circuit:40 erty:1 medium:2 evolved:1 kind:1 selverston:6 shuts:1 fabricated:2 temporal:1 act:1 demonstrates:1 control:3 producing:1 positive:1 before:1 engineering:1 mead:8 initiated:1 firing:3 chose:1 studied:3 bursting:5 resembles:1 locust:5 acknowledgment:1 responsible:1 block:1 differs:1 thought:1 wait:1 get:1 onto:2 put:1 influence:1 demonstrated:1 attention:1 duration:1 immediately:1 stomatogastric:5 insight:1 cascading:1 his:1 enabled:1 plenum:2 play:2 us:2 element:7 robertson:3 mammalian:4 observed:1 role:2 ft:1 connected:1 cycle:2 movement:1 inhibit:1 sol:1 pd:2 basis:1 neurophysiol:2 darpa:1 chip:2 train:1 fast:1 describe:1 pearson:3 whose:1 quite:1 mudpuppy:2 reproduced:1 transistor:5 interaction:1 reset:1 date:1 osi:1 gen:1 yout:2 getting:3 transmission:1 oscillating:7 produce:4 generating:2 cmos:1 help:1 excites:1 sa:1 implemented:1 involves:1 exhibiting:1 guided:1 closely:1 functionality:1 engineered:2 behaviour:2 biological:17 koch:1 normal:1 great:1 ventral:2 released:3 purpose:1 vsi:13 integrates:1 currently:2 hope:1 mit:1 aim:1 rather:3 inactivation:1 moulins:1 wilson:2 office:1 release:2 focus:1 naval:1 indicates:1 greatly:1 sb:1 pad:1 pasadena:1 vlsi:7 reproduce:1 going:1 selective:1 development:1 animal:5 integration:1 fairly:1 biologist:1 once:1 having:1 biology:1 identical:1 rebound:4 oflarge:1 stimulus:1 inherent:1 few:2 retina:2 composed:1 individual:1 delayed:3 phase:5 fire:10 attempt:2 amplifier:1 conductance:1 ab:1 interest:1 organization:2 hewlett:1 configure:1 activated:1 devoted:1 integral:1 carver:2 isolated:1 increased:1 modeling:13 delay:17 fabrication:2 st:1 off:2 receiving:1 michael:1 again:2 central:9 external:2 maurice:1 wing:2 rrn:1 account:1 potential:4 sec:2 matter:1 configured:2 explicitly:1 mv:2 depends:1 endogenous:6 capability:1 formed:2 square:1 tritonia:6 who:1 miller:4 produced:1 ionic:3 drive:1 researcher:1 oscillatory:4 synapsis:9 simultaneous:1 tended:1 synaptic:6 ed:3 colleague:1 frequency:1 lobster:6 james:1 associated:1 dekin:1 stop:1 gain:1 stomach:1 ask:1 actually:2 feed:1 wesley:1 higher:1 response:1 synapse:4 though:1 box:1 furthermore:1 just:2 flight:4 receives:1 stably:1 impulse:1 building:2 postinhibitory:4 hence:1 chemical:1 alternating:2 illustrated:1 during:3 excitation:3 rhythm:1 m:1 demonstrate:1 motion:1 allen:4 common:1 physical:1 analog:6 elevated:1 discussed:1 cambridge:1 guesswork:1 depressed:1 ute:1 longer:2 locomotor:1 inhibition:11 something:1 certain:2 minimum:1 determine:1 period:2 ight:1 multiple:1 molecular:1 schematic:2 basic:3 experimentalists:1 represent:1 cell:45 whereas:3 addition:3 interval:1 diagram:2 source:3 releasing:1 recording:1 facilitates:1 structural:2 axonal:1 eve:1 iii:1 identified:1 idea:1 decline:1 qj:1 pir:2 sylvie:1 swim:2 effort:1 peter:3 york:2 oscillate:1 cause:1 action:3 useful:3 detailed:2 governs:1 extensively:1 hardware:1 instit:1 generate:2 supplied:1 exist:1 nsf:1 inhibitory:3 four:1 mosis:1 swimming:4 year:1 run:1 electronic:1 neurobiologist:1 oscillation:1 display:1 activity:3 strength:1 marder:2 segev:1 invertebrate:6 generates:1 flexion:1 relatively:2 inhibits:2 electrically:1 membrane:2 smaller:1 lp:2 rev:1 making:1 eventually:1 
mechanism:7 initiate:1 addison:1 studying:1 available:1 appropriate:1 innervated:1 altogether:1 exploit:1 build:1 especially:2 contact:1 capacitance:1 spike:1 fa:1 ryckebusch:5 primary:1 exhibit:2 thank:1 cellular:1 length:2 modeled:2 relationship:3 potentially:2 trace:1 negative:1 allowing:1 neuron:38 enabling:1 acknowledge:1 incorporated:1 worthy:1 mollusc:1 pair:2 required:1 connection:9 california:1 burster:1 hour:1 bar:1 pattern:21 reading:1 including:2 packard:1 endogenously:1 rely:1 scarce:1 representing:1 technology:4 brief:1 numerous:1 pyloric:2 ready:1 coupled:1 nice:1 understanding:2 dsi:11 interesting:3 generation:1 dedication:1 generator:9 digital:1 foundation:1 iowa:1 teeth:1 consistent:1 principle:2 production:1 excitatory:5 accounted:1 repeat:1 cpgs:4 undelayed:1 bias:1 understand:1 taking:1 rhythmic:3 feedback:5 eisen:2 sensory:1 qualitatively:1 simplified:2 compact:1 eet:1 active:2 alternatively:1 don:1 why:2 nature:1 learn:1 channel:1 ca:1 investigated:1 complex:3 did:2 intracellular:1 body:1 neuronal:3 slow:2 ny:2 bower:7 third:1 dozen:1 externally:1 specific:1 dl:1 essential:2 intrinsic:1 magnitude:2 sophistication:1 ganglion:6 relies:2 ma:2 prop:1 ann:1 absence:1 hard:1 included:1 typical:1 determined:1 la:1 internal:3 support:1 dorsal:2 paralleled:1 tested:1 |
340 | 1,310 | Salient Contour Extraction by Temporal Binding
in a Cortically-Based Network
Shih-Cheng Yen and Leif H. Finkel
Department of Bioengineering and
Institute of Neurological Sciences
University of Pennsylvania
Philadelphia, PA 19104, U. S. A.
syen@jupiter.seas.upenn.edu
leif@jupiter.seas.upenn.edu
Abstract
It has been suggested that long-range intrinsic connections in striate cortex may
play a role in contour extraction (Gilbert et al., 1996). A number of recent physiological and psychophysical studies have examined the possible role of long range connections in the modulation of contrast detection thresholds (Polat and Sagi, 1993, 1994; Kapadia et al., 1995; Kovacs and Julesz, 1994) and various pre-attentive detection tasks (Kovacs and Julesz, 1993; Field et al., 1993). We
have developed a network architecture based on the anatomical connectivity of
striate cortex, as well as the temporal dynamics of neuronal processing, that is
able to reproduce the observed experimental results. The network has been tested
on real images and has applications in terms of identifying salient contours in
automatic image processing systems.
1 INTRODUCTION
Vision is an active process, and one of the earliest, preattentive actions in visual
processing is the identification of the salient contours in a scene. We propose that this
process depends upon two properties of striate cortex: the pattern of horizontal
connections between orientation columns, and temporal synchronization of cell responses.
In particular, we propose that perceptual salience is directly related to the degree of cell
synchronization.
We present results of network simulations that account for recent physiological and
psychophysical "pop-out" experiments, and which successfully extract salient contours
from real images.
2 MODEL ARCHITECTURE
Linear quadrature steerable filter pyramids (Freeman and Adelson, 1991) are used to model
the response characteristics of cells in primary visual cortex. Steerable filters are
computationally efficient as they allow the energy at any orientation and spatial frequency
to be calculated from the responses of a set of basis filters . The fourth derivative of a
Gaussian and its Hilbert transform were used as the filter kernels to approximate the shape
of the receptive fields of simple cells.
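The oriented-energy computation itself is simple to sketch. The example below uses a Gabor quadrature pair as an assumed stand-in for the G4/H4 steerable basis actually used in the model, purely to show the even² + odd² energy step; kernel size and parameters are illustrative:

```python
import numpy as np
from scipy.signal import convolve2d

def oriented_energy(img, theta, freq=0.1, sigma=4.0, size=21):
    """Quadrature-pair oriented energy at one orientation (Gabor stand-in
    for the steerable G4/H4 filters used in the paper)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * freq * u)   # symmetric (cosine) kernel
    odd = env * np.sin(2 * np.pi * freq * u)    # antisymmetric (sine) kernel
    e = convolve2d(img, even, mode='same')
    o = convolve2d(img, odd, mode='same')
    return e**2 + o**2                          # orientation energy map
```

In the steerable-pyramid setting, the responses at an arbitrary orientation would instead be interpolated from a fixed set of basis filter outputs, which is what makes the scheme computationally efficient.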
The model cells are interconnected by long-range horizontal connections in a pattern similar to the co-circular connectivity pattern of Parent and Zucker (1989), as well as the "association field" proposed by Field et al. (1993). For each cell with preferred
orientation, θ, the orientations, φ, of the pre-synaptic cell at position (i, j) relative to the post-synaptic cell, are specified by:
φ(θ, i, j) = 2 tan⁻¹(j/i) − θ
(see Figure 1a). These excitatory connections are confined to two regions, one flaring out
along the axis of orientation of the cell (co-axial), and another confined to a narrow zone
extending orthogonally to the axis of orientation (trans-axial). The fan-out of the co-axial
connections is limited to low curvature deviations from the orientation axis while the
trans-axial connections are limited to a narrow region orthogonal to the cell's orientation
axis. These constraints are expressed as:
Γ(θ, i, j) =
    1, if | tan⁻¹(j/i) − θ | < κ
    1, if | tan⁻¹(j/i) − θ − π/2 | < ε
    0, otherwise.
where κ represents the maximum angular deviation from the orientation axis of the post-synaptic cell and ε represents the maximum angular deviation from the orthogonal axis of
the post-synaptic cell. Connection weights decrease for positions with increasing angular
deviation from the orientation axis of the cell, as well as positions with increasing
distance, in agreement with the physiological and psychophysical findings. Figure 1b
illustrates the connectivity pattern.
There is physiological, anatomical and
psychophysical evidence consistent with the existence of both sets of connections (Nelson
and Frost, 1985; Kapadia et al., 1995; Rockland and Lund, 1983; Lund et al., 1985;
Fitzpatrick, 1996; Polat and Sagi, 1993, 1994).
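A sketch of the resulting connection field is shown below; the angular limits, and the omission of the distance- and curvature-dependent weight falloff described above, are simplifications for illustration, and the function name is hypothetical:

```python
import numpy as np

def connection_field(theta=0.0, radius=10, kappa=np.pi / 6, eps=np.pi / 12):
    """For every offset (i, j) from a cell with preferred orientation theta,
    keep the offset if it lies in the co-axial or trans-axial zone and
    record the preferred pre-synaptic orientation phi = 2*atan(j/i) - theta."""
    field = {}
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            if i == 0 and j == 0:
                continue
            axis_angle = np.arctan2(j, i)            # direction of the offset
            dev = abs(((axis_angle - theta) + np.pi / 2) % np.pi - np.pi / 2)
            co_axial = dev < kappa
            trans_axial = abs(dev - np.pi / 2) < eps
            if co_axial or trans_axial:
                field[(i, j)] = 2 * np.arctan2(j, i) - theta
    return field
```

A full implementation would additionally scale each connection weight down with distance and with angular deviation from the orientation axis, as the text describes.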
Cells that are facilitated by the connections inhibit neighboring cells that lie outside the
facilitatory zones. The magnitude of the inhibition is such that only cells receiving
strong support are able to remain active. This is consistent with the physiological
findings of Nelson and Frost (1985) and Kapadia et al. (1995) as well as the intra-cellular
recordings of Weliky et al. (1995) which show EPSPs followed by IPSPs when the long-distance connections were stimulated. This inhibition is thought to occur through disynaptic pathways.
In the model, cells are assumed to enter a "bursting mode" in which they synchronize
with other bursting cells. In cortex, bursting has been associated with supragranular
"chattering cells" (Gray and McCormick (1996). In the model, cells that enter the bursting
Salient Contour Extraction by Temporal Binding in a Cortically-based Network
917
mode are modeled as homogeneous coupled neural oscillators with a common fundamental
frequency but different phases (Kopell and Ermentrout, 1986; Baldi and Meir, 1990). The
p~ase of each oscillator is modulated by the phase of the oscillators to which it is
coupled. Oscillators are coupled only to other oscillators with which they have strong,
reciprocal, connections. The oscillators synchronize using a simple phase averaging rule:
w .. 8 .(1 -1)
8 j (/)=
e
L
IJ
J
LWij
,
W ii
=1
where
represents the phase of the oscillator and W U represents the weight of the
connection between oscillator i and j. The oscillators synchronize iteratively with
synchronization defined as the following condition:
18;Ct)-8/t)1 < 8, i.j E C, t < t max
where C represents all the coupled oscillators on the same contour, 8 represents the
maximum phase difference between oscillators, and Imax represents the maximum number
of time steps the oscillators are allowed to synchronize. The salience of the chain is then
represented by the sum of the activities of all the synchronized elements in the group, C.
The chain with the highest salience is chosen as the output of the network. This allows
us to compare the output of the model to psychophysical results on contour extraction.
It has been postulated that the 40 Hz oscillations observed in the cortex may be
responsible for perceptual binding across different cortical regions (Singer and Gray,
1995). Recent studies have questioned the functional significance and even the existence
of these oscillations (Ghose and Freeman, 1992; Bair el ai., 1994). We use neural
oscillators only as a simple functional means of computing synchronization and make no
assumption regarding their possible role in cortex.
Figure 1: a) Co-circularity constraint. b) Connectivity pattern of a horizontally oriented cell. Length of line indicates connection strength.
3 RESULTS
This model was tested by simulating a number of psychophysical experiments. A
number of model parameters remain to be defined through further physiological and
psychophysical experiments, thus we only attempt a qualitative fit to the data. All
simulations were conducted with the same parameter set.
3.1 EXTRACTION OF SALIENT CONTOURS
Using the same methods as Field et al. (1993), we tested the model's ability to extract
contours embedded in noise (see Figure 2). Pairs of stimulus arrays were presented to the
network, one array contains a contour, the other contains only randomly oriented
elements. The network determines the stimulus containing the contour with the highest
salience. Network performance was measured by computing the percentage of correct
detection. The network was tested on a range of stimulus variables governing the target
contour: 1) the angle, β, between elements on a contour, 2) the angle between elements on a contour but with the elements aligned orthogonal to the contour passing through them, 3) the angle between elements with a random offset angle, ±α, with respect to the
contour passing through them, and 4) average separation of the elements. 500
simulations were run at each data point. The results are shown in Figure 2. The model
shows good qualitative agreement with the psychophysical data. When the elements are
aligned, the performance of the network is mostly modulated by the co-axial connections,
whereas when the elements are oriented orthogonal to the contour, the trans-axial
connections mediate performance. Both the model and human subjects are adversely
affected as the weights between consecutive elements decrease in strength. This reduces
the length of the contour and thus the saliency of the stimulus.
[Figure 2 panels: a) elements aligned parallel to path; b) elements aligned orthogonal to path; c) elements aligned parallel to path with α = 30°; d) elements at 0.9° separation. Vertical axes: percent correct; horizontal axes: angle (deg); curves compare subjects AH and DJF with the model.]
Figure 2: Simulation results are compared to the data from 2 subjects (AH, DJF) in Field et al. (1993). Stimuli consisted of 256 randomly oriented Gabor patches with 12 elements aligned to form a contour. Each data point represents results for 500 simulations.
3.2 EFFECTS OF CONTOUR CLOSURE
In a series of experiments using similar stimuli to Field et al. (1993), Kovacs and Julesz
(1993) found that closed contours are much more salient than open contours. They
reported that when the inter-element spacing between all elements was gradually increased,
the maximum inter-element separation for detecting closed contours, Δ_c (defined at 75% performance), is higher than that for open contours, Δ_o. In addition, they showed that when elements spaced at Δ_o are added to a "jagged" (open) contour, the saliency of the contour increases monotonically, but when elements spaced at Δ_c are added to a circular contour, the saliency does not change until the last element is added and the contour becomes closed. In fact, at Δ_c, the contour is not salient until it is closed, at which point
it suddenly "pops-out" (see Figure 3c). This finding places a strong constraint on the
computation of saliency in visual perception.
Interestingly, it has been shown that synchronization in a chain of coupled neural
oscillators is enhanced when the chain is closed (Kopell and Ermentrout, 1986;
Ermentrout, 1985; Somers and Kopell, 1993). This property seems to be related to the
differences in boundary effects on synchronization between open and closed chains and
appears to hold across different families of coupled oscillators. It has also been shown
that synchronization is dependent on the coupling between oscillators -- the stronger the
coupling, the better the synchronization, both in terms of speed and coherence (Somers
and Kopell, 1993; Wang, 1995). We believe these findings may apply to the psychophysical results.
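The boundary effect can be illustrated by coupling the same number of phase-averaging oscillators into an open chain or a closed ring; this is illustrative code, not the paper's own simulation, and the claim about relative speed follows from the stronger coupling structure of the ring:

```python
import numpy as np

def chain_weights(n, closed, w=1.0):
    """Nearest-neighbour coupling for n oscillators; a closed chain adds
    the wrap-around link between the first and last element."""
    W = np.zeros((n, n))
    for k in range(n - 1):
        W[k, k + 1] = W[k + 1, k] = w
    if closed:
        W[0, n - 1] = W[n - 1, 0] = w
    return W

def steps_to_sync(W, theta, delta=0.05, t_max=500):
    """Phase-averaging iterations until the spread of phases falls below delta."""
    W = W + np.eye(len(theta))                  # w_ii = 1
    for t in range(t_max):
        theta = (W @ theta) / W.sum(axis=1)
        if theta.max() - theta.min() < delta:
            return t
    return t_max

rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 1.0, size=12)
print("open:  ", steps_to_sync(chain_weights(12, closed=False), phases.copy()))
print("closed:", steps_to_sync(chain_weights(12, closed=True), phases.copy()))
```

The closed ring typically reaches the criterion in fewer steps than the open chain of the same length and coupling strength, mirroring the closure advantage discussed above.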
As in Kovacs and Julesz (1993), the network is presented with two stimuli, one
containing a contour and the other made up of all randomly oriented elements. The
network picks the stimulus containing the synchronized contour with the higher salience.
In separate trials, the threshold for maximum separation between elements was determined
for open and closed contours. The ratio of the separation of the background elements to
the that of elements on a closed curve, <Pc, was found to be 0.6 (which is similar to the
threshold of 0.65 recently reported by Kovacs et al., 1996), whereas the ratio for open
contours, <Po, was found to be 0.9. (11 is the threshold separation of contour elements, <P,
at a particular background separation). We then examined the changes in salience for open
and closed contours. The performance of the network was measured as additional elements
were added to an initial short contour of elements. The results are shown in Figure 3b.
At Φ_o, both open and closed contours are synchronized, but at Φ_c, elements are synchronized only when the chains are closed. If salience can only be computed for synchronized contours, then as additional elements are added to an open chain at Φ_o, the salience would increase since the whole chain is synchronized. On the other hand, at Φ_c, as long as the last element is missing, the chain is really an open chain, and since Φ_c is smaller than Φ_o, the elements on the chain will not be able to synchronize and adding
elements has no effect on salience. Once the last element is added, the chain is
immediately able to synchronize and the salience of the contour increases dramatically and
causes the contour to "pop-out".
[Figure 3 panels: a) performance as a function of background to contour separation; b) performance as a function of closure; c) performance as a function of closure, data from Kovacs and Julesz (1993). Panels b and c plot percent correct against the number of additional elements for closed and open contours.]
Figure 3: Simulation of the experiments of Kovacs and Julesz (1993). Stimuli consisted of 2025 randomly oriented Gabor patches, with 24 elements aligned to form a contour. Each data point represents results from 500 trials. a) Plot of the performance of the model with respect to the ratio of the separation of the background elements to the contour elements. Results show closed contours are more salient than open contours. b) Changes in salience as additional elements are added to open and closed contours. Results show that the salience of open contours increases monotonically while the salience of closed contours only changes with the addition of the last element. Open contours were initially made up of 7 elements while closed contours were made up of 17 elements. c) The data from Kovacs and Julesz (1993) are re-plotted for comparison.
3.3 REAL IMAGES
A stringent test of the model's capabilities is the ability to extract perceptually salient
contours in real images. Figures 4 and 5 show results for a typical image. The original
grayscale image, the output of the steerable filters, and the output of the model are shown
in Figure 4a,b,c and Figure 5a,b,c respectively. The network is able to extract some of
the more salient contours and ignore other high contrast edges detected by the steerable
filters. Both simulations used filters at only one spatial scale and could be improved
through interactions across multiple spatial frequencies. Nevertheless, the model shows
promise for automated image processing applications.
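As a rough illustration of the oriented-filter front end described above (our sketch, not the authors' steerable-filter implementation), the following code computes orientation energy at a single spatial scale from quadrature pairs of Gabor filters; the filter parameters are arbitrary.

import numpy as np
from scipy.ndimage import convolve

def gabor_pair(theta, sigma=3.0, freq=0.15, size=21):
    # Even/odd (quadrature) Gabor kernels at orientation theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def orientation_energy(image, n_orient=8):
    # For each pixel, the maximum oriented energy and the preferred orientation index.
    energies = []
    for k in range(n_orient):
        even, odd = gabor_pair(np.pi * k / n_orient)
        e = convolve(image, even)**2 + convolve(image, odd)**2
        energies.append(e)
    energies = np.stack(energies)
    return energies.max(axis=0), energies.argmax(axis=0)

image = np.random.rand(64, 64)          # stand-in for a grayscale image
energy, pref = orientation_energy(image)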
Figure 4: a) Plane image. b) Steerable filter response. c) Result of model showing the most salient contours.
Figure 5: a) Satellite image of Bangkok. b) Steerable filter response. c) Salient contours extracted from the
image. The model included filters at only one spatial frequency.
4 CONCLUSION
We have presented a cortically-based model that is able to identify perceptually salient
contours in images containing high levels of noise. The model is based on the use of
long distance intracortical connections that facilitate the responses of cells lying along
smooth contours. Salience is defined as the combined activity of the synchronized
population of cells responding to a particular contour. The model qualitatively accounts
for a range of physiological and psychophysical results and can be used in extracting
salient contours in real images.
Acknowledgements
Supported by the Office of Naval Research (N00014-93-1-0681), the Whitaker
Foundation, and the McDonnell-Pew Program in Cognitive Neuroscience.
References
Bair, W., Koch, c., Newsome, W. & Britten, K. (1994). Power spectrum analysis of
bursting cells in area MT in the behaving monkey. Journal of Neuroscience, 14,
2870-2892.
Baldi, P. & Meir, R. (1990). Computing with arrays of coupled oscillators: An
application to preattentive texture discrimination. Neural Computation, 2,458-471.
Ermentrout, G. B. (1985). The behavior of rings of coupled oscillators. Journal of
Mathematical Biology, 23, 55-74.
Field, D. J., Hayes, A. & Hess, R. F. (1993). Contour integration by the human visual
system: Evidence for a local "Association Field". Vision Research, 33, 173-193.
Fitzpatrick, D. (1996). The functional organization of local circuits in visual cortex: insights from the study of tree shrew striate cortex. Cerebral Cortex, 6, 329-341.
Freeman, W. T. & Adelson, E. H. (1991). The design and use of steerable filters. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 13, 891-906.
Gilbert, C. D. , Das, A., Ito, M., Kapadia, M. & Westheimer, G. (1996) . Spatial
integration and cortical dynamics. Proceedings of the National Academy of Sciences
USA, 93, 615-622.
Ghose, G. M. & Freeman, R. D. (1992). Oscillatory discharge in the visual system:
Does it have a functional role? Journal of Neurophysiology, 68, 1558-1574.
Gray, C. M. & McCormick, D. A. (1996). Chattering cells -- superficial pyramidal
neurons contributing to the generation of synchronous oscillations in the visual cortex. Science, 274, 109-113.
Kapadia, M. K., Ito, M., Gilbert, C. D. & Westheimer, G. (1995). Improvement in
visual sensitivity by changes in local context: Parallel studies in human observers
and in V1 of alert monkeys. Neuron, 15, 843-856.
Kopell, N. & Ermentrout, G. B. (1986). Symmetry and phaselocking in chains of weakly
coupled oscillators. Communications on Pure and Applied Mathematics, 39, 623-660.
Kovacs, I. & Julesz, B. (1993). A closed curve is much more than an incomplete one:
Effect of closure in figure-ground segmentation. Proceedings of the National Academy of
Sciences, USA, 90, 7495-7497.
Kovacs, I. & Julesz, B. (1994). Perceptual sensitivity maps within globally defined
visual shapes. Nature, 370, 644-646.
Kovacs, I., Polat, U. & Norcia, A. M. (1996). Breakdown of binding mechanisms in
amblyopia. Investigative Ophthalmology & Visual Science, 37, 3078 .
Lund, J., Fitzpatrick, D. & Humphrey, A. L. (1985). The striate visual cortex of the tree
shrew . In Jones, E. G. & Peters, A. (Eds), Cerebral Cortex (pp. 157-205). New
York: Plenum.
Nelson, J. I. & Frost, B. J. (1985). Intracortical facilitation among co-oriented, co-axially
aligned simple cells in cat striate cortex. Experimental Brain Research, 61, 54-61.
Parent, P. & Zucker, S. W. (1989). Trace inference, curvature consistency, and curve
detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 823-839.
Polat, U. & Sagi, D. (1993). Lateral interactions between spatial channels: Suppression
and facilitation revealed by lateral masking experiments. Vision Research, 33, 993-999.
Polat, U. & Sagi, D. (1994). The architecture of perceptual spatial interactions. Vision
Research, 34, 73-78.
Rockland, K. S. & Lund, J. S. (1983). Intrinsic laminar lattice connections in primate
visual cortex. Journal of Comparative Neurology, 216, 303-318.
Singer, W. & Gray, C. M . (1995). Visual feature integration and the temporal correlation
hypothesis. Annual Review of Neuroscience, 18, 555-586.
Somers, D . & Kopell, N. (1993). Rapid synchronization through fast threshold
modulation. Biological Cybernetics, 68, 393-407.
Wang, D. (1995). Emergent synchrony in locally coupled neural oscillators. IEEE
Transactions on Neural Networks, 6, 941-948.
Weliky, M., Kandler, K., Fitzpatrick, D. & Katz, L. C. (1995). Patterns of excitation and
inhibition evoked by horizontal connections in visual cortex share a common
relationship to orientation columns. Neuron, 15, 541-552.
PART VIII
APPLICATIONS
Consistent Classification, Firm and Soft
Yoram Baram*
Department of Computer Science
Technion, Israel Institute of Technology
Haifa 32000, Israel
baram@cs.technion.ac.il
Abstract
A classifier is called consistent with respect to a given set of classlabeled points if it correctly classifies the set. We consider classifiers defined by unions of local separators and propose algorithms
for consistent classifier reduction. The expected complexities of the
proposed algorithms are derived along with the expected classifier
sizes. In particular, the proposed approach yields a consistent reduction of the nearest neighbor classifier, which performs "firm"
classification, assigning each new object to a class, regardless of
the data structure. The proposed reduction method suggests a
notion of "soft" classification, allowing for indecision with respect
to objects which are insufficiently or ambiguously supported by
the data. The performances of the proposed classifiers in predicting stock behavior are compared to that achieved by the nearest
neighbor method.
1 Introduction
Certain classification problems, such as recognizing the digits of a hand written zipcode, require the assignment of each object to a class. Others, involving relatively
small amounts of data and high risk, call for indecision until more data become
available. Examples in such areas as medical diagnosis, stock trading and radar
detection are well known. The training data for the classifier in both cases will
correspond to firmly labeled members of the competing classes. (A patient may be
*Presently a Senior Research Associate of the National Research Council at M. S.
210-9, NASA Ames Research Center, Moffett Field, CA 94035, on sabbatical leave from
the Technion.
either ill or healthy. A stock price may increase, decrease or stay the same). Yet,
the classification of new objects need not be firm. (A given patient may be kept
in hospital for further observation. A given stock need not be bought or sold every
day). We call classification of the first kind "firm" and classification of the second
kind "soft". The latter is not the same as training the classifier with a "don't care"
option, which would be just another firm labeling option, as "yes" and "no", and
would require firm classification. A classifier that correctly classifies the training
data is called "consistent". Consistent classifier reductions have been considered in
the contexts of the nearest neighbor criterion (Hart, 1968) and decision trees (Holte,
1993, Webb, 1996).
In this paper we present a geometric approach to consistent firm and soft classification. The classifiers are based on unions of local separators, which cover all the
labeled points of a given class, and separate them from the others. We propose a
consistent reduction of the nearest neighbor classifier and derive its expected design
complexity and the expected classifier size. The nearest neighbor classifier and its
consistent derivatives perform "firm" classification. Soft classification is performed
by unions of maximal- volume spherical local separators. A domain of indecision
is created near the boundary between the two sets of class-labeled points, and in
regions where there is no data. We propose an economically motivated benefit function for a classifier as the difference between the probabilities of success and failure.
Employing the respective benefit functions, the advantage of soft classification over
firm classification is shown to depend on the rate of indecision. The performances
of the proposed algorithms in predicting stock behavior are compared to those of
the nearest neighbor method.
2 Consistent Firm Classification
Consider a finite set of points X = {x(i), i = 1, ..., N} in some subset of R^n,
the real space of dimension n. Suppose that each point of X is assigned to one
of two classes, and let the corresponding subsets of X, having N1 and N2 points,
respectively, be denoted X1 and X2. We shall say that the two sets are labeled L1
and L2, respectively. It is desired to divide R^n into labeled regions, so that new,
unlabeled points can be assigned to one of the two classes.
We define a local separator of a point x of X1 with respect to X2 as a convex set,
s(x|2), which contains x and no point of X2. A separator family is defined as a rule
that produces local separators for class-labeled points.
We call the set of those points of R^n that are closer to a point x ∈ X1 than to any
point of X2 the minimum-distance local separator of x with respect to X2.
We define the local clustering degree, c, of the data as the expected fraction of data
points that are covered by a local minimum-distance separator.
The nearest neighbor criterion extends the class assignment of a point x ∈ X1 to
its minimum-distance local separator. It is clearly a consistent and firm classifier
whose memory size is O(N).
Hart's Condensed Nearest Neighbor (CNN) classifier (Hart, 1968) is a consistent subset of the data points that correctly classifies the entire data by the nearest
neighbor method. It is not difficult to show that the complexity of the algorithm
proposed by Hart for finding such a subset is O(N^3). The expected memory requirement (or classifier size) has remained an open question.
We propose the following Reduced Nearest Neighbor (RNN) classifier: include
a labeled point in the consistent subset only if it is not covered by the minimum-distance local separator of any of the points of the same class already in the subset.
It can be shown (Baram, 1996) that the complexity of the RNN algorithm is O(N^2)
and that the expected classifier size is O(log_{1/(1-c)} N). It can also be shown that
the latter bounds the expected size of the CNN classifier as well.
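The RNN rule can be read off almost directly as code. The sketch below is our paraphrase with Euclidean distances and hypothetical names: a labeled point is kept only if it is closer to some point of the competing class than to every already-kept point of its own class, i.e., only if no kept same-class point's minimum-distance local separator covers it.

import numpy as np

def rnn_reduce(points, labels):
    # points: (N, d) array; labels: length-N array of class ids.
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    kept = []
    for i, (p, c) in enumerate(zip(points, labels)):
        rivals = points[labels != c]
        d_rival = np.min(np.linalg.norm(rivals - p, axis=1))   # distance to nearest rival
        covered = any(
            np.linalg.norm(points[j] - p) < d_rival
            for j in kept if labels[j] == c
        )
        if not covered:
            kept.append(i)
    return kept

# toy example
X = np.random.rand(200, 2)
y = (X[:, 0] > 0.5).astype(int)
subset = rnn_reduce(X, y)
print(len(subset), "of", len(X), "points kept")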
It has been suggested that the utility of the Occam's razor in classification would
be (Webb, 1996):
"Given a choice between two plausible classifiers that perform identically on the
data set, the simpler classifier is expected to classify correctly more objects outside
the training set".
The above statement is disproved by the CNN and the RNN classifiers, which are
strict consistent reductions of the nearest neighbor classifier, likely to produce more
errors.
3 Soft Classification: Indecision Pays, Sometimes
When a new, unlabeled, point is closely surrounded by many points of the same
class, its assignment to the same class can be said to be unambiguously supported
by the data. When a new point is surrounded by points of different classes, or
when it is relatively far from any of the labeled points, its assignment to either
class can be said to be unsupported or ambiguously supported by the data. In the
latter cases, it may be more desirable to have a certain indecision domain, where
new points will not be assigned to a class. This will translate into the creation of
indecision domains near the boundary between the two sets of labeled points and
where there is no data.
We define a separator S(1|2) of X1 with respect to X2 as a set that includes X1
and excludes X2.
Given a separator family, the union of the local separators s(x(i)|2) of the points
x(i), i = 1, ..., N1, of X1 with respect to X2,

    S(1|2) = ∪_{i=1}^{N1} s(x(i)|2),        (1)

is a separator of X1 with respect to X2. It consists of N1 local separators.

Let X1,c be a subset of X1. The set

    S_c(1|2) = ∪_{x(i) ∈ X1,c} s(x(i)|2)        (2)

will be called a consistent separator of X1 with respect to X2 if it contains all the
points of X1. The set X1,c will then be called a consistent subset with respect to
the given separator family.
Let us extend the class assignment of each of the labeled points to a local separator
of a given family and maximize the volume of each of the local separators without
including in it any point of the competing class. Let S_c(1|2) and S_c(2|1) be consistent separators of the two sets, consisting of maximal-volume (or, simply, maximal)
local separators of labeled points of the corresponding classes. The intersection of
S_c(1|2) and S_c(2|1) defines a conflict and will be called a domain of ambiguity of
the first kind. A region uncovered by either separator will be called a domain of
ambiguity of the second kind. The union of the domains of ambiguity will be designated the domain of indecision. The remainders of the two separators, excluding
their intersection, define the conflict-free domains assigned to the two classes.
The resulting "soft" classifier rules out hard conflicts, where labeled points of one
class are included in the separator of the other. Yet, it allows for indecision in areas
which are either claimed by both separators or claimed by neither.
Let the true class be denoted y (with possible values, e.g., y = 1 or y = 2) and let the
classification outcome be denoted ŷ. Let the probabilities of decision and indecision
by the soft classifier be denoted P_d and P_id, respectively (of course, P_id = 1 - P_d),
and let the probabilities of correct and incorrect decisions by the firm and the soft
classifiers be denoted P_firm{ŷ = y}, P_firm{ŷ ≠ y}, P_soft{ŷ = y} and P_soft{ŷ ≠ y},
respectively. Finally, let the joint probabilities of a decision being made by the soft
classifier and the correctness or incorrectness of the decision be denoted, respectively, P_soft{d, ŷ = y} and P_soft{d, ŷ ≠ y}, and let the corresponding conditional
probabilities be denoted P_soft{ŷ = y | d} and P_soft{ŷ ≠ y | d}, respectively.
We define the benefit of using the firm classifier as the difference between the probability that a point is classified correctly by the classifier and the probability that
it is misclassified:

    B_firm = P_firm{ŷ = y} - P_firm{ŷ ≠ y}.        (3)

This definition is motivated by economic considerations: the profit produced by
an investment will be, on average, proportional to the benefit function. This will
become more evident in a later section, where we consider the problem of stock
trading.
For a soft classifier, we similarly define the benefit as the difference between the
probability of a correct classification and that of an incorrect one (which, in an
economic context, assumes that indecision has no cost, other than the possible
loss of profit). Now, however, these probabilities are for the joint events that a
classification is made, and that the outcome is correct or incorrect, respectively:
    B_soft = P_soft{d, ŷ = y} - P_soft{d, ŷ ≠ y}.        (4)
Soft classification will be more beneficial than firm classification if B_soft > B_firm,
which may be written as

    P_id < 1 - (P_firm{ŷ = y} - 0.5) / (P_soft{ŷ = y | d} - 0.5).        (5)

For the latter to be a useful condition, it is necessary that P_firm{ŷ = y} > 0.5,
P_soft{ŷ = y | d} > 0.5 and P_soft{ŷ = y | d} > P_firm{ŷ = y}. The latter will
be normally satisfied, since points of the same class can be expected to be denser
under the corresponding separator than in the indecision domain. In other words,
the error ratio produced by the soft classifier on the decided cases can be expected
to be smaller than the error ratio produced by the firm classifier, which decides on
all the cases. The satisfaction of condition (5) would depend on the geometry of
the data. It will be satisfied for certain cases, and will not be satisfied for others.
This will be numerically demonstrated for the stock trading problem.
The maximal local spherical separator of x is defined by the open sphere centered
at x, whose radius r(x|2) is the distance between x and the point of X2 nearest to
x. Denoting by s(x, r) the sphere of radius r in R^n centered at x, the maximal local
separator is then s_M(x|2) = s(x, r(x|2)).
A separator construction algorithm employing maximal local spherical separators
is described below. Its complexity is clearly O(N^2).
Let X̄1 = X1. For each of the points x(i) of X̄1, find the minimal distance to
the points of X2. Call it r(x(i)|2). Select the point x(i) for which r(x(i)|2) ≥
r(x(j)|2), j ≠ i, for the consistent subset. Eliminate from X̄1 all the points that
are covered by s_M(x(i)|2). Denote the remaining set X̄1. Repeat the procedure
while X̄1 is non-empty. The union of the maximal local spherical separators is a
separator for X1 with respect to X2.
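The construction just described translates line by line into a greedy procedure. The following is a minimal sketch under the stated rule (largest remaining radius selected first); the variable names and the use of Euclidean distance are our choices.

import numpy as np

def spherical_separator(X1, X2):
    # Greedily cover X1 with maximal-volume open spheres that exclude all of X2.
    X1 = np.asarray(X1, dtype=float)
    X2 = np.asarray(X2, dtype=float)
    remaining = list(range(len(X1)))
    centers, radii = [], []
    while remaining:
        # radius of each candidate = distance to the nearest point of X2
        r = [np.min(np.linalg.norm(X2 - X1[i], axis=1)) for i in remaining]
        best = int(np.argmax(r))
        sel, rad = remaining[best], r[best]
        c = X1[sel]
        centers.append(c)
        radii.append(rad)
        # remove the selected point and all points of X1 covered by its sphere
        remaining = [i for i in remaining
                     if i != sel and np.linalg.norm(X1[i] - c) >= rad]
    return np.array(centers), np.array(radii)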
4 Example: Firm and soft prediction of stock behaviour
Given a sequence of k daily trading ("close") values of a stock, it is desired to predict
whether the next day will show an increase or a decrease with respect to the last
day in the sequence. Records for ten different stocks, each containing, on average,
1260 daily values were used. About 60 percent of the data were used for training
and the rest for testing. The CNN algorithm reduced the data by 40% while the
RNN algorithm reduced the data by 35%. Results are shown in Fig. 1. It can be
seen that, on average, the nearest neighbor method has produced the best results.
The performances of the CNN and the RNN classifiers (the latter producing only
slightly better results) are somewhat lower. It has been argued that performance
within a couple of percentage points by a reduced classifier supports the utility of
Occam's razor (Holte, 1993). However, a couple of percentage points can be quite
meaningful in stock trading.
In order to evaluate the utility of soft classification in stock trading, let the prediction success rate of a firm classifier, be denoted f and that of a soft classifier for the
decided cases s. For a given trade, let the gain or loss per unit invested be denoted
q, and the rate of indecision of the soft classifier ir. Suppose that, employing the
firm classifier, a stock is traded once every day (say, at the "close" value), and that,
employing the soft classifier, it is traded on a given day only if a trade is decided
by the classifier (that is, the input does not fall in the indecision domain). The
expected profit for M days per unit invested is 2(1 - 0.5)qM for the firm classifier
and 2(s - 0.5)q(l- ir)M for the soft classifier (these values disregard possible commission and slippage costs). The soft classifier will be preferred over the firm one if
the latter quantity is greater than the former, that is, if
    ir < 1 - (f - 0.5) / (s - 0.5),        (6)
which is the sample representation of condition (5) for the stock trading problem.
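Condition (6) and the two expected-profit expressions can be checked with a small helper. The sketch below is ours, mirroring the formulas above; the values plugged in at the end are the ones reported for the stock xdssi in Figure 1 below.

def expected_profits(f, s, ir, q=1.0, M=250):
    """f: firm success rate, s: soft success rate on decided trades,
    ir: indecision rate, q: gain/loss per unit traded, M: number of days."""
    firm = 2 * (f - 0.5) * q * M
    soft = 2 * (s - 0.5) * q * (1 - ir) * M
    return firm, soft

def soft_is_preferred(f, s, ir):
    # Sample form of condition (6); assumes s > 0.5.
    return ir < 1 - (f - 0.5) / (s - 0.5)

# values for the stock 'xdssi' from Figure 1
print(expected_profits(f=0.483, s=0.526, ir=0.436))
print(soft_is_preferred(0.483, 0.526, 0.436))   # True: soft classification pays here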
Stock      NN      CNN     RNN     Soft: indecision   Soft: success   Benefit
xaip       61.4%   58.4%   57.0%   48.4%              70.1%           -
xaldnf     51.7%   52.3%   50.1%   74.6%              51.3%           -
xddddf     52.0%   49.6%   51.8%   44.3%              53.3%           -
xdssi      48.3%   47.7%   48.6%   43.6%              52.6%           +
xecilf     53.0%   50.9%   52.6%   47.6%              48.8%           -
xedusf     80.7%   74.7%   76.3%   30.6%              89.9%           -
xelbtf     53.7%   55.6%   52.5%   42.2%              50.1%           -
xetz       66.0%   61.0%   61.0%   43.8%              68.6%           -
xelrnf     51.5%   49.0%   49.2%   39.2%              56.0%           +
xelt       85.6%   82.7%   84.2%   32.9%              93.0%           -
Average    60.4%   58.2%   58.3%   44.7%              63.4%
Figure 1: Success rates in the prediction of rise and fall in stock values.
Results for the soft classifier, applied to the stock data, are presented in Fig. 1.
The indecision rates and the success rates in the decided cases are then specified
along with a benefit sign. A positive benefit represents a satisfaction of condition
(6), with ir, f and s replaced by the corresponding sample values given in the table.
This indicates a higher profit in applying the soft classifier over the application of
the nearest neighbor classifier. A negative benefit indicates that a higher profit
is produced by the nearest neighbor classifier. It can be seen that for two of the
stocks (xdssi and xelrnf) soft classification has produced better results than firm
classification, and for the remaining eight stocks finn classification by the nearest
neighbor method has produced better results.
5
Conclusion
Solutions to the consistent classification problem have been specified in tenns of
local separators of data points of one class with respect to the other. The expected
complexities of the proposed algorithms have been specified, along with the expected sizes of the resulting classifiers. Reduced consistent versions of the nearest
neighbor classifier have been specified and their expected complexities have been
derived. A notion of "soft" classification has been introduced an algorithm for its
implementation have been presented and analyzed. A criterion for the utility of
such classification has been presented and its application in stock trading has been
demonstrated.
Acknowledgment
The author thanks Dr. Amir Atiya of Cairo University for providing the stock data
used in the examples and for valuable discussions of the corresponding results.
332
y. Baram
References
Baram Y. (1996) Consistent Classification, Firm and Soft, CIS Report No. 9627,
Center for Intelligent Systems, Technion, Israel Institute of Technology, Haifa 32000,
Israel.
Baum, E. B. (1988) On the Capabilities of Multilayer Perceptrons, J. Complexity,
Vol. 4, pp. 193 - 215.
Hart, P. E. (1968) The Condensed Nearest Neighbor Rule, IEEE Trans. on Information Theory, Vol. IT-14, No.3, pp. 515 - 516.
Holte, R. C. (1993) Very Simple Classification Rules Perform Well on Most Commonly Used databases, Machine Learning, Vol. 11, No. 1 pp. 63 - 90.
Rosenblatt, F. (1958) The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain, Psychological Review, Vol. 65, pp. 386 - 408.
Webb, G. I. (1996) Further Experimental Evidence against the Utility of Occam's
Razor, J. of Artificial Intelligence Research 4, pp. 397 - 417.
Statistically Efficient Estimation Using
Cortical Lateral Connections
Alexandre Pouget
alex@salk.edu
Kechen Zhang
zhang@salk.edu
Abstract
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these
codes, such as population vector analysis, are either inefficient, i.e.,
the variance of the estimate is much larger than the smallest possible variance, or biologically implausible, like maximum likelihood .
Moreover, these methods attempt to compute a scalar or vector
estimate of the encoded variable. Neurons are faced with a similar estimation problem . They must read out the responses of the
presynaptic neurons, but, by contrast, they typically encode the
variable with a further population code rather than as a scalar.
We show how a non-linear recurrent network can be used to perform this estimation in an optimal way while keeping the estimate
in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated
noise among neurons representing similar variables .
1
Introduction
Most sensory and motor variables in the brain are encoded with coarse codes, i.e.,
through the activity of large populations of neurons with broad tuning to the variables. For instance, direction of visual motion is believed to be encoded in visual
area MT by the responses of a large number of cells with bell-shaped tuning, as
illustrated in figure I-A.
Neurophysiological recordings have shown that, in response to an object moving
along a particular direction, the pattern of activity across such a population would
look like a noisy hill of activity (figure 1-B). On the basis of this activity, A, the
best that can be done is to recover the conditional probability of the direction of
motion, θ, given the activity, p(θ|A). A slightly less ambitious goal is to come up
with a good "guess", or estimate, θ̂, of the direction, θ, given the activity. Because
of the stochastic nature of the noise, the estimator is a random variable, i.e., for
AP is at the Institute for Computational and Cognitive Sciences, Georgetown University, Washington, DC 20007, and KZ is at The Salk Institute, La Jolla, CA 92037. This
work was funded by McDonnell-Pew and Howard Hughes Medical Institute.
[Figure 1 panels: A) tuning curves, activity vs. direction (deg); B) noisy activity pattern vs. preferred direction (deg).]
Figure 1: A- Tuning curves for 16 direction tuned neurons. B- Noisy pattern of
activity (o) from 64 neurons when presented with a direction of 180°. The ML
estimate is found by moving an "expected" hill of activity (dotted line) until the
squared distance with the data is minimized (solid line).
the same image, θ̂ will vary from trial to trial. A good estimator should have
the smallest possible variance across those trials because the variance determines
how well two similar directions can be discriminated using this estimator. The
Cramer-Rao bound provides an analytical lower bound for this variance given the
noise in the system and the unit tuning curves [5] . Typically, computationally
simple estimators, such as optimum linear estimator (OLE), are very inefficient;
their variances are several times the bound. By contrast, Bayesian or maximum
likelihood (ML) estimators (which are equivalent for the case under consideration
in this paper) can reach this bound but require more complex calculations [5].
These decoding techniques are valuable for a neurophysiologist interested in reading
out the population code but they are not directly relevant for understanding how
neural circuits perform estimation. In particular, they all provide the estimate in a
format which is incompatible with what we know of sensory representations in the
cortex. For example, cells in V4 are estimating orientation from the noisy responses
of orientation tuned V1 cells, but, unlike ML or OLE which provide a scalar estimate, V4 neurons retain orientation in a coarse code format, as demonstrated by
the fact that V4 cells are just as broadly tuned to orientation as V1 neurons.
Therefore, it seems that a theory of estimation in biological networks should have
two critical characteristics: 1- it should preserve the estimate in a coarse code and
2- it should be efficient, i.e., the variance should be close to the Cramer-Rao bound.
We explore in this paper various network architectures for performing estimations
with coarse code using lateral connections. We start by briefly describing several
classical estimators such as OLE or ML. Then, we consider linear and non-linear
recurrent networks and compare their performances with the classical estimators.
2 Classical Methods
The simplest estimators are linear, of the form θ̂_OLE = W^T A. Better performance
can be obtained with a center of mass estimator (COM), θ̂_COM = Σ_i θ_i a_i / Σ_i a_i;
however, in the case of a periodic variable, such as direction of motion, the best
one-shot method known is the complex estimator (COMP), θ̂_COMP = phase(z),
where z = Σ_{k=1}^N a_k e^{iθ_k} [5]. This estimator consists in fitting a cosine through
the pattern of activity, like the one shown in figure 1-B, and using the phase of
[Figure 2 panels: A) network diagram; B) activity over time vs. preferred direction (deg).]
Figure 2: A- Circular network of 64 units. Only the connections originating from
one unit are shown. B- Activity over time in the non-linear network when initialized
with a random pattern at t = 0. The activities of the units are plotted as a function
of their position along the circle which is equivalent to their preferred direction of
motion with appropriate choice of weights.
the best cosine fit as the estimate of direction. This method is suboptimal if the
data were not generated by cosine tuning functions as in the case illustrated in
figure 1-A. It is possible to obtain optimum performance by fitting the curve that
was actually used to generate the data, i.e., the actual tuning curves of the units.
A maximum likelihood estimate, defined as being the direction maximizing p(A|θ),
involves exactly this type of curve fitting, a process illustrated in figure 1-B [5]. The
estimate is computed by first finding the "expected" hill (the hill that would be
obtained in a noise-free system) that minimizes the distance to the data. In the case
of gaussian noise, the appropriate distance measure to minimize is the euclidian
squared distance. The final position of the peak of the hill corresponds to the
maximum likelihood estimate, θ̂_ML.
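To make the comparison concrete, here is a minimal sketch (ours, not the authors' code) of the three one-shot estimators written above, with preferred directions in radians; the bell-shaped activity profile in the toy example is an arbitrary choice used purely to exercise the functions.

import numpy as np

def ole_estimate(a, W):
    # optimum linear estimator: a fixed linear readout of the activities
    return W @ a

def com_estimate(a, theta_pref):
    # center of mass of the activity profile (sensible for non-periodic ranges)
    return np.sum(theta_pref * a) / np.sum(a)

def comp_estimate(a, theta_pref):
    # complex (population vector) estimator: phase of the summed unit vectors
    z = np.sum(a * np.exp(1j * theta_pref))
    return np.angle(z) % (2 * np.pi)

theta_pref = np.linspace(0, 2 * np.pi, 64, endpoint=False)
true_theta = np.pi
a = 3 * np.exp(7 * (np.cos(theta_pref - true_theta) - 1)) + 0.3 + np.random.randn(64)
print(np.degrees(comp_estimate(a, theta_pref)))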
3 Recurrent Networks
Consider a circular network of 64 units fully connected like the one depicted in
figure 2-A. With an appropriate choice of weights and activation function, this
network will develop a hill-shaped pattern of activity in response to a transient
input as illustrated in figure 2-B. If we initialize this network with activity patterns
A = {a_i} corresponding to the responses of 64 direction tuned units (figure 1), we
can use the final position of the hill across the neuronal array after relaxation as
an estimate of the direction, θ̂. The variance of this estimator will depend on the
exact choice of activation function and weights.
3.1 Linear Network
We first consider a network of 64 units whose dynamics is governed by the following
difference equation:
(1)
The dynamics of such networks is well understood [3]. If each unit receives the
same weight vector w̄, then the weight matrix W is symmetric. In this case, the
network dynamics amplifies or suppresses the Fourier components of the initial input
pattern, {a_i}, independently, by factors equal to the corresponding components of
the Fourier transform, ŵ, of w̄. For example, if the first component of ŵ is more
than one (resp. less than one) the first Fourier component of the initial pattern of
activity will be amplified (resp. suppressed).
Thus, we can choose W such that the network amplifies selectively the first Fourier
component of the data while suppressing the others. The network would be unstable
but if we stop after a large, yet fixed, number of iterations, the activity pattern would
look like a cosine function of direction with a phase corresponding to the phase of
the first Fourier components of the data. In other words, the network would end
up fitting a cosine function in the data which is equivalent to the CaMP method
described above. A network for orientation selectivity proposed by Ben-Yishai et
al [1] is closely related to this linear network.
Although this method keeps the estimate in a coarse code format, it suffers from two
problems: it is unclear how it could be extended to non-periodic variables, such as
disparity, and it is suboptimal since it is equivalent to the CaMP estimator.
3.2 Non-Linear Network
We consider next a network of 64 fully connected units whose dynamics is governed
by the following difference equations:

    o_i(t) = g(u_i(t)) = 6.3 (log(1 + e^{5 + 10 u_i(t)}))^{0.8}        (2)

    u_i(t + δt) = u_i(t) + δt ( -u_i(t) + Σ_j w_ij o_j(t) )        (3)
Zhang (1996) has demonstrated that with appropriate symmetric weights, {Wij},
this network develops a stable hill of activity in response to an arbitrary transient
input pattern {I_i} (figure 2-B). The shape of the hill is fully specified by the weights
and activation function whereas, by contrast, the final position of the hill on the
neuronal array depends only on the initial input. Therefore, like ML, the network
fits an "expected" function through the data. We first present a set of simulations
in which we investigated whether ML and the network place the hill at the same
location.
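Equations (2) and (3) can be iterated directly. The sketch below is our rendering: the circulant weight profile is an arbitrary local-excitation/broad-inhibition choice (not the weights used in the paper), the state is clipped for numerical safety, and the final hill position is read out with a population-vector phase.

import numpy as np

N = 64
pref = np.linspace(0, 2 * np.pi, N, endpoint=False)

# circulant weights: local excitation minus broad inhibition (arbitrary, untuned)
diff = np.angle(np.exp(1j * (pref[:, None] - pref[None, :])))
W = 0.05 * np.exp(3.0 * (np.cos(diff) - 1.0)) - 0.01

def g(u):
    # activation function of eq. (2), written with a numerically stable log(1 + e^x)
    return 6.3 * np.logaddexp(0.0, 5.0 + 10.0 * u) ** 0.8

def relax(a, steps=50, dt=0.1):
    u = np.asarray(a, dtype=float).copy()      # transient input used as the initial state
    for _ in range(steps):
        u = u + dt * (-u + W @ g(u))           # eq. (3)
        u = np.clip(u, -5.0, 5.0)              # safety clamp for this untuned toy network
    o = g(u)
    # population-vector read-out of the hill position
    return np.angle(np.sum(o * np.exp(1j * pref))) % (2 * np.pi)

a = 3 * np.exp(7 * (np.cos(pref - np.pi) - 1)) + 0.3 + np.random.randn(N)
print(np.degrees(relax(a)))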
Methods: The simulations consisted of estimating the value of the direction of a
moving bar based on the activity, A = {a_i}, of 64 input units with hill-shaped
tuning to direction corrupted by noise. We used circular normal functions like the
ones shown in figure 1-A to model the mean activities, f_i(θ):

    f_i(θ) = 3 exp(7(cos(θ - θ_i) - 1)) + 0.3        (4)

The value 0.3 corresponds to the mean spontaneous activity of each unit. The peaks,
θ_i, of the circular normal functions were uniformly spread over the interval [0°, 360°].
The activities, {a_i}, depended on the noise distribution. We used two types of noise,
normally distributed with fixed variance, σ_n² = 1, and Poisson distributed:

    P(a_i = a | θ) = (1/√(2πσ_n²)) exp(-(a - f_i(θ))² / (2σ_n²)),    P(a_i = k | θ) = f_i(θ)^k e^{-f_i(θ)} / k!        (5)
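Generating the input patterns of equations (4) and (5) is straightforward; the following is our sketch of the Methods description, not the original simulation code.

import numpy as np

rng = np.random.default_rng(0)
theta_pref = np.deg2rad(np.linspace(0, 360, 64, endpoint=False))

def mean_rates(theta):
    # circular normal tuning curves, eq. (4)
    return 3 * np.exp(7 * (np.cos(theta - theta_pref) - 1)) + 0.3

def noisy_response(theta, noise="gaussian"):
    f = mean_rates(theta)
    if noise == "gaussian":
        return f + rng.normal(0.0, 1.0, size=f.shape)    # sigma_n^2 = 1
    return rng.poisson(f).astype(float)                  # Poisson noise

a = noisy_response(np.deg2rad(180), noise="poisson")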
Our results compare the standard deviations of four estimators, OLE, COM, COMP
and ML, to the non-linear recurrent network (RN) with transient inputs (the input
patterns are shown on the first iteration only). In the case of ML, we used the
[Figure 3 panels: noise with normal distribution; noise with Poisson distribution. Bars: OLE, COM, COMP, ML, RN.]
Figure 3: Histogram of the standard deviations of the estimate for all five methods
Cramer-Rao bound to compute the standard deviation as described in Seung and
Sompolinsky (1993). The weights in the recurrent network were chosen such that
the final pattern of activity in the network has a profile very similar to the tuning
function f_i(θ).
Results: Since the preferred directions of two consecutive units in the network
are more than 5° apart, we first wondered whether RN estimates exhibit a bias, a difference between the mean estimate and the true direction, in particular for
directions between the peaks of two consecutive units. Our simulations showed no
significant bias for any of the orientations tested (not shown). Next, we compared
standard deviations of the estimates for all five methods and for the two types
of noise. The RN method was found to outperform the OLE, COM and COMP
estimators in both cases and to match the Cramer-Rao bound for gaussian noise
(figure 3) as suggested by our analysis. For noise with Poisson distribution, the
standard deviation for RN was only 0.344° above ML (figure 3).
We also estimated numerically -∂θ̂_RN/∂a_i|θ=170°, the derivative of the RN estimate
with respect to the initial activity of each of the 64 units for an orientation of 170°. This
derivative in the case of ML matches closely the derivative of the cell tuning curve,
f'(θ). In other words, in ML, units contribute to the estimate according to the
amplitude of the derivative of the tuning curve. As shown in figure 4-A, the same is true
for RN: -∂θ̂_RN/∂a_i|θ=170° matches closely the derivative of the units' tuning curves.
In contrast, the same derivatives for the COMP estimate (dotted line) or the
COM estimate (dash-dotted line) do not match the profile of f'(θ). In particular,
units with preferred direction far away from 170°, i.e. units whose activity is just
noise, end up contributing to the final estimate, hindering the performance of the
estimator.
We also looked at the standard deviation of the RN as a function of time, i.e.,
the number of iterations. Reaching a stable state can take up to several hundred
iterations which could make the RN method too slow for any practical purpose.
We found however that the standard deviation decreases very rapidly over the first
5-6 iterations and reaches asymptotic values after around 20 iterations (figure 4-B).
Therefore, there is no need to wait for a perfectly stable pattern of activity to obtain
minimum standard deviation.
Analysis: One way to determine which factors control the final position of the
hill is to find a function, called a Lyapunov function, which is minimized over time
by the network dynamics. Cohen and Grossberg (1983) have shown that networks
characterized by the dynamical equation above and in which the input pattern {sĪ_i}
[Figure 4 panels: A) normalized derivative vs. preferred direction (deg) for RN, COMP and COM; B) standard deviation vs. time (# of iterations).]
Figure 4: A- Comparison of f'(θ) (solid line) with -∂θ̂/∂a_i|θ=170° for RN, COMP and
COM. All functions have been normalized to one. B- Standard deviation as a
function of the number of iterations for RN.
is clamped, minimizes a Lyapunov function of the form :
(6)
The last term is the dot product between the input pattern, {sĪ_i}, and the current
activity pattern, {g(u_i)}, on the neuronal array. Here s is simply a scaling factor
for the input pattern. The dynamics of the network will therefore tend to minimize
-Σ_i Ī_i g(u_i), or equivalently, to maximize the overlap between the stable pattern
and the input pattern. The other terms, however, also depend on Ī_i because
the shape of the final stable activity profile depends on the input pattern. Therefore
the network will settle into a compromise between maximizing overlap and getting
the right profile given the clamped input.
We can show however that, for small input (i.e., as the scaling factor s → 0),
the dominant term in the Lyapunov function is the dot product. To see this, we
consider the Taylor expansion of the Lyapunov function L with respect to s. First, let
{U_i} denote the profile of the stable activity {u_i} in the limit of zero input (s → 0),
and then write the corresponding value of the Lyapunov function at zero input as
L_0. Now keeping only the first-order terms of s in the Taylor expansion, we obtain:

    L ≈ L_0 - s Σ_i Ī_i g(U_i)        (7)
This means that the dot product is the only first order term of s, and disturbances
to the shape of the final activity profile contribute only to higher order terms of
s, which are negligible when s is small. Notice that in the limit of zero input, the
shape of the activity profile {Ui} is fixed, and the only thing unknown is its peak
position. Because L_0 is a constant, the global minimum of the Lyapunov function
here should correspond to a peak position which maximizes the dot product. The
difference between u_i and U_i is negligible for sufficiently small input because, by
definition, u_i → U_i as s → 0. Consequently, for small input, the network will
converge to a solution maximizing primarily Σ_i Ī_i g(u_i), which is mathematically
equivalent to minimizing the squared distance between the input and the output
pattern.
Therefore, if we use an activity pattern, A = {a_i}, as the input to this network,
the stable hill should have its peak at a position very close to the direction corre-
sponding to the maximum likelihood estimate (under the assumption of gaussian
noise), provided the network is not attracted into a local minimum of the Lyapunov
function. This result is valid when using a small clamped input but our simulations
show that a transient input is sufficient to reach the Cramer-Rao bound.
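The equivalence invoked in the last sentence can be spelled out in one line (our elaboration). With a fixed hill shape f_θ, so that ||f_θ||² does not depend on θ,

    \| A - f_\theta \|^2 \;=\; \|A\|^2 + \|f_\theta\|^2 - 2\, A \cdot f_\theta ,

so maximizing the dot product A·f_θ over θ is the same as minimizing the squared distance, which is the gaussian maximum likelihood fit described in section 2.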
4 Discussion
Our results demonstrate that it is possible to perform efficient unbiased estimation
with coarse coding using a neurally plausible architecture. Our model relies on
lateral connections to implement a prior expectation on the profile of the activity
patterns. As a consequence, units determine their activation according to their
own input and the activity of their neighbors. This approach shows that one of
the advantages of coarse code is to provide a representation which simplifies the
problem of cleaning up uncorrelated noise within a neuronal population.
Unlike OLE, COM and COMP, the RN estimate is not the result of a voting process
in which units vote from their preferred direction, θ_i. Instead, units turn out to
contribute according to the derivatives of their tuning curves, f_i'(θ), as in the case
of ML. This feature allows the network to ignore background noise, that is to say,
responses due to other factors beside the variable of interest. This property also
predicts that discrimination of directions around the vertical (90°) would be most
affected by shutting off the units tuned at 60° and 120°. This prediction is consistent
with psychophysical experiments showing that discrimination around the vertical
in humans is affected by prior adaptation to orientations displaced from the vertical
by ±30° [4].
Our approach can be readily extended to any other periodic sensory or motor variables. For non periodic variables such as the disparity of a line in an image, our
network needs to be adapted since it currently relies on circular symmetrical weights.
Simply unfolding the network will be sufficient to deal with values around the center
of the interval under consideration, but more work is needed to deal with boundary
values. We can also generalize this approach to arbitrary mapping between two
coarse codes for variables x and y where y is a function of x. Indeed, a coarse code
for x provides a set of radial basis functions of x which can be subsequently used to
approximate arbitrary functions. It is even conceivable to use a similar approach
for one-to-many mappings, a common situation in vision or robotics, by adapting
our network such that several hills can coexist simultaneously.
References
[1] R. Ben-Yishai, R . L. Bar-Or, and H. Sompolinsky. Proc. Natl. Acad. Sci. USA,
92:3844-3848, 1995.
[2] M. Cohen and S. Grossberg. IEEE Trans. SMC, 13:815-826, 1983.
[3] M. Hirsch and S. Smale. Differential equations, dynamical systems and linear
algebra. Academic Press, New York, 1974.
[4] D. M. Regan and K. I. Beverley. J. Opt. Soc. Am., 2:147-155, 1985.
[5] H. S. Seung and H. Sompolinsky. Proc. Natl. Acad. Sci. USA, 90:10749-10753,
1993.
343 | 1,313 | Interpolating Earth-science Data using RBF
Networks and Mixtures of Experts
E. Wan
D. Bone
Division of Infonnation Technology
Canberra Laboratory, CSIRO
GPO Box 664, Canberra, ACT, 2601, Australia
{ernest, don} @cbr.dit.csiro.au
Abstract
We present a mixture of experts (ME) approach to interpolate sparse,
spatially correlated earth-science data. Kriging is an interpolation
method which uses a global covariation model estimated from the data
to take account of the spatial dependence in the data. Based on the
close relationship between kriging and the radial basis function (RBF)
network (Wan & Bone, 1996), we use a mixture of generalized RBF
networks to partition the input space into statistically correlated
regions and learn the local covariation model of the data in each
region. Applying the ME approach to simulated and real-world data,
we show that it is able to achieve good partitioning of the input space,
learn the local covariation models and improve generalization.
1. INTRODUCTION
Kriging is an interpolation method widely used in the earth sciences, which models the
surface to be interpolated as a stationary random field (RF) and employs a linear model.
The value at an unsampled location is evaluated as a weighted sum of the sparse,
spatially correlated data points. The weights take account of the spatial correlation
between the available data points and between the unknown points and the available data
points. The spatial dependence is specified in the form of a global covariation model.
Assuming global stationarity, the kriging predictor is the best unbiased linear predictor
of the un sampled value when the true covariation model is used, in the sense that it
minimizes the squared error variance under the unbiasedness constraint. However, in
practice, the covariation of the data is unknown and has to be estimated from the data by
an initial spatial data analysis. The analysis fits a covariation model to a covariation
measure of the data such as the sample variogram or the sample covariogram, either
graphically or by means of various least squares (LS) and maximum likelihood (ML)
approaches. Valid covariation models are all radial basis functions.
Optimal prediction is achieved when the true covariation model of the data is used. In
general, prediction (or generalization) improves as the covariation model used more
closely matches the true covariation of the data. Nevertheless, estimating the covariation
model from earth-science data has proved to be difficult in practice due to the sparseness
of data samples. Furthermore for many data sets the global stationarity assumption is
not valid. To address this, data sets are commonly manually partitioned into smaller
regions within which the stationarity assumption is valid or approximately so.
In a previous paper, we showed that there is a close, formal relationship between kriging
and RBF networks (Wan & Bone, 1996). In the equivalent RBF network formulation of
kriging, the input vector is a coordinate and the output is a scalar physical quantity of
interest. We pointed out that, under the stationarity assumption, the radial basis function
used in an RBF network can be viewed as a covariation model of the data. We showed
that an RBF network whose RBF units share an adaptive norm weighting matrix, can be
used to estimate the parameters of the postulated covariation model, outperforming more
conventional methods. In the rest of this paper we will refer to such a generalization of
the RBF network as a generalized RBF (GRBF) network.
In this paper, we discuss how a mixture of GRBF networks can be used to partition the
input space into statistically correlated regions and learn the local covariation model of
each region. We demonstrate the effectiveness of the ME approach with a simulated
data set and an aero-magnetic data set. Comparisons are also made of prediction
accuracy of a single GRBF network and other more traditional RBF networks.
2 MIXTURE OF GRBF EXPERTS
Mixture of experts (Jacobs et al , 1991) is a modular neural network architecture in
which a number of expert networks augmented by a gating network compete to learn the
data. The gating network learns to assign probability to the experts according to their
performance over various parts of the input space, and combines the outputs of the
experts accordingly. During training, each expert is made to focus on modelling the
local mapping it performs best, improving its performance further. Competition among
the experts achieves a soft partitioning of the input space into regions with each expert
network learning a separate local mapping. An hierarchical generalization of ME, the
hierarchical mixture of experts (HME), in which each expert is allowed to expand into a
gating network and a set of sub-experts, has also been proposed (Jordan & Jacobs, 1994).
Under the global stationarity assumption, training a GRBF network by minimizing the
mean squared prediction error involves adjusting its norm weighting matrix. This can
be interpreted as an attempt to match the RBF to the covariation of the data. It then
seems natural to use a mixture of GRBF networks when only local stationarity can be
assumed. After training, the gating network soft partitions the input space into
statistically correlated regions and each GRBF network provides a model of the
covariation of the data for a local region. Instead of an ME architecture, an HME
architecture can be used. However, to simplify the discussion we restrict ourselves to the
ME architecture.
Each expert in the mixture is a GRBF network. The output of expert i is given by:
$$y_i(\mathbf{x};\theta_i) = \sum_{j=1}^{n_i} w_{ij}\,\phi(\mathbf{x};\mathbf{c}_{ij},\mathbf{M}_i) + w_{i0} \qquad (2.1)$$
where $n_i$ is the number of RBF units, $\theta_i = \{\{w_{ij}\}_{j=0}^{n_i}, \{\mathbf{c}_{ij}\}_{j=1}^{n_i}, \mathbf{M}_i\}$ are the parameters
of the expert and $\phi(\mathbf{x};\mathbf{c},\mathbf{M}) = \phi(\|\mathbf{x}-\mathbf{c}\|_{\mathbf{M}})$. Assuming zero-mean Gaussian error and
common variance $\sigma_i^2$, the conditional probability of $y$ given $\mathbf{x}$ and $\theta_i$ is given by:
$$p(y|\mathbf{x},\theta_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}}\exp\!\left(-\frac{(y - y_i(\mathbf{x};\theta_i))^2}{2\sigma_i^2}\right) \qquad (2.3)$$
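As a concrete illustration (a sketch of ours, not code from the paper), the following Python fragment evaluates the output of one GRBF expert and the Gaussian likelihood of eq. (2.3). The Gaussian radial profile and the convention $\|\mathbf{d}\|_{\mathbf{M}}^2 = \mathbf{d}^T\mathbf{M}^T\mathbf{M}\mathbf{d}$ are assumptions made for the example.

```python
import numpy as np

def grbf_expert_output(x, centers, weights, w0, M):
    """Output y_i(x) of one GRBF expert: sum_j w_ij phi(||x - c_ij||_M) + w_i0  (eq. 2.1).
    A Gaussian radial profile and the convention ||d||_M^2 = d^T M^T M d are assumed here."""
    diff = centers - x                          # (n_i, D) differences x - c_ij
    r2 = np.sum((diff @ M.T) ** 2, axis=1)      # squared weighted norms
    return weights @ np.exp(-0.5 * r2) + w0

def gaussian_likelihood(y, y_pred, sigma2):
    """p(y | x, theta_i) under the zero-mean Gaussian error model of eq. (2.3)."""
    return np.exp(-(y - y_pred) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

rng = np.random.default_rng(0)
x, centers = rng.normal(size=2), rng.normal(size=(5, 2))
y_hat = grbf_expert_output(x, centers, rng.normal(size=5), 0.1, np.eye(2))
print(y_hat, gaussian_likelihood(0.3, y_hat, sigma2=0.25))
```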
Since the radial basis functions we used have compact support and each expert only
learns a local covariation model, small GRBF networks spanning overlapping regions
can be used to reduce computation at the expense of some resolution in locating the
boundaries of the regions. Also, only the subset of data within and around the region
spanned by a GRBF network is needed to train it, further reducing computational effort.
With m experts, the i-th output of the gating network gives the probability of selecting
expert i and is given by the normalized function:
$$g_i(\mathbf{x};\upsilon) = P(i|\mathbf{x},\upsilon) = \frac{\alpha_i \exp(q(\mathbf{x};\upsilon_i))}{\sum_{j=1}^{m}\alpha_j \exp(q(\mathbf{x};\upsilon_j))} \qquad (2.4)$$
where $\upsilon = \{\{\alpha_i\}_{i=1}^{m},\{\upsilon_i\}_{i=1}^{m}\}$. Using $q(\mathbf{x};\upsilon_i) = \upsilon_i^T[\mathbf{x}^T\ 1]^T$ and setting all $\alpha_i$'s to 1, the
gating network implements the softmax function and partitions the input space into a
smoothed planar tessellation.
Alternatively, with $q(\mathbf{x};\upsilon_i) = -\|\mathbf{T}_i(\mathbf{x}-\mathbf{u}_i)\|^2$ (where
$\upsilon_i = \{\mathbf{u}_i, \mathbf{T}_i\}$ consists of a location vector and an affine transformation matrix) and
restricting the $\alpha_i$'s to be non-negative, the gating network divides the input space into
packed anisotropic ellipsoids. These two partitionings are quite convenient and adequate
for most earth-science applications where $\mathbf{x}$ is a 2D or 3D coordinate.
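A minimal sketch of the two gating choices just described, assuming the softmax and ellipsoid forms of eq. (2.4); the function and variable names are ours.

```python
import numpy as np

def gate_probs_softmax(x, V):
    """Planar-tessellation gate of eq. (2.4) with q(x; v_i) = v_i^T [x, 1] and all alpha_i = 1.
    x: (D,) input; V: (m, D+1) gating weight vectors, one row per expert."""
    q = V @ np.append(x, 1.0)
    e = np.exp(q - q.max())                     # subtract max for numerical stability
    return e / e.sum()

def gate_probs_ellipsoid(x, locations, transforms, alphas):
    """Ellipsoidal gate with q(x; v_i) = -||T_i (x - u_i)||^2 and non-negative alpha_i."""
    q = np.array([-np.sum((T @ (x - u)) ** 2) for u, T in zip(locations, transforms)])
    e = alphas * np.exp(q - q.max())
    return e / e.sum()

x = np.array([0.3, -1.2])
print(gate_probs_softmax(x, np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.5]])))
print(gate_probs_ellipsoid(x, [np.zeros(2), np.ones(2)], [np.eye(2), np.eye(2)], np.array([1.0, 1.0])))
```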
The outputs of the experts are combined to give the overall output of the mixture:
$$\hat{y}(\mathbf{x};\Theta) = \sum_{i=1}^{m} P(i|\mathbf{x},\upsilon)\, y_i(\mathbf{x};\theta_i) = \sum_{i=1}^{m} g_i(\mathbf{x};\upsilon)\, y_i(\mathbf{x};\theta_i) \qquad (2.5)$$
where $\Theta = \{\upsilon, \{\theta_i\}_{i=1}^{m}\}$ and the conditional probability of observing $y$ given $\mathbf{x}$ and $\Theta$ is:
$$p(y|\mathbf{x},\Theta) = \sum_{i=1}^{m} P(i|\mathbf{x},\upsilon)\, p(y|\mathbf{x},\theta_i)\,. \qquad (2.6)$$
3 THE TRAINING ALGORITHM
The Expectation-Maximization (EM) algorithm of Jordan and Jacobs is used to train the
mixture of GRBF networks. Instead of computing the ML estimates, we extend the
algorithm by including priors on the parameters of the experts and compute the
maximum a posteriori (MAP) estimates. Since an expert may be focusing on a small
subset of the data, the priors help to prevent over-fitting and improve generalization.
Jordan & Jacobs introduced a set of indicator random variables Z = {Z<t)}~1 as missing
data to label the experts that generate the observable data D = ((x(t), y<t?} ~1. The log
joint probability of the complete data Dc = {D, Z} and parameters a can be written as:
wbere A. is a set of byperparameters. Assuming separable priors on the parameters of the
model i.e. p(alA.) = p('UIAo)D p(ail~) with A. =
N
In p(Dc,alA.) =
{~}:o' (3.1) can be rewritten as:
III
L L Zi(t) In P(ilx(t) , '0)+ In P('UIAo)
/=1 ,=1
(3.2)
Since the posterior probability of the model parameters is proportional to the joint
probability, maximizing (3.2) is equivalent to maximizing the log posterior. In the E-step, the observed data and the current network parameters are used to compute the
expected value of the complete-data log joint probability:
$$Q(\Theta|\Theta^{(k)}) = \sum_{t=1}^{N}\sum_{i=1}^{m} h_i^{(k)}(t) \ln P(i|\mathbf{x}^{(t)},\upsilon) + \ln p(\upsilon|\lambda_0)
+ \sum_{t=1}^{N}\sum_{i=1}^{m} h_i^{(k)}(t) \ln p(y^{(t)}|\mathbf{x}^{(t)},\theta_i) + \sum_{i=1}^{m}\ln p(\theta_i|\lambda_i) \qquad (3.3)$$
where
$$h_i^{(k)}(t) = \frac{P(i|\mathbf{x}^{(t)},\upsilon^{(k)})\, p(y^{(t)}|\mathbf{x}^{(t)},\theta_i^{(k)})}{\sum_{j=1}^{m} P(j|\mathbf{x}^{(t)},\upsilon^{(k)})\, p(y^{(t)}|\mathbf{x}^{(t)},\theta_j^{(k)})} \qquad (3.4)$$
In the M-step, $Q(\Theta|\Theta^{(k)})$ is maximized with respect to $\Theta$ to obtain $\Theta^{(k+1)}$. As a result of
the use of the indicator variables, the problem is decoupled into a separate set of interim
MAP estimations:
$$\upsilon^{(k+1)} = \arg\max_{\upsilon} \sum_{t=1}^{N}\sum_{i=1}^{m} h_i^{(k)}(t) \ln P(i|\mathbf{x}^{(t)},\upsilon) + \ln p(\upsilon|\lambda_0) \qquad (3.5)$$
$$\theta_i^{(k+1)} = \arg\max_{\theta_i} \sum_{t=1}^{N} h_i^{(k)}(t) \ln p(y^{(t)}|\mathbf{x}^{(t)},\theta_i) + \ln p(\theta_i|\lambda_i) \qquad (3.6)$$
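To make the decoupling concrete, a small sketch of one EM iteration follows: the E-step responsibilities of eq. (3.4), and an M-step illustrated with responsibility-weighted least squares for linear stand-in experts (the actual interim MAP fits of eqs. (3.5)-(3.6) use GRBF experts and the smoothness prior). All names and constants below are our own.

```python
import numpy as np

def e_step(gate, expert_lik):
    """Posterior responsibilities h_i(t) of eq. (3.4): gate * likelihood, renormalized.
    gate, expert_lik: arrays of shape (N, m)."""
    joint = gate * expert_lik
    return joint / joint.sum(axis=1, keepdims=True)

def m_step_weighted_ls(X, y, h):
    """Decoupled expert updates, illustrated with linear experts fitted by
    responsibility-weighted least squares (a stand-in for the interim MAP fits of eq. 3.6)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    params = []
    for i in range(h.shape[1]):
        W = np.diag(h[:, i])
        A = Xb.T @ W @ Xb + 1e-6 * np.eye(Xb.shape[1])   # tiny ridge term for stability
        params.append(np.linalg.solve(A, Xb.T @ W @ y))
    return params

rng = np.random.default_rng(1)
X, y = rng.normal(size=(50, 2)), rng.normal(size=50)
gate = np.full((50, 2), 0.5)                 # stand-in gating probabilities
lik = rng.uniform(0.1, 1.0, size=(50, 2))    # stand-in expert likelihoods p(y|x, theta_i)
h = e_step(gate, lik)
print(m_step_weighted_ls(X, y, h)[0])
```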
We assume a flat prior for the gating network parameters and, for the experts, the prior
$$p(\theta_i|\lambda_i) = \exp\!\Big(-\tfrac{\lambda_i}{2} \sum_{\tau=1}^{n_i}\sum_{r=1}^{n_i} w_{i\tau} w_{ir}\, \phi(\mathbf{c}_{i\tau}-\mathbf{c}_{ir})\Big) \Big/ Z_R(\lambda_i)\,,$$
where $Z_R(\lambda_i)$ is a normalization constant. This smoothness prior is used on the GRBF networks because
it can be derived from regularization theory (Girosi & Poggio, 1990) and at the same
time is consistent with the interpretation of the radial basis function as a covariation
model. Hence, maximizing (3.6) is equivalent to minimizing the cost function:
$$\sum_{t=1}^{N} h_i^{(k)}(t)\big[y^{(t)} - y_i(\mathbf{x}^{(t)};\theta_i)\big]^2 + \lambda_i' \sum_{\tau=1}^{n_i}\sum_{r=1}^{n_i} w_{i\tau} w_{ir}\, \phi(\mathbf{c}_{i\tau}-\mathbf{c}_{ir})$$
where $\lambda_i' = \lambda_i\sigma_i^2$. The value of the effective regularization parameter, $\lambda_i'$, can be set by
generalized cross validation (GCV) (Orr, 1995) or by the 'evidence' method of (MacKay,
1991) using re-estimation formulas. However, in the simulations, for simplicity, we
preset the value of the regularization parameter to a fixed value.
4 SIMULATION RESULTS
Using the Cholesky decomposition method (Cressie, 1993), we generate four 2D data
sets using the four different covariation models shown in Figure 1. The four data set are
then joined together to form a single 64x64 data set. Figure 3a shows the original data
set and the hard boundaries of the 4 statistically distinct regions. We randomly sample
the data to obtain a 400 sample training set and use the rest of the data for validation.
Two GRBF networks, with 64 and 144 adaptive anisotropic spherical¹ units respectively,
are used to learn the postulated global covariation model and the mapping. A 2-level
¹ The spherical model is widely used in geostatistics and when used as a covariance function is defined as
$\varphi(\mathbf{h};a) = 1 - \big\{\tfrac{3}{2}\tfrac{\|\mathbf{h}\|}{a} - \tfrac{1}{2}\big(\tfrac{\|\mathbf{h}\|}{a}\big)^3\big\}$ for $\|\mathbf{h}\| \le a$ and $\varphi(\mathbf{h};a) = 0$ for $\|\mathbf{h}\| > a$. Spherical does NOT mean isotropic.
HME with 4 GRBF network experts each with 36 spherical units are used to learn the
local covariation models and the mapping. Softmax gating networks are used and each
expert is somewhat 'localized' in each quadrant of the input space. The units of the
experts are located at the same locations as the units of the 64-unit GRBF network with
24 overlapping units between any two of the experts. The design ensures that the HME
does not have an advantage over the 64-unit GRBF network if the data is indeed globally
stationary. Figure 2 shows the local covariation models learned by the HME with the
smoothness priors and Figure 3b shows the interpolant generated and the partitioning.
Figure 1: The profile of the true local covariation models of the simulated data set.
Exponential and spherical models are used. Panels: (a) NW (exponential), (b) NE (spherical),
(c) SW (spherical), (d) SE (spherical).
Figure 2: The profile of the local covariation models learned by the HME. Panels:
(a) NW (spherical), (b) NE (spherical), (c) SW (spherical), (d) SE (spherical).
Figure 3: (a) Simulated data set and true partitions. (b) Interpolant generated by the 144
spherical unit GRBFN. (c) The HME interpolant and the soft partitioning learned (0.5,
0.9 probability contours of the 4 experts shown in solid and dotted lines respectively)
Table 1: Normalized mean squared prediction error for the simulated data set.
Network                                                      RBF units          NMSE
RBFN (isotropic RBF units with width set to the              64, Gaussian       0.761
  distance to the nearest neighbor)                          144, Gaussian      0.616
                                                             400, Gaussian      0.543
RBFN (identical isotropic RBF units with adaptive width)     64, Gaussian       0.477
                                                             144, Gaussian      0.475
GRBFN (identical RBF units with adaptive norm                64, spherical      0.506
  weighting matrix)                                          144, spherical     0.431
HME (2 levels, 4 GRBFN experts) without priors               4x36, spherical    0.938
HME (2 levels, 4 GRBFN experts) with priors                  4x36, spherical    0.433
kriging predictor (using true local models)                  -                  0.372
For comparison, a number of ordinary RBF networks are also used to learn the mapping.
In all cases, the RBF units of networks of the same size share the same locations which
are preset by a Kohonen map. Table 1 summarizes the normalized mean squared
prediction error (NMSE)- the squared prediction error divided by the variance of the
validation set - for each network. With the exception of HME, all results listed are
obtained with a smoothness prior and a regularization parameter of 0.1. Ordinary
weight decay is used for RBF networks with units of varying widths and the smoothness
prior discussed in section 3 are used for the remaining networks. The NMSE of the
kriging predictor that uses the true local models is also listed as a reference.
Similar experiments are also conducted on a real aero-magnetic data set. The flight
paths along which the data is collected are divided into a 740 data points training set and
a 1690 points validation set. The NMSE for each network is summarized in Table 2, the
local covariation models learned by the HME is shown in Figure 4, and the interpolant
generated by the HME and the partitioning is shown in Figure 5b.
Figure 4: The profile of the local covariation models of the aero-magnetic data set learned
by the HME. Panels: (a) NW (spherical), (b) NE (spherical), (c) SW (spherical), (d) SE (spherical).
Figure 5: (a) Thin-plate interpolant of the entire aero-magnetic data set. (b) The HME
interpolant and the soft partitioning (0.5, 0.9 probability contours of the 4 experts shown
in solid and dotted lines respectively).
Table 2: Normalized mean squared prediction error for the aero-magnetic data set.
Network                                                      RBF units          NMSE
RBFN (isotropic RBF units with width set to the              49, Gaussian       1.158
  distance to the nearest neighbor)                          100, Gaussian      1.256
RBFN (isotropic RBF units with width set to the mean         49, Gaussian       0.723
  distance to the 8 nearest neighbors)                       100, Gaussian      0.699
RBFN (identical isotropic RBF units with adaptive width)     49, Gaussian       0.692
                                                             100, Gaussian      0.614
GRBFN (identical RBF units with adaptive norm                49, spherical      0.684
  weighting matrix)                                          100, spherical     0.612
HME (2 levels, 4 GRBFN experts) without priors               4x25, spherical    0.389
HME (2 levels, 4 GRBFN experts) with priors                  4x25, spherical    0.315
5 DISCUSSION
The ordinary RBF networks perform worst with both the simulated data and the aeromagnetic data. As neither data set is globally stationary, the GRBF networks do not
improve prediction accuracy over the corresponding RBF networks that use isotropic
Gaussian units. In both cases, the hierarchical mixture of GRBF networks improves the
prediction accuracy when the smoothness priors are used. Without the priors, the ML
estimates of the HME parameters lead to improbably high and low predictions.
The improvement in prediction accuracy is more significant for the aero-magnetic data
set than for the simulated data set due to some apparent global covariation of the
simulated data which only becomes evident when the directional variograms of the data
are plotted. However, despite the similar NMSE, Figure 3 shows that the interpolant
generated by the 144-unit GRBF network does not contain the structural information
that is captured by the HME interpolant and is most evident in the north-east region.
In the case of the simulated data set, the HME learns the local covariation models
accurately despite the fact that the bottom level gating networks fail to partition the input
space precisely along the north-south direction. The availability of more data and the
straight east-west discontinuity allows the upper gating network to partition the input
space precisely along the east-west direction. In the north-west region, although the class
of function the expert used is different from that of the true model, the model learned
still resembles the true model especially in the inner region where it matters most.
In the case of the aero-magnetic data set, the RBF and GRBF networks perform poorly
due to the considerable extrapolation that is required in the prediction and the absence of
global stationarity. However, the HME whose units capture the local covariation of the
data interpolates and extrapolates significantly better. The partitioning as well as the
local covariation model learned by the HME seems to be reasonably accurate and leads
to the construction of prominent ridge-like structures in the north-west and south-east
which are only apparent in the thin-plate interpolant of the entire data set of Figure Sa.
6
CONCLUSIONS
We show that a mixture of GRBF networks can be used to learn the local covariation of
spatial data and improve prediction (or generalization) when the data is approximately
locally stationary - a viable assumption in many earth-science applications. We believe
that the improvement will be even more significant for data sets with larger spatial
extent especially if the local regions are more statistically distinct. The estimation of the
local covariation models of the data and the use of these models in producing the
interpolant helps to capture the structural information in the data which, apart from
accuracy of the prediction, is of critical importance to many earth-science applications.
The ME approach allows the objective and automatic partitioning of the input space into
statistically correlated regions. It also allows the use of a number of small local GRBF
networks each trained on a subset of the data making it scaleable to large data sets.
The mixture of GRBF networks approach is motivated by the statistical interpolation
method of kriging. The approach therefore has a very sound physical interpretation and
all the parameters of the network have clear statistical and/or physical meanings.
References
Cressie, N. A. (1993). Statistics for Spatial Data. Wiley, New York.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J. & Hinton, G. E. (1991). Adaptive Mixtures of Local
Experts. Neural Computation 3, pp. 79-87.
Jordan, M. I. & Jacobs, R. A. (1994). Hierarchical Mixtures of Experts and the EM Algorithm.
Neural Computation 6, pp. 181-214.
MacKay, D. J. (1992). Bayesian Interpolation. Neural Computation 4, pp. 415-447.
Orr, M. J. (1995). Regularization in the Selection of Radial Basis Function Centers. Neural
Computation 7, pp. 606-623.
Poggio, T. & Girosi, F. (1990). Networks for Approximation and Learning. In Proceedings of the
IEEE 78, pp. 1481-1497.
Wan, E. & Bone, D. (1996). A Neural Network Approach to Covariation Model Fitting and the
Interpolation of Sparse Earth-science Data. In Proceedings of the Seventh Australian
Conference on Neural Networks, pp. 121-126.
344 | 1,314 | Smoothing Regularizers for
Projective Basis Function Networks
John E. Moody and Thorsteinn S. Rognvaldsson *
Department of Computer Science, Oregon Graduate Institute
PO Box 91000, Portland, OR 97291
moody@cse.ogi.edu denni@cca.hh.se
Abstract
Smoothing regularizers for radial basis functions have been studied extensively,
but no general smoothing regularizers for projective basis functions (PBFs), such
as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple $m$th-order smoothing regularizers for
networks of the form $f(W,\mathbf{x}) = \sum_{j=1}^{N} u_j\, g[\mathbf{x}^T\mathbf{v}_j + v_{j0}] + u_0$, with general
projective basis functions $g[\cdot]$. These regularizers are:
$$R_G(W,m) = \sum_{j=1}^{N} u_j^2 \|\mathbf{v}_j\|^{2m-1} \qquad \textit{Global Form}$$
$$R_L(W,m) = \sum_{j=1}^{N} u_j^2 \|\mathbf{v}_j\|^{2m} \qquad \textit{Local Form}$$
These regularizers bound the corresponding $m$th-order smoothing integral
$$S(W,m) = \int d^D x\, \Omega(\mathbf{x}) \left\| \frac{\partial^m f(W,\mathbf{x})}{\partial \mathbf{x}^m} \right\|^2,$$
where $W$ denotes all the network weights $\{u_j, u_0, \mathbf{v}_j, v_{j0}\}$, and $\Omega(\mathbf{x})$ is a weighting function on the $D$-dimensional input space. The global and local cases are
distinguished by different choices of $\Omega(\mathbf{x})$.
The simple algebraic forms R(W, m) enable the direct enforcement of smoothness without the need for costly Monte-Carlo integrations of S (W, m). The new
regularizers are shown to yield better generalization errors than weight decay
when the implicit assumptions in the latter are wrong. Unlike weight decay, the
new regularizers distinguish between the roles of the input and output weights
and capture the interactions between them.
? Address as of September I. 1996: Centre for Computer Architecture. University of Halmstad,
P.O.Box 823, S-301 18 Halmstad, Sweden
1 Introduction: What are the right biases?
Regularization is a technique for reducing prediction risk by balancing model bias and
model variance. A regularizer R(W) imposes prior constraints on the network parameters
W. Using squared error as the most common example, the objective functional that is
minimized during training is
$$E = \frac{1}{2M} \sum_{i=1}^{M} \big[y^{(i)} - f(W,\mathbf{x}^{(i)})\big]^2 + \lambda R(W)\,, \qquad (1)$$
where $y^{(i)}$ are target values corresponding to the inputs $\mathbf{x}^{(i)}$, $M$ is the number of training
patterns, and the regularization parameter $\lambda$ controls the importance of the prior constraints
relative to the fit to the data. Several approaches can be applied to estimate $\lambda$ (e.g. Eubank
(1988) or Wahba (1990)).
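A minimal sketch of the penalized objective of eq. (1) for a generic model; the linear model and weight-decay regularizer in the usage example are placeholders of ours, and $\lambda$ is simply fixed rather than estimated.

```python
import numpy as np

def penalized_objective(params, model, regularizer, X, y, lam):
    """E = (1/2M) sum_i [y_i - f(W, x_i)]^2 + lambda * R(W)   (eq. 1).
    model(params, X) -> predictions; regularizer(params) -> scalar R(W)."""
    resid = y - model(params, X)
    return 0.5 * np.mean(resid ** 2) + lam * regularizer(params)

# toy usage with a linear model and weight decay as the placeholder regularizer
model = lambda w, X: X @ w
weight_decay = lambda w: np.sum(w ** 2)
X, y = np.random.default_rng(2).normal(size=(20, 3)), np.zeros(20)
print(penalized_objective(np.ones(3), model, weight_decay, X, y, lam=0.01))
```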
Regularization reduces model variance at the cost of some model bias. An important
question arises: What are the right biases? (Geman, Bienenstock & Doursat 1992). A good
choice of R(W) will result in lower expected prediction error than will a poor choice.
Weight decay is often used effectively, but it is an ad hoc technique that controls weight
values without regard to the function 1(' ). It is thus not necessarily optimal and not appropriate for arbitrary function parameterizations. It will give very different results, depending
upon whether a function is parameterized, for example, as f (w, x) or as I (w -I, x).
Since many real world problems are intrinsically smooth, we propose that in many cases,
an appropriate bias to impose is to favor solutions with low mth-order curvature. Direct penalization of curvature is a parametrization-independent approach. The desired regularizer
is the standard D dimensional curvature functional of order m:
$$S(W,m) = \int d^D x\, \Omega(\mathbf{x}) \left\| \frac{\partial^m f(W,\mathbf{x})}{\partial \mathbf{x}^m} \right\|^2 \qquad (2)$$
Here $\|\cdot\|$ denotes the ordinary euclidean tensor norm and $\partial^m/\partial \mathbf{x}^m$ denotes the mth order
differential operator. The weighting function $\Omega(\mathbf{x})$ ensures that the integral converges and
determines the region over which we require the function to be smooth. $\Omega(\mathbf{x})$ is not required
to be equal to the input density $p(\mathbf{x})$, and will most often be different.
The use of smoothing functionals like (2) has been extensively studied for smoothing splines
(Eubank 1988, Hastie & Tibshirani 1990, Wahba 1990) and for radial basis function (RBF)
networks (powell 1987, Poggio & Girosi 1990, Girosi, Jones & Poggio 1995). However,
no general class of smoothing regularizers that directly enforce smoothness
m) for
projective basis junctions (PBFs), such as the widely used sigmoidal PBFs, has been
previously proposed.
sew,
Since explicit enforcement of smoothness using (2) requires costly, impractical MonteCarlo integrations, 1 we derive algebraically-simple regularizers R(W, m) that tightly bound
S(W,m).
2
Derivation of Simple Regularizers from Smoothing Functionals
We consider single hidden layer networks with D input variables, Nh nonlinear hidden
units, and No linear output units. For clarity, we set No = 1, and drop the subscript on Nh
¹Note that (2) is not just one integral, but actually $O(D^m)$ integrals, since the norm of the operator
$\partial^m/\partial \mathbf{x}^m$ has $O(D^m)$ terms. This is extremely expensive to compute for large D or large m.
(the derivation is trivially extended to the case $N_o > 1$). Thus, our network function is
$$f(\mathbf{x}) = \sum_{j=1}^{N} u_j\, g[\theta_j, \mathbf{x}] + u_0 \qquad (3)$$
where $g[\cdot]$ are the nonlinear transfer functions of the internal hidden units, $\mathbf{x} \in \mathbb{R}^D$ is the
input vector, $\theta_j$ are the parameters associated with internal unit $j$, and $W$ denotes all
parameters in the network.
For regularizers $R(W)$, we will derive strict upper bounds for $S(W, m)$. We desire the
regularizers to be as general as possible so that they can easily be applied to different
network models. Without making any assumptions about $\Omega(\mathbf{x})$ or $g(\cdot)$, we have the upper
bound
$$S(W,m) \le N \sum_{j=1}^{N} u_j^2 \int d^D x\, \Omega(\mathbf{x}) \left\| \frac{\partial^m g[\theta_j,\mathbf{x}]}{\partial \mathbf{x}^m} \right\|^2 \qquad (4)$$
which follows from the inequality $(\sum_{j=1}^{N} a_j)^2 \le N \sum_{j=1}^{N} a_j^2$. We consider two possible
options for the weighting function $\Omega(\mathbf{x})$. One is to require global smoothness, in which
case $\Omega(\mathbf{x})$ is a very wide function that covers all relevant parts of the input space (e.g. a
very wide gaussian distribution or a constant distribution). The other option is to require
local smoothness, in which case $\Omega(\mathbf{x})$ approaches zero outside small regions around some
reference points (e.g. the training data).
2.1
Projective Basis Representations
Projective basis functions (PBFs) are of the form $g[\theta_j, \mathbf{x}] = g[\mathbf{x}^T\mathbf{v}_j + v_{j0}]$, where $\theta_j =
\{\mathbf{v}_j, v_{j0}\}$, $\mathbf{v}_j = (v_{j1}, v_{j2}, \ldots, v_{jD})$ is the vector of weights connecting hidden unit $j$ to the
inputs, and $v_{j0}$ is the bias, offset, or threshold. For PBFs, expression (4) simplifies to
$$S(W,m) \le N \sum_{j=1}^{N} u_j^2 \|\mathbf{v}_j\|^{2m}\, I_j(W,m), \qquad (5)$$
with
$$I_j(W,m) \equiv \int d^D x\, \Omega(\mathbf{x}) \left( \frac{d^m g[z_j(\mathbf{x})]}{dz^m} \right)^2 \qquad (6)$$
where $z_j(\mathbf{x}) = \mathbf{x}^T\mathbf{v}_j + v_{j0}$.
Although the most commonly used $g[\cdot]$'s are sigmoids, our analysis applies to many other
forms, for example flexible Fourier units, polynomials, and rational functions.³ The classes
of PBF transfer functions $g[\cdot]$ that are applicable (as determined by $\Omega(\mathbf{x})$) are those for
which the integral (8) is finite and well-defined.
2.2 Global weighting
For the global case, we select a gaussian form for the weighting function
$$\Omega_G(\mathbf{x}) = (\sqrt{2\pi}\,\sigma)^{-D} \exp\!\left[-\frac{\|\mathbf{x}\|^2}{2\sigma^2}\right] \qquad (7)$$
and require $\sigma$ to be large. Integrating out all dimensions, except the one associated with the
projection vector $\mathbf{v}_j$, we are left with
$$I_j(W,m) = \frac{1}{\sqrt{2\pi}\,\sigma\|\mathbf{v}_j\|} \int_{-\infty}^{\infty} dz\, \exp\!\left[-\frac{(z-v_{j0})^2}{2\sigma^2\|\mathbf{v}_j\|^2}\right] \left(\frac{d^m g[z]}{dz^m}\right)^2 \qquad (8)$$
If $(d^m g[z]/dz^m)^2$ is integrable and approaches zero outside a region that is small compared
to $\sigma$, we can bound (8) by setting the exponential equal to unity. This implies
$$I_j(W,m) \le \frac{1}{\|\mathbf{v}_j\|}\, I(m)\,, \quad \text{with} \quad I(m) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} dz \left(\frac{d^m g[z]}{dz^m}\right)^2 \qquad (9)$$
The bound of equation (5) then becomes
$$S(W,m) \le N\, I(m) \sum_{j=1}^{N} u_j^2 \|\mathbf{v}_j\|^{2m-1} = N\, I(m)\, R_G(W,m) \qquad (10)$$
where the subscript G stands for global. Since $\lambda$ absorbs all constant multiplicative factors,
we need only weigh $R_G(W, m)$ into the training objective function.
²Throughout, we use small-letter boldface to denote vector quantities.
³See for example Moody & Yarvin (1992).
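As an illustration (not from the paper), both regularizers can be evaluated directly from the weights; the sketch below assumes the weights of a single-hidden-layer PBF network are stored as an output-weight vector u and an input-weight matrix V, one row per hidden unit.

```python
import numpy as np

def r_global(u, V, m):
    """Global smoothing regularizer R_G(W, m) = sum_j u_j^2 ||v_j||^(2m-1)  (eq. 10)."""
    return np.sum(u ** 2 * np.linalg.norm(V, axis=1) ** (2 * m - 1))

def r_local(u, V, m):
    """Local smoothing regularizer R_L(W, m) = sum_j u_j^2 ||v_j||^(2m)  (Local Form; eq. (13) below)."""
    return np.sum(u ** 2 * np.linalg.norm(V, axis=1) ** (2 * m))

rng = np.random.default_rng(3)
u, V = rng.normal(size=10), rng.normal(size=(10, 4))   # output weights u_j, input weight vectors v_j
print(r_global(u, V, m=2), r_local(u, V, m=2))
```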
2.2.1
Local weighting
For the local case, we consider weighting functions of the general form
$$\Omega_L(\mathbf{x}) = \frac{1}{M} \sum_{i=1}^{M} \Omega(\mathbf{x} - \mathbf{x}^{(i)}, \sigma) \qquad (11)$$
where $\mathbf{x}^{(i)}$ are a set of points, and $\Omega(\mathbf{x} - \mathbf{x}^{(i)}, \sigma)$ is a function that decays rapidly for large
$\|\mathbf{x} - \mathbf{x}^{(i)}\|$. We require that $\lim_{\sigma\to 0} \Omega(\mathbf{x} - \mathbf{x}^{(i)}, \sigma) = \delta(\mathbf{x} - \mathbf{x}^{(i)})$. Thus, when the $\mathbf{x}^{(i)}$ are the
training data points, the limiting distribution of (11) is the empirical distribution.
In the limit $\sigma \to 0$, equation (5) becomes
$$S(W,m) \le N \sum_{j=1}^{N} u_j^2 \|\mathbf{v}_j\|^{2m}\, \frac{1}{M} \sum_{i=1}^{M} \left(\frac{d^m g[z]}{dz^m}\bigg|_{z=z_j(\mathbf{x}^{(i)})}\right)^2 \qquad (12)$$
For the empirical distribution, we could compute the expression within parenthesis in (12)
for each input pattern $\mathbf{x}^{(i)}$ during training and use it as our regularization cost. This is done
by Bishop (1993) for the special case $m = 2$. However, this requires explicit design for
each transfer function and becomes increasingly complicated as we go to higher $m$. To
construct a simpler and more general form, we instead assume that the $m$th derivative of
$g[\cdot]$ is bounded from above by $C_L(m) \equiv \max_z \left(\frac{d^m g[z]}{dz^m}\right)^2$. This gives the bound
$$S(W,m) \le N\, C_L(m) \sum_{j=1}^{N} u_j^2 \|\mathbf{v}_j\|^{2m} = N\, C_L(m)\, R_L(W,m) \qquad (13)$$
for the maximum local curvature of the function (the subscript L denotes local limit).
3 Empirical Example
We have done extensive simulation studies that demonstrate the efficacy of our new regularizers for PBF networks on a variety of problems. An account is given in Moody &
Rognvaldsson (1996). Here, we demonstrate the value of using smoothing regularizers on a
simple problem which illustrates a key difference between smoothing and quadratic weight
decay, the two dimensional bilinear function
$$y(x_1, x_2) = x_1 x_2 \qquad (14)$$
This example was used by Friedman & Stuetzle (1981) to demonstrate projection pursuit
regression. It is the simplest function with interactions between input variables.
We fit this function with one hidden layer networks using the m = {1, 2, 3} smoothing
regularizers, comparing the results with using weight decay. In a large set of experiments,
we find that both the global and local smoothing regularizers with m = 2 and m = 3
outperform weight decay. An example is shown in figure 1. The local m = 1 case performs
poorly, which is unsurprising, given that the target function is quadratic. Weight decay
performs poorly because it lacks any form of interaction between the input layer and output
layer weights v j and U j.
Figure 1: (a) Generalization errors on the $x_1 x_2$ problem, with 40 training data points and a
signal-to-noise ratio of 2/3, for different values of the regularization parameter and different
orders of the smoothing regularizer. For each value of $\lambda$, 10 networks with 8 hidden units
have been trained and averaged (geometric average). The shaded area shows the 95%
confidence bands for the average performance of a linear model on the same problem.
(b) Similar plot for the m = 3 smoother compared to the standard weight decay method.
Error bars mark the estimated standard deviation of the mean generalization error of the 10
networks. The m = 3 regularizer performs significantly better than weight decay.
4
Quality of the Regularizers: Approximations vs Bounds
Equations (10) and (13) are strict upper bounds to the smoothness functional S(W, m),
eq. (2), in the global and local limits, $\sigma \to \infty$ and $\sigma \to 0$. However, if the bounds are not
sufficiently tight, then penalizing R(W, m) may not have the effect of penalizing S(W, m).⁴
4 For the proposed regularizers R(W, m) to be effective in penalizing S(W, m), we need only
have an approximate monotonic relationship between them.
The bound (4) is tighter the more uncorrelated the mth derivatives of the internal unit
activities are. If they are uncorrelated, then the bounds of equations (10) and (13) can be
replaced by the approximations:
$$S_G(W,m) \approx I_G(m)\, R_G(W,m) \qquad (15)$$
$$S_L(W,m) \approx C_L(m)\, R_L(W,m)\,, \qquad (16)$$
obtained using $(\sum_{j=1}^{N} a_j)^2 \approx \sum_{j=1}^{N} a_j^2$. The right hand sides differ from those in equations (10) and
(13) only by a factor of N, so these approximations are actually proportional to the bounds.
For our regularizers, the constant factor N doesn't matter, since it can be absorbed into
the regularization parameter $\lambda$ (along with the values of the factors $I_G(m)$ or $C_L(m)$). In
practical terms, there is no difference between using the upper bounds (10) and (13) or the
uncorrelated approximations (15) and (16). Our empirical results (see figure 2) indicate
that an approximate linear relationship holds between S(W, m) and R(W; m) for both the
global and the local cases. This suggests that the uncorrelated hidden unit assumption
yields a good approximation. This approximation also improves with the dimensionality of
the input space. Extensive results and discussion are presented in (Moody & Rognvaldsson
1996).
[Figure 2 plots: left, first-order global smoother with projection-based tanh units (linear regression, correlation 0.992); right, second-order global smoother with projection-based tanh units (linear regression, correlation 0.998). Axes: value of $R_G(W,m)$ versus Monte Carlo estimated value of $S(W,m)$.]
Figure 2: Linear correlation between $S(W,m)$ and the global $R_G(W,m)$ for neural networks with 10 input units, 10 internal $\tanh[\cdot]$ PBF units, and one linear output. The values
of $S(W,m)$ are computed through Monte Carlo integration. The left graph shows m = 1
and the right graph shows m = 2. Results are similar for the local form $R_L(W,m)$.
5 Summary
Our regularizers R(W; m) are the first general class of mth-order smoothing regularizers
to be proposed for projective basis function (PBF) networks. They apply to large classes
of transfer functions g[.], including sigmoids. They differ fundamentally from quadratic
weight decay in that they distinguish the roles of the input and output weights and capture
the interactions between them.
Our approach is quite different from that developed for smoothing splines and smoothing
radial basis functions (RBFs), since we derive smoothing regularizers for given classes of
units $g[\theta, \mathbf{x}]$, rather than derive the forms of the units $g[\cdot]$ by requiring them to be Greens
functions of the smoothing operator $S(\cdot)$. Our approach thus has the advantage that it can
be applied to the types of networks most often used in practice, namely PBFs.
In Moody & Rognvaldsson (1996), we present further analysis and simulation results for
PBFs. We have also extended our work to RBFs (Moody & Rognvaldsson 1997).
Acknowledgements
Both authors thank Steve Rehfuss and Lizhong Wu for stimulating input. John Moody
thanks Volker Tresp for a provocative discussion at a 1991 Neural Networks Workshop
sponsored by the Deutsche Informatik Akademie. We gratefully acknowledge support for
this work from ARPA and ONR (grant N00014-92-J-4062), NSF (grant CDA-9503968), the
Swedish Institute, and the Swedish Research Council for Engineering Sciences (contract
TFR-282-95-847).
References
Bishop, C. (1993), 'Curvature-driven smoothing: A learning algorithm for feed forward networks',
IEEE Trans. Neural Networks 4, 882-884.
Eubank, R. L. (1988), Spline Smoothing and Nonparametric Regression, Marcel Dekker, Inc.
Friedman, J. H. & Stuetzle, W. (1981), 'Projection pursuit regression', J. Amer. Stat. Assoc.
76(376),817-823.
Geman, S., Bienenstock, E. & Doursat, R. (1992), 'Neural networks and the bias/variance dilemma',
Neural Computation 4(1), 1-58.
Girosi, E, Jones, M. & Poggio, T. (1995), 'Regularization theory and neural network architectures',
Neural Computation 7,219-269.
Hastie, T. 1. & libshirani, R. 1. (1990). Generalized Additive Models, Vol. 43 of Monographs on
Statistics and Applied Probability, Chapman and Hall.
Moody, 1. E. & Yarvin, N. (1992), Networks with learned unit response functions, in J. E. Moody,
S. J. Hanson & R. P. Lippmann, eds. 'Advances in Neural Information Processing Systems 4',
Morgan Kaufmann Publishers, San Mateo. CA. pp. 1048-55.
Moody, J. & Rognvaldsson. T. (1996), Smoothing regularizers for projective basis function networks.
Submitted to Neural Computation.
Moody, 1. & Rognvaldsson, T. (1997), Smoothing regularizers for radial basis function networks,
Manuscript in preparation.
Poggio, T. & Girosi, E (1990). 'Networks for approximation and learning', IEEE Proceedings 78(9).
Powell, M. (1987). Radial basis functions for multivariable interpolation: a review.? in 1. Mason &
M. Cox, eds, 'Algorithms for Approximation', Clarendon Press, Oxford.
Wahba, G. (1990), Spline models for observational data, CBMS-NSF Regional Conference Series in
Applied Mathematics.
345 | 1,315 | One-unit Learning Rules for
Independent Component Analysis
Aapo Hyvarinen and Erkki Oja
Helsinki University of Technology
Laboratory of Computer and Information Science
Rakentajanaukio 2 C, FIN-02150 Espoo, Finland
email: {Aapo.Hyvarinen, Erkki.Oja}@hut.fi
Abstract
Neural one-unit learning rules for the problem of Independent Component Analysis (ICA) and blind source separation are introduced.
In these new algorithms, every ICA neuron develops into a separator that finds one of the independent components. The learning
rules use very simple constrained Hebbian/anti-Hebbian learning
in which decorrelating feedback may be added. To speed up the
convergence of these stochastic gradient descent rules, a novel computationally efficient fixed-point algorithm is introduced.
1
Introduction
Independent Component Analysis (ICA) (Comon, 1994; Jutten and Herault, 1991)
is a signal processing technique whose goal is to express a set of random variables as linear combinations of statistically independent component variables. The
main applications of ICA are in blind source separation, feature extraction, and
blind deconvolution. In the simplest form of ICA (Comon, 1994), we observe m
scalar random variables $x_1, \ldots, x_m$ which are assumed to be linear combinations of
n unknown components $s_1, \ldots, s_n$ that are zero-mean and mutually statistically independent. In addition, we must assume $n \le m$. If we arrange the observed variables
$x_i$ into a vector $\mathbf{x} = (x_1, x_2, \ldots, x_m)^T$ and the component variables $s_j$ into a vector
s, the linear relationship can be expressed as
x=As
(1)
Here, A is an unknown m x n matrix of full rank, called the mixing matrix. Noise
may also be added to the model, but it is omitted here for simplicity. The basic
problem of ICA is then to estimate (separate) the realizations of the original independent components Sj, or a subset of them, using only the mixtures Xi. This is
roughly equivalent to estimating the rows, or a subset of the rows, of the pseudoinverse of the mixing matrix A . The fundamental restriction of the model is that
we can only estimate non-Gaussian independent components, or ICs (except if just
one of the ICs is Gaussian). Moreover, the ICs and the columns of A can only be
estimated up to a multiplicative constant, because any constant multiplying an IC
in eq. (1) could be cancelled by dividing the corresponding column of the mixing
matrix A by the same constant. For mathematical convenience, we define here that
the ICs Sj have unit variance. This makes the (non-Gaussian) ICs unique, up to
their signs. Note the assumption of zero mean of the ICs is in fact no restriction, as
this can always be accomplished by subtracting the mean from the random vector
x. Note also that no order is defined between the ICs.
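For illustration, a small sketch (ours, not the authors') that generates data from the model x = As with two unit-variance, zero-mean non-Gaussian sources, one sub-Gaussian (uniform) and one super-Gaussian (Laplacian); the mixing matrix is drawn at random.

```python
import numpy as np

def make_ica_data(n_samples=5000, seed=0):
    """Draw observations x = A s from model (1): zero-mean, unit-variance, mutually
    independent non-Gaussian sources mixed by an (unknown to the separator) matrix A."""
    rng = np.random.default_rng(seed)
    s_sub = rng.uniform(-np.sqrt(3), np.sqrt(3), n_samples)        # sub-Gaussian source
    s_super = rng.laplace(scale=1.0 / np.sqrt(2), size=n_samples)  # super-Gaussian source
    S = np.vstack([s_sub, s_super])
    A = rng.normal(size=(2, 2))                                    # mixing matrix
    return A @ S, A, S

X, A, S = make_ica_data()
print(X.shape, np.cov(S).round(2))
```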
In blind source separation (Jutten and Herault, 1991), the observed values of x
correspond to a realization of an m-dimensional discrete-time signal x(t), t = 1,2, ....
Then the components Sj(t) are called source signals. The source signals are usually
original, uncorrupted signals or noise sources. Another application of ICA is feature
extraction (Bell and Sejnowski, 1996; Hurri et al., 1996), where the columns of the
mixing matrix A define features, and the Sj signal the presence and the amplitude
of a feature. A closely related problem is blind deconvolution, in which a convolved
version x(t) of a scalar i.i.d. signal s(t) is observed. The goal is then to recover the
original signal s(t) without knowing the convolution kernel (Donoho, 1981). This
problem can be represented in a way similar to eq. (1), replacing the matrix A by
a filter.
The current neural algorithms for Independent Component Analysis, e.g. (Bell and
Sejnowski, 1995; Cardoso and Laheld, 1996; Jutten and Herault, 1991; Karhunen
et al., 1997; Oja, 1995) try to estimate simultaneously all the components. This is
often not necessary, nor feasible, and it is often desired to estimate only a subset of
the ICs. This is the starting point of our paper. We introduce learning rules for a
single neuron, by which the neuron learns to estimate one of the ICs. A network of
several such neurons can then estimate several (1 to n) ICs. Both learning rules for
the 'raw' data (Section 3) and for whitened data (Section 4) are introduced. If the
data is whitened, the convergence is speeded up, and some interesting simplifications
and approximations are made possible. Feedback mechanisms (Section 5) are also
mentioned. Finally, we introduce a novel approach for performing the computations
needed in the ICA learning rules, which uses a very simple, yet highly efficient, fixedpoint iteration scheme (Section 6). An important generalization of our learning rules
is discussed in Section 7, and an illustrative experiment is shown in Section 8.
2
Using Kurtosis for ICA Estimation
We begin by introducing the basic mathematical framework of ICA. Most suggested solutions for ICA use the fourth-order cumulant or kurtosis of the signals,
defined for a zero-mean random variable $v$ as $\mathrm{kurt}(v) = E\{v^4\} - 3(E\{v^2\})^2$. For a
Gaussian random variable, kurtosis is zero. Therefore, random variables of positive
kurtosis are sometimes called super-Gaussian, and variables of negative kurtosis
sub-Gaussian. Note that for two independent random variables $v_1$ and $v_2$ and for a
scalar $\alpha$, it holds $\mathrm{kurt}(v_1 + v_2) = \mathrm{kurt}(v_1) + \mathrm{kurt}(v_2)$ and $\mathrm{kurt}(\alpha v_1) = \alpha^4\, \mathrm{kurt}(v_1)$.
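A minimal sketch of the sample kurtosis and a numerical check of these properties; sample sizes and distributions are our choices.

```python
import numpy as np

def kurt(v):
    """Sample kurtosis kurt(v) = E{v^4} - 3 (E{v^2})^2 for a zero-mean variable v."""
    return np.mean(v ** 4) - 3.0 * np.mean(v ** 2) ** 2

rng = np.random.default_rng(5)
g = rng.normal(size=200000)                       # Gaussian: kurtosis close to 0
u = rng.uniform(-np.sqrt(3), np.sqrt(3), 200000)  # sub-Gaussian: kurtosis close to -1.2
print(round(kurt(g), 2), round(kurt(u), 2), round(kurt(2 * u) / kurt(u), 1))  # last ratio near 16 = 2^4
```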
Let us search for a linear combination of the observations $x_i$, say, $\mathbf{w}^T\mathbf{x}$, such that
it has maximal or minimal kurtosis. Obviously, this is meaningful only if $\mathbf{w}$ is
somehow bounded; let us assume that the variance of the linear combination is
constant: $E\{(\mathbf{w}^T\mathbf{x})^2\} = 1$. Using the mixing matrix A in eq. (1), let us define
$\mathbf{z} = \mathbf{A}^T\mathbf{w}$. Then also $\|\mathbf{z}\|^2 = \mathbf{w}^T\mathbf{A}\mathbf{A}^T\mathbf{w} = \mathbf{w}^T E\{\mathbf{x}\mathbf{x}^T\}\mathbf{w} = E\{(\mathbf{w}^T\mathbf{x})^2\} = 1$.
Using eq. (1) and the properties of the kurtosis, we have
$$\mathrm{kurt}(\mathbf{w}^T\mathbf{x}) = \mathrm{kurt}(\mathbf{w}^T\mathbf{A}\mathbf{s}) = \mathrm{kurt}(\mathbf{z}^T\mathbf{s}) = \sum_{j=1}^{n} z_j^4\, \mathrm{kurt}(s_j) \qquad (2)$$
Under the constraint $E\{(\mathbf{w}^T\mathbf{x})^2\} = \|\mathbf{z}\|^2 = 1$, the function in (2) has a number of
local minima and maxima. To make the argument clearer, let us assume for the
moment that the mixture contains at least one IC whose kurtosis is negative, and
at least one whose kurtosis is positive. Then, as may be obvious, and was rigorously
proven by Delfosse and Loubaton (1995), the extremal points of (2) are obtained
when all the components $z_j$ of $\mathbf{z}$ are zero except one component which equals $\pm 1$.
In particular, the function in (2) is maximized (resp. minimized) exactly when the
linear combination $\mathbf{w}^T\mathbf{x} = \mathbf{z}^T\mathbf{s}$ equals, up to the sign, one of the ICs $s_j$ of positive
(resp. negative) kurtosis. Thus, finding the extrema of kurtosis of w T x enables
estimation of the independent components. Equation (2) also shows that Gaussian
components, or other components whose kurtosis is zero, cannot be estimated by
this method.
To actually minimize or maximize kurt(w Tx), a neural algorithm based on gradient
descent or ascent can be used. Then w is interpreted as the weight vector of a neuron
with input vector $\mathbf{x}$ and linear output $\mathbf{w}^T\mathbf{x}$. The objective function can be simplified
because of the constraint $E\{(\mathbf{w}^T\mathbf{x})^2\} = 1$: it holds $\mathrm{kurt}(\mathbf{w}^T\mathbf{x}) = E\{(\mathbf{w}^T\mathbf{x})^4\} - 3$.
The constraint $E\{(\mathbf{w}^T\mathbf{x})^2\} = 1$ itself can be taken into account by a penalty term.
The final objective function is then of the form
(3)
where $\alpha, \beta > 0$ are arbitrary scaling constants, and F is a suitable penalty function.
Our basic leA learning rules are stochastic gradient descents or ascents for an
objective function of this form. In the next two sections, we present learning rules
resulting from adequate choices of the penalty function F . Preprocessing of the
data (whitening) is also used to simplify J in Section 4. An alternative method for
finding the extrema of kurtosis is the fixed-point algorithm; see Section 6.
3
Basic One-Unit ICA Learning Rules
In this section, we introduce learning rules for a single neural unit. These basic
learning rules require no preprocessing of the data, except that the data must be
made zero-mean. Our learning rules are divided into two categories. As explained in
Section 2, the learning rules either minimize the kurtosis of the output to separate
ICs of negative kurtosis, or maximize it for components of positive kurtosis.
Let us assume that we observe a sample sequence x(t) of a vector x that is a
linear combination of independent components $s_1, \ldots, s_n$ according to eq. (1). For
separating one of the ICs of negative kurtosis, we use the following learning rule for
the weight vector w of a neuron:
$$\Delta\mathbf{w}(t) \propto \mathbf{x}(t)\, g^{-}(\mathbf{w}(t)^T\mathbf{x}(t)) \qquad (4)$$
Here, the non-linear learning function $g^{-}$ is a simple polynomial: $g^{-}(u) = au - bu^3$
with arbitrary scaling constants $a, b > 0$. This learning rule is clearly a stochastic
gradient descent for a function of the form (3), with $F(u) = -u$. To separate an IC
of positive kurtosis, we use the following learning rule:
$$\Delta\mathbf{w}(t) \propto \mathbf{x}(t)\, g^{+}_{t}(\mathbf{w}(t)^T\mathbf{x}(t)) \qquad (5)$$
where the learning function $g^{+}_{t}$ is defined as follows:
$$g^{+}_{t}(u) = -au\,(\mathbf{w}(t)^T\mathbf{C}\mathbf{w}(t))^2 + bu^3$$
where $\mathbf{C}$ is the covariance matrix of $\mathbf{x}(t)$, i.e. $\mathbf{C} = E\{\mathbf{x}(t)\mathbf{x}(t)^T\}$, and $a, b > 0$ are arbitrary constants. This learning rule is a stochastic gradient ascent for a function of the form (3), with $F(u) = -u^2$. Note that
$(\mathbf{w}(t)^T\mathbf{C}\mathbf{w}(t))^2$ in $g^{+}$ might also be replaced by $(E\{(\mathbf{w}(t)^T\mathbf{x}(t))^2\})^2$ or by $\|\mathbf{w}(t)\|^4$ to
enable a simpler implementation.
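A sketch of rule (4) as an online update follows; the constants a and b, the learning rate, and the stand-in data are illustrative choices of ours, and nothing is claimed here about the settings needed for reliable convergence in practice.

```python
import numpy as np

def one_unit_rule_neg(X, a=1.0, b=1.0, lr=0.005, n_epochs=5, seed=0):
    """Online version of rule (4): delta w ~ x g^-(w^T x), g^-(u) = a u - b u^3,
    aimed at an IC of negative kurtosis. Constants and learning rate are illustrative."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[0])
    for _ in range(n_epochs):
        for x in X.T:                     # one observation vector per step
            u = w @ x
            w = w + lr * x * (a * u - b * u ** 3)
    return w

rng = np.random.default_rng(8)
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, 2000))   # stand-in zero-mean data
print(one_unit_rule_neg(X))
```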
It can be proven (Hyvarinen and Oja, 1996b) that using the learning rules (4) and
(5), the linear output converges to $c\,s_j(t)$ where $s_j(t)$ is one of the ICs, and $c$ is a
scalar constant. This multiplication of the source signal by the constant c is in fact
not a restriction, as the variance and the sign of the sources cannot be estimated.
The only condition for convergence is that one of the ICs must be of negative (resp.
positive) kurtosis, when learning rule (4) (resp. learning rule (5)) is used. Thus
we can say that the neuron learns to separate (estimate) one of the independent
components. It is also possible to combine these two learning rules into a single rule
that separates an IC of any kurtosis; see (Hyvarinen and Oja, 1996b).
4
One-Unit ICA Learning Rules for Whitened Data
Whitening, also called sphering, is a very useful preprocessing technique. It speeds
up the convergence considerably, makes the learning more stable numerically, and
allows some interesting modifications of the learning rules. Whitening means that
the observed vector x is linearly transformed to a vector v = Ux such that its
elements Vi are mutually uncorrelated and all have unit variance (Comon, 1994).
Thus the correlation matrix of v equals unity: E{ vv T } = I. This transformation is
always possible and can be accomplished by classical Principal Component Analysis.
At the same time, the dimensionality of the data should be reduced so that the
dimension of the transformed data vector v equals n, the number of independent
components. This also has the effect of reducing noise.
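A minimal PCA-based whitening sketch consistent with this description; the eigendecomposition route and the option to drop trailing components are an obvious implementation choice of ours, not a prescription from the paper.

```python
import numpy as np

def whiten(X, n_components=None):
    """Whiten X (variables in rows) with PCA: v = U x has E{v v^T} = I.
    Optionally reduce to n_components dimensions, as suggested in the text."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))             # eigendecomposition of the covariance
    order = np.argsort(d)[::-1]
    d, E = d[order], E[:, order]
    if n_components is not None:
        d, E = d[:n_components], E[:, :n_components]
    U = np.diag(1.0 / np.sqrt(d)) @ E.T           # whitening matrix
    return U @ Xc, U

V, U = whiten(np.random.default_rng(6).normal(size=(3, 1000)) * [[1.0], [2.0], [0.5]])
print(np.cov(V).round(2))
```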
Let us thus suppose that the observed signal v(t) is whitened (sphered). Then, in
order to separate one of the components of negative kurtosis, we can modify the
learning rule (4) so as to get the following learning rule for the weight vector w:
Δw(t) ∝ v(t) g⁻(w(t)^T v(t)) − w(t)   (6)
Here, the function g⁻ is the same polynomial as above: g⁻(u) = au − bu^3 with
a > 1 and b > 0. This modification is valid because we now have E{v(w^T v)} = w,
and thus we can add +w(t) in the linear part of g⁻ and subtract w(t) explicitly
afterwards. The modification is useful because it allows us to approximate g⁻ with
the 'tanh' function, as w(t)^T v(t) then stays in the range where this approximation
is valid. Thus we get what is perhaps the simplest possible stable Hebbian learning
rule for a nonlinear Perceptron.
To separate one of the components of positive kurtosis, rule (5) simplifies to:
Δw(t) ∝ b v(t)(w(t)^T v(t))^3 − a‖w(t)‖^4 w(t)   (7)
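The two on-line rules (6) and (7) can be sketched in Python as follows. This is our illustration, not code from the paper: the constants a, b and the step size eta are arbitrary choices (the text only requires a > 1 and b > 0), and the final renormalization is added for convenience.

```python
# On-line one-unit rules for whitened data, one pass over the samples in V.
import numpy as np

def one_unit_rule_negative_kurtosis(V, a=2.0, b=1.0, eta=0.01, seed=0):
    """Rule (6): separates an IC of negative kurtosis; V holds whitened samples row-wise."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=V.shape[1])
    w /= np.linalg.norm(w)
    for v in V:
        y = w @ v
        w = w + eta * (v * (a * y - b * y**3) - w)   # Δw ∝ v g^-(w^T v) - w
    return w / np.linalg.norm(w)

def one_unit_rule_positive_kurtosis(V, a=1.0, b=1.0, eta=0.01, seed=0):
    """Rule (7): separates an IC of positive kurtosis."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=V.shape[1])
    w /= np.linalg.norm(w)
    for v in V:
        y = w @ v
        w = w + eta * (b * v * y**3 - a * np.linalg.norm(w)**4 * w)
    return w / np.linalg.norm(w)
```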
5 Multi-Unit ICA Learning Rules
If estimation of several independent components is desired, it is possible to construct
a neural network by combining N (1 ≤ N ≤ n) neurons that learn according to
the learning rules given above, and adding a feedback term to each of those learning
rules. A discussion of such networks can be found in (Hyvärinen and Oja, 1996b).
6 Fixed-Point Algorithm for ICA
The advantage of neural on-line learning rules like those introduced above is that
the inputs v(t) can be used in the algorithm at once, thus enabling faster adaptation
in a non-stationary environment. A resulting trade-off, however, is that the convergence is slow and depends on a good choice of the learning rate sequence, i.e. the
step size at each iteration. A bad choice of the learning rate can, in practice, destroy
convergence. Therefore, some ways to make the learning radically faster and more
reliable may be needed. The fixed-point iteration algorithms are such an alternative. Based on the learning rules introduced above, we introduce here a fixed-point
algorithm, whose convergence is proven and analyzed in detail in (Hyvärinen and
Oja, 1997). For simplicity, we only consider the case of whitened data here.
Consider the general neural learning rule trying to find the extrema of kurtosis.
In a fixed point of such a learning rule, the sum of the gradient of kurtosis and
the penalty term must equal zero: E{v(w^T v)^3} − 3‖w‖^2 w + f(‖w‖^2)w = 0. The
solutions of this equation must satisfy
w = (3‖w‖^2 − f(‖w‖^2))^{−1} E{v(w^T v)^3}   (8)
Actually, because the norm of w is irrelevant, it is the direction of the right hand
side that is important. Therefore the scalar in eq. (8) is not significant and its effect
can be replaced by explicit normalization.
Assume now that we have collected a sample of the random vector v , which is a
whitened (or sphered) version of the vector x in eq. (1). Using (8), we obtain the
following fixed-point algorithm for ICA:
1. Take a random initial vector w(0) of norm 1. Let k = 1.
2. Let w(k) = E{v(w(k − 1)^T v)^3} − 3w(k − 1). The expectation can be
estimated using a large sample of v vectors (say, 1,000 points).
3. Divide w(k) by its norm.
4. If |w(k)^T w(k − 1)| is not close enough to 1, let k = k + 1 and go back
to step 2. Otherwise, output the vector w(k).
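A minimal Python sketch of these four steps, assuming the whitened samples are stacked row-wise in a matrix V (our notation, not the paper's):

```python
import numpy as np

def fixed_point_one_unit(V, tol=1e-6, max_iter=100, seed=0):
    """One-unit fixed-point iteration on whitened data V (n_samples, n_dims)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=V.shape[1])
    w /= np.linalg.norm(w)                                   # step 1: random w(0), norm 1
    for _ in range(max_iter):
        y = V @ w                                            # w(k-1)^T v for every sample
        w_new = (V * (y**3)[:, None]).mean(axis=0) - 3.0 * w # step 2: E{v(w^T v)^3} - 3w
        w_new /= np.linalg.norm(w_new)                       # step 3: renormalize
        if abs(w_new @ w) > 1.0 - tol:                       # step 4: convergence test
            return w_new
        w = w_new
    return w
```

Repeated calls, each followed by orthogonalization of the new vector against the ones already found, estimate further components, as discussed next.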
The final vector w* = lim_k w(k) given by the algorithm separates one of the non-Gaussian ICs in the sense that w*^T v equals one of the ICs s_j. No distinction
between components of positive or negative kurtosis is needed here. A remarkable
property of our algorithm is that a very small number of iterations, usually 5-10,
seems to be enough to obtain the maximal accuracy allowed by the sample data.
This is due to the fact that the convergence of the fixed-point algorithm is in fact
cubic, as shown in (Hyvärinen and Oja, 1997).
To estimate N ICs, we run this algorithm N times. To ensure that we estimate a
different IC each time, we only need to add a simple projection inside the loop, which
forces the solution vector w(k) to be orthogonal to the previously found solutions.
This is possible because the desired weight vectors are orthonormal for whitened
data (Hyvärinen and Oja, 1996b; Karhunen et al., 1997). Symmetric methods of
orthogonalization may also be used (Hyvärinen, 1997).
This fixed-point algorithm has several advantages when compared to other suggested
ICA methods. First, the convergence of our algorithm is cubic. This means very fast
convergence and is rather unique among ICA algorithms. Second, contrary to
gradient-based algorithms, there is no learning rate or other adjustable parameters
in the algorithm, which makes it easy to use and more reliable. Third, components
of both positive and negative kurtosis can be directly estimated by the same fixed-point algorithm.
7 Generalizations of Kurtosis
In the learning rules introduced above, we used kurtosis as an optimization criterion
for ICA estimation. This approach can be generalized to a large class of such
optimization criteria, called contrast functions. For the case of on-line learning
rules, this approach is developed in (Hyvärinen and Oja, 1996a), in which it is
shown that the function g in the learning rules in Section 4 can be, in fact, replaced
by practically any non-linear function (provided that w is normalized properly).
Whether one must use Hebbian or anti-Hebbian learning is then determined by the
sign of certain 'non-polynomial cumulants'. The utility of such a generalization is
that one can then choose the non-linearity according to some statistical optimality
criteria, such as robustness against outliers.
The fixed-point algorithm may also be generalized for an arbitrary non-linearity, say
g. Step 2 in the fixed-point algorithm then becomes (for whitened data) (Hyvärinen,
1997): w(k) = E{v g(w(k − 1)^T v)} − E{g'(w(k − 1)^T v)} w(k − 1).
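For illustration, a sketch of this generalized step with the common choice g(u) = tanh(u); the batch-matrix form and names are ours.

```python
import numpy as np

def generalized_fixed_point_step(V, w):
    """One generalized fixed-point update on whitened samples V (one per row)."""
    y = V @ w
    g = np.tanh(y)
    g_prime = 1.0 - np.tanh(y) ** 2
    w_new = (V * g[:, None]).mean(axis=0) - g_prime.mean() * w  # E{v g(w^T v)} - E{g'(w^T v)} w
    return w_new / np.linalg.norm(w_new)
```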
8 Experiments
A visually appealing way of demonstrating how ICA algorithms work is to use
them to separate images from their linear mixtures. On the left in Fig. 1, four
superimposed mixtures of 4 unknown images are depicted. Defining the j-th IC
s_j to be the gray-level value of the j-th image in a given position, and scanning
the 4 images simultaneously pixel by pixel, we can use the ICA model and recover
the original images. For example, we ran the fixed-point algorithm four times,
estimating the four images shown on the right in Fig. 1. The algorithm needed on
average 7 iterations for each IC.
Figure 1: Three photographs of natural scenes and a noise image were linearly
mixed to illustrate our algorithms. The mixtures are depicted on the left. On the
right, the images recovered by the fixed-point algorithm are shown.
References
Bell, A. and Sejnowski, T. (1995). An information-maximization approach to blind
separation and blind deconvolution. Neural Computation, 7:1129-1159.
Bell, A. and Sejnowski, T. J . (1996). Edges are the independent components of natural
scenes. In NIPS *96, Denver, Colorado.
Cardoso, J .-F. and Laheld, B. H. (1996) . Equivariant adaptive source separation. IEEE
Trans. on Signal Processing. 44(12).
Comon, P. (1994). Independent component analysis - a new concept? Signal Processing,
36:287-314.
Delfosse, N. and Loubaton, P. (1995). Adaptive blind separation of independent sources:
a deflation approach. Signal Processing, 45:59- 83 .
Donoho, D. (1981) . On minimum entropy deconvolution. In Applied Time Series Analysis II. Academic Press.
Hurri, J., Hyvärinen, A., Karhunen, J., and Oja, E. (1996). Image feature extraction
using independent component analysis. In Proc. NORSIG'96, Espoo, Finland.
Hyvärinen, A. (1997). A family of fixed-point algorithms for independent component
analysis. In Proc. ICASSP'97, Munich, Germany.
Hyvärinen, A. and Oja, E. (1996a). Independent component analysis by general nonlinear Hebbian-like learning rules. Technical Report A41, Helsinki University of Technology, Laboratory of Computer and Information Science.
Hyvärinen, A. and Oja, E. (1996b). Simple neuron models for independent component
analysis. Technical Report A37, Helsinki University of Technology, Laboratory of
Computer and Information Science.
Hyvärinen, A. and Oja, E. (1997). A fast fixed-point algorithm for independent component analysis. Neural Computation. To appear.
Jutten, C. and Herault, J. (1991). Blind separation of sources, part I: An adaptive
algorithm based on neuromimetic architecture. Signal Processing, 24:1-10.
Karhunen, J., Oja, E., Wang, L., Vigario, R., and Joutsensalo, J . (1997). A class of neural networks for independent component analysis. IEEE Trans. on Neural Networks.
To appear.
Oja, E. (1995). The nonlinear PCA learning rule and signal separation - mathematical
analysis. Technical Report A 26, Helsinki University of Technology, Laboratory of
Computer and Information Science. Submitted to a journal.
| 1315 |@word version:2 polynomial:3 norm:3 seems:1 hyv:12 covariance:1 moment:1 initial:1 contains:1 series:1 kurt:12 current:1 recovered:1 yet:1 must:6 wll:1 enables:1 stationary:1 simpler:1 mathematical:3 combine:1 inside:1 introduce:4 ica:13 equivariant:1 roughly:1 nor:1 multi:1 becomes:1 begin:1 estimating:2 moreover:1 bounded:1 provided:1 linearity:2 what:1 interpreted:1 developed:1 finding:2 extremum:3 transformation:1 every:1 exactly:1 unit:11 appear:2 positive:9 local:1 modify:1 might:1 au:3 speeded:1 statistically:2 range:1 unique:2 practice:1 laheld:2 bell:4 projection:1 get:2 convenience:1 cannot:2 close:1 restriction:3 equivalent:1 go:1 starting:1 simplicity:2 rule:45 orthonormal:1 dw:1 resp:4 suppose:1 colorado:1 rinen:2 us:1 element:1 atw:2 observed:5 wang:1 trade:1 ran:1 mentioned:1 environment:1 rigorously:1 icassp:1 represented:1 tx:4 xxt:1 fast:2 sejnowski:4 whose:5 say:4 otherwise:1 itself:1 final:2 obviously:1 sequence:2 advantage:2 kurtosis:28 subtracting:1 maximal:2 adaptation:1 combining:1 realization:2 loop:1 mixing:5 convergence:10 converges:1 illustrate:1 clearer:1 eq:7 dividing:1 direction:1 closely:1 filter:1 stochastic:4 enable:1 require:1 generalization:3 hold:2 practically:1 ic:14 visually:1 bj:1 finland:2 arrange:1 omitted:1 delfosse:2 estimation:4 proc:1 wet:2 tanh:1 iw:1 extremal:1 clearly:1 gaussian:7 always:2 super:1 rather:1 properly:1 rank:1 superimposed:1 contrast:1 sense:1 transformed:2 germany:1 pixel:2 among:1 espoo:2 herault:4 constrained:1 equal:6 construct:1 once:1 extraction:3 minimized:1 report:3 develops:1 simplify:1 loubaton:2 oja:19 simultaneously:2 replaced:3 highly:1 mixture:5 analyzed:1 edge:1 necessary:1 orthogonal:1 divide:1 desired:3 minimal:1 column:3 cumulants:1 maximization:1 introducing:1 subset:3 aw:3 scanning:1 considerably:1 fundamental:1 ie:4 stay:1 bu:3 off:1 nongaussian:1 choose:1 account:1 satisfy:1 explicitly:1 blind:9 vi:3 depends:1 multiplicative:1 try:1 recover:2 minimize:2 accuracy:1 variance:4 maximized:1 correspond:1 raw:1 multiplying:1 submitted:1 email:1 against:1 obvious:1 dimensionality:1 amplitude:1 actually:2 back:1 ta:1 decorrelating:1 just:1 correlation:1 hand:1 replacing:1 nonlinear:3 somehow:1 jutten:4 gray:1 perhaps:1 effect:2 normalized:1 concept:1 symmetric:1 laboratory:4 illustrative:1 criterion:3 generalized:2 trying:1 orthogonalization:1 image:9 iiw:1 novel:2 fi:1 denver:1 discussed:1 numerically:1 significant:1 stable:2 whitening:3 add:2 irrelevant:1 certain:1 accomplished:2 uncorrupted:1 a41:1 minimum:2 maximize:2 signal:16 ii:1 afterwards:1 full:1 hebbian:5 technical:3 faster:2 academic:1 divided:1 va:1 aapo:2 basic:5 whitened:8 expectation:1 vigario:1 iteration:5 kernel:1 sometimes:1 normalization:1 lea:9 addition:1 source:11 limk:1 ascent:3 contrary:1 presence:1 enough:2 easy:1 architecture:1 simplifies:1 knowing:1 whether:1 pca:1 utility:1 penalty:4 adequate:1 useful:2 cardoso:2 category:1 simplest:2 reduced:1 zj:2 sign:4 estimated:5 discrete:1 express:1 four:3 demonstrating:1 destroy:1 sum:1 run:1 fourth:1 family:1 separation:8 scaling:2 simplification:1 bv:1 constraint:3 helsinki:4 erkki:2 x2:1 scene:2 speed:2 argument:1 optimality:1 performing:1 sphering:1 tv:2 according:3 munich:1 combination:6 lld:1 unity:1 appealing:1 tw:1 modification:3 comon:4 explained:1 outlier:1 taken:1 computationally:1 equation:2 mutually:2 previously:1 deflation:1 mechanism:1 needed:4 neuromimetic:1 observe:2 v2:3 cancelled:1 alternative:2 robustness:1 convolved:1 original:4 ensure:1 classical:1 objective:3 
added:2 gradient:7 separate:9 separating:1 vd:2 collected:1 relationship:1 pmc:1 negative:9 implementation:1 zt:2 unknown:3 adjustable:1 neuron:9 convolution:1 observation:1 fin:1 enabling:1 descent:4 anti:1 defining:1 arbitrary:4 introduced:6 distinction:1 nip:1 trans:2 suggested:2 usually:2 xm:2 ev:1 reliable:2 suitable:1 natural:2 force:1 scheme:1 technology:4 multiplication:1 mixed:1 interesting:2 proven:3 remarkable:1 vg:1 uncorrelated:1 row:2 side:1 vv:1 perceptron:1 optimizaton:1 feedback:3 csj:1 dimension:1 valid:2 made:2 adaptive:3 preprocessing:3 fixedpoint:2 simplified:1 hyvarinen:4 sj:9 approximate:1 pseudoinverse:1 assumed:1 hurri:2 xi:3 lcs:1 search:1 vet:4 hyviirinen:3 learn:1 joutsensalo:1 separator:1 vj:1 main:1 linearly:2 noise:4 allowed:1 fig:2 cubic:2 slow:1 sub:1 position:1 explicit:1 xl:2 third:1 learns:2 bad:1 deconvolution:4 adding:1 karhunen:4 subtract:1 entropy:1 depicted:2 photograph:1 expressed:1 ux:1 scalar:5 irinen:5 radically:1 tcw:2 goal:2 donoho:2 feasible:1 determined:1 except:3 reducing:1 wt:2 principal:1 called:5 meaningful:1 sphered:2 cumulant:1 |
346 | 1,316 | Recursive algorithms for approximating
probabilities in graphical models
Tommi S. Jaakkola and Michael I. Jordan
{tommi,jordan}@psyche.mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
We develop a recursive node-elimination formalism for efficiently
approximating large probabilistic networks. No constraints are set
on the network topologies. Yet the formalism can be straightforwardly integrated with exact methods whenever they are/become
applicable. The approximations we use are controlled: they maintain consistently upper and lower bounds on the desired quantities
at all times. We show that Boltzmann machines, sigmoid belief
networks, or any combination (i.e., chain graphs) can be handled
within the same framework. The accuracy of the methods is verified experimentally.
1 Introduction
Graphical models (see, e.g., Lauritzen 1996) provide a medium for rigorously embedding domain knowledge into network models. The structure in these graphical
models embodies the qualitative assumptions about the independence relationships
in the domain while the probability model attached to the graph permits a consistent computation of belief (or uncertainty) about the values of the variables in the
network. The feasibility of performing this computation determines the ability to
make inferences or to learn on the basis of observations. The standard framework
for carrying out this computation consists of exact probabilistic methods (Lauritzen
1996). Such methods are nevertheless restricted to fairly small or sparsely connected
networks and the use of approximate techniques is likely to be the rule for highly
interconnected graphs of the kind studied in the neural network literature.
There are several desiderata for methods that calculate approximations to posterior
probabilities on graphs. Besides having to be (1) reasonably accurate and fast to
compute, such techniques should yield (2) rigorous estimates of confidence about
the attained results; this is especially important in many real-world applications
(e.g., in medicine). Furthermore, a considerable gain in accuracy could be obtained
from (3) the ability to use the techniques in conjunction with exact calculations
whenever feasible. These goals have been addressed in the literature with varying
degrees of success. For inference and learning in Boltzmann machines, for example,
classical mean field approximations (Peterson & Anderson, 1987) address only the
first goal. In the case of sigmoid belief networks (Neal 1992), partial solutions have
been provided to the first two goals (Dayan et al. 1995; Saul et al. 1996; Jaakkola
& Jordan 1996). The goal of integrating approximations with exact techniques
has been introduced in the context of Boltzmann machines (Saul & Jordan 1996)
but nevertheless leaving the solution to our second goal unattained. In this paper,
we develop a recursive node-elimination formalism that meets all three objectives
for a powerful class of networks known as chain graphs (see, e.g., Lauritzen 1996);
the chain graphs we consider are of a restricted type but they nevertheless include
Boltzmann machines and sigmoid belief networks as special cases.
We start by deriving the recursive formalism for Boltzmann machines. The results
are then generalized to sigmoid belief networks and the chain graphs.
2 Boltzmann machines
We begin by considering Boltzmann machines with binary (0/1) variables. We
assume the joint probability distribution for the variables S = {S_1, ..., S_n} to be
given by
P_n(S|h, J) = (1/Z_n(h, J)) B_n(S|h, J)   (1)
where h and J are the vector of biases and the weights respectively, and the Boltzmann
factor B has the form
B_n(S|h, J) = exp{ Σ_i h_i S_i + Σ_{i<j} J_ij S_i S_j }   (2)
The partition function Z_n(h, J) = Σ_S B_n(S|h, J) normalizes the distribution. The
Boltzmann distribution defined in this manner is tractable insofar as we are able to
compute the partition function; indeed, all marginal distributions can be reduced
to ratios of partition functions in different settings.
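For small n the quantities above can be checked by brute force. The following sketch (ours, not from the paper) enumerates all 2^n binary configurations; it assumes J is stored as a symmetric matrix with zero diagonal so that the pairwise term equals (1/2) S^T J S.

```python
import itertools
import numpy as np

def boltzmann_factor(s, h, J):
    """B_n(S|h,J) for one binary configuration s; J symmetric with zero diagonal."""
    s = np.asarray(s, dtype=float)
    return np.exp(h @ s + 0.5 * s @ J @ s)

def partition_function(h, J):
    """Exact Z_n(h,J) by enumeration; feasible only for small n."""
    n = len(h)
    return sum(boltzmann_factor(s, h, J) for s in itertools.product((0, 1), repeat=n))
```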
We now turn to methods for computing the partition function . In special cases
(e.g., trees or chains) the structure of the weight matrix Jij may allow us to employ exact methods for calculating Z. Although exact methods are not feasible
in more generic networks, selective approximations may nevertheless restore their
utility. The recursive framework we develop provides a general and straightforward
methodology for combining approximate and exact techniques.
The crux of our approach lies in obtaining variational bounds that allow the creation
of recursive node-elimination formulas of the form¹:
Z_n(h, J) ≤ (≥) C(h, J) Z_{n−1}(h̃, J̃)   (3)
¹ Related schemes in the physics literature (renormalization group) are unsuitable here
as they generally don't provide strict upper/lower bounds.
Such formulas are attractive for three main reasons: (1) a variable (or many at
the same time) can be eliminated by merely transforming the model parameters (h
and J); (2) the approximations involved in the elimination are controlled, i.e., they
consistently yield upper or lower bounds at each stage of the recursion; (3) most
importantly, if the remaining (simplified) partition function Z_{n−1}(h̃, J̃) allows the
use of exact methods, the corresponding model parameters h̃ and J̃ can simply be
passed on to such routines.
Next we will consider how to obtain the bounds and outline their implications. Note
that since the quantities of interest are predominantly ratios of partition functions,
it is the combination of upper and lower bounds that is necessary to rigorously
bound the target quantities. This applies to parameter estimation as well even if
only a lower bound on likelihood of examples is used; such likelihood bound relies
on both upper and lower bounds on partition functions.
2.1 Simple recursive factorizations
We start by developing a lower bound recursion. Consider eliminating the variable S_i:
Z_n(h, J) = Σ_S B_n(S|h, J)   (4)
          = Σ_{S\S_i} Σ_{S_i} B_n(S|h, J) = Σ_{S\S_i} (1 + e^{h_i + Σ_j J_ij S_j}) B_{n−1}(S\S_i | h, J)   (5)
          ≥ Σ_{S\S_i} e^{μ_i(h_i + Σ_j J_ij S_j) + H(μ_i)} B_{n−1}(S\S_i | h, J)   (6)
          = e^{μ_i h_i + H(μ_i)} Σ_{S\S_i} B_{n−1}(S\S_i | h̃, J)   (7)
          = e^{μ_i h_i + H(μ_i)} Z_{n−1}(h̃, J)   (8)
where h̃_j = h_j + μ_i J_ij for j ≠ i, H(·) is the binary entropy function, and μ_i are free
parameters that we will refer to as "variational parameters." The variational bound
introduced in eq. (6) can be verified by a direct maximization which recovers the
original expression. This lower bound recursion bears a connection to mean field
approximation and in particular to the structured mean field approximation studied
by Saul and Jordan (1996).²
Each recursive elimination translates into an additional bound and therefore the
approximation (lower bound) deteriorates with the number of such iterations. It
is necessary, however, to continue with the recursion only to the extent that the
prevailing partition function remains unwieldy to exact methods. Consequently,
the problem becomes that of finding the variables the elimination of which would
render the rest of the graph tractable. Figure 1 illustrates this objective. Note
that the simple recursion does not change the connection matrix J for the remaining variables; thus, graphically, the operation translates into merely removing the
variable.
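One step of this simple lower-bound elimination can be sketched as follows (our Python illustration; h and J are numpy arrays, and the variational parameter mu_i would in practice be chosen to maximize the resulting bound):

```python
import numpy as np

def binary_entropy(mu):
    mu = np.clip(mu, 1e-12, 1.0 - 1e-12)
    return -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))

def eliminate_lower(h, J, i, mu_i):
    """Remove variable i; return (h_new, J_new, log_prefactor) of the lower bound."""
    keep = [k for k in range(len(h)) if k != i]
    h_new = h[keep] + mu_i * J[i, keep]            # h~_j = h_j + mu_i J_ij
    J_new = J[np.ix_(keep, keep)]                  # couplings are unchanged
    log_c = mu_i * h[i] + binary_entropy(mu_i)     # log of exp(mu_i h_i + H(mu_i))
    return h_new, J_new, log_c
```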
The above recursive procedure maintains a lower bound on the partition function
that results from the variational representation introduced in eq. (6). For rigorous
bounds we need an upper bound as well.
² Each lower bound recursion can be shown to be equivalent to a mean field approximation of the eliminated variable(s). The structured mean field approach of Saul and
Jordan (1996) suggests using exact methods for tractable substructures while mean field
for the variables mediating these structures. Translated into our framework this amounts
to eliminating the mediating variables through the recursive lower bound formula with a
subsequent appeal to exact methods. The connection is limited to the lower bound.
Figure 1: Enforcing tractable networks. Each variable in the graph can be removed
(in any order) by adding the appropriate biases for the existing adjacent variables.
The elimination of the dotted nodes reveals a simplified graph underneath.
In order to preserve the graphical interpretation of the lower bound, the upper bound
should also be factorized. With this in mind, the bound of eq. (6) can be replaced with
Z_n(h, J) ≤ e^{h̃_0} Z_{n−1}(h̃, J)   (9)
where
h̃_j = h_j + q_j [ f(h_i + J_ij/q_j) − f(h_i) ]   (10)
for j > 0, h̃_0 = f(h_i), f(x) = log(1 + e^x), and q_j are variational parameters such
that Σ_j q_j = 1. The derivation of this bound can be found in appendix A.
2.2 Refined recursive bound
If the (sub)network is densely or fully connected, the simple recursive methods presented earlier can hardly uncover any useful structure. Thus a large number of
recursive steps are needed before relying on exact methods, and the accuracy of
the overall bound is compromised. To improve the accuracy, we introduce a more
sophisticated variational (upper) bound to replace the one in eq. (6). By denoting
x_i = h_i + Σ_j J_ij S_j we have:
1 + e^{x_i} ≤ e^{x_i/2 + λ(ξ_i) x_i^2 − F(λ, ξ_i)}   (11)
The derivation and the functional forms of λ(ξ_i) and F(λ, ξ_i) are presented in
appendix B. We note here, however, that the bound is exact whenever x_i = ξ_i. In
terms of the recursion we obtain
Z_n(h, J) ≤ e^{h_i/2 + λ(ξ_i) h_i^2 − F(λ, ξ_i)} Z_{n−1}(h̃, J̃)   (12)
where
h̃_j = h_j + 2 h_i λ(ξ_i) J_ij + J_ij/2 + λ(ξ_i) J_ij^2   (13)
J̃_jk = J_jk + 2 λ(ξ_i) J_ij J_ik   (14)
for j ≠ k ≠ i. Importantly, and as shown in figure 2a, this refined recursion
imposes (qualitatively) the proper structural changes on the remaining network: the
variables adjacent to the eliminated (or marginalized) variable become connected.
In other words, if J_ij ≠ 0 and J_ik ≠ 0 then J̃_jk ≠ 0 after the recursion.
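A sketch of one refined-recursion step, with λ(ξ) and F(λ, ξ) taken from the tangent construction of appendix B (λ(ξ) = tanh(ξ/2)/(4ξ) and F(λ, ξ) = λ(ξ)ξ² − g(ξ)); the code and names are ours, and the variational point ξ_i is left as an input.

```python
import numpy as np

def lam(xi):
    """lambda(xi) = tanh(xi/2)/(4 xi); limit 1/8 at xi = 0."""
    return np.tanh(xi / 2.0) / (4.0 * xi) if xi != 0.0 else 0.125

def F(xi):
    """F(lambda, xi) = lambda(xi) xi^2 - g(xi), with g(x) = log(e^{-x/2} + e^{x/2})."""
    g = np.log(np.exp(-xi / 2.0) + np.exp(xi / 2.0))
    return lam(xi) * xi ** 2 - g

def eliminate_upper(h, J, i, xi_i):
    """Remove variable i; return (h_new, J_new, log_prefactor) of the upper bound."""
    l = lam(xi_i)
    keep = [k for k in range(len(h)) if k != i]
    h_new = h[keep] + 2.0 * h[i] * l * J[i, keep] + J[i, keep] / 2.0 + l * J[i, keep] ** 2
    J_new = J[np.ix_(keep, keep)] + 2.0 * l * np.outer(J[i, keep], J[i, keep])
    np.fill_diagonal(J_new, 0.0)                   # keep the zero-diagonal convention
    log_c = h[i] / 2.0 + l * h[i] ** 2 - F(xi_i)
    return h_new, J_new, log_c
```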
To substantiate the claim of improved accuracy we tested the refined upper bound
recursion against the factorized lower bound recursion in random fully connected
networks with 8 variables3 . The weights in these networks were chosen uniformly
in the range [-d, d) and all the initial biases were set to zero. Figure 3a plots the
relative errors in the log-partition function estimates for the two recursions as a
3The small network size was chosen to facilitate comparisons with exact results.
Figure 2: a) The graphical changes in the network following the refined recursion
match those of proper marginalization. b) Example of a chain graph. The dotted
ovals indicate the undirected clusters.
Figure 3: a) The mean relative errors in the log-partition function as a function of
the scale of the random weights (uniform in [−d, d)). Solid line: factorized lower
bound recursion; dashed line: refined upper bound. b) Mean relative difference
between the upper and lower bound recursions as a function of d√(n/8), where n is
the network size. Solid: n = 8; dashed: n = 64; dot-dashed: n = 128.
function of the scale d. Figure 3b reveals how the relative difference between the
two bounds is affected by the network size. In the illustrated scale the size has little
effect on the difference. We note that the difference is mainly due to the factorized
lower bound recursion as is evident from Figure 3a.
3 Chain graphs and sigmoid belief networks
The recursive bounds presented earlier can be carried over to chain graphs⁴. An
example of a chain graph is given in figure 2b. The joint distribution for a chain
graph can be written as a product of conditional distributions for clusters of variables:
P_n(S|J) = Π_k p(S^k | pa[k], h^k, J^k)   (15)
where S^k = {S_i}_{i∈C_k} is the set of variables in cluster k. In our case, the conditional
probabilities for each cluster are conditional Boltzmann distributions given by
p(S^k | pa[k], h^k, J^k) = B(S^k | h̃^k, J^k) / Z(h̃^k, J^k)   (16)
where the added complexity beyond that of ordinary Boltzmann machines is that
the Boltzmann factors now include also outside-cluster biases:
[h̃^k]_i = h_i^k + Σ_{j∉C_k} J_ij^{out} S_j   (17)
4While Boltzmann machines are undirected networks (interactions defined through potentials), sigmoid networks are directed models (constructed from conditional probabilities). Chain graphs contain both directed and undirected interactions.
where the index i stays within the kth cluster. We note that sigmoid belief networks
correspond to the special case where there is only a single binary variable in each
cluster; Boltzmann machines, on the other hand, have only one cluster.
We now show that the recursive formalism can be extended to chain graphs. This
is achieved by rewriting or bounding the conditional probabilities in terms of variational Boltzmann factors. Consequently, the joint distribution - being a product of the conditionals - will also be a Boltzmann factor. Computing likelihoods
(marginals) from such a joint distribution amounts to calculating the value of a
particular partition function and therefore reduces to the case considered earlier.
It suffices to find variational Boltzmann factors that bound (or rerepresent in some
cases) the cluster partition functions in the conditional probabilities. We observe
first that in the factorized lower bound or in the refined upper bound recursions, the
initial biases will appear in the resulting expressions either linearly or quadratically
in the exponent 5 . Since the initial biases for the clusters are of the form of eq. (17),
the resulting expressions must be Boltzmann factors with respect to the variables
outside the cluster. Thus, applying the recursive approximations to each cluster
partition function yields an upper/lower bound in the form of a Boltzmann factor.
Combining such bounds from each cluster finally gives upper/lower bounds for the
joint distribution in terms of variational Boltzmann factors.
We note that for sigmoid belief networks the Boltzmann factors bounding the joint
distribution are in fact exact variational translations of the true joint distribution.
To see this, let us denote x_i = Σ_j J_ij S_j + h_i and use the variational forms, for
example, from eq. (6) and (11):
σ(x_i) = (1 + e^{−x_i})^{−1} ≤ e^{μ_i x_i − H(μ_i)}   (18)
σ(x_i) ≥ e^{x_i/2 − λ(ξ_i) x_i^2 + F(λ, ξ_i)}   (19)
where the sigmoid function σ(·) is the inverse cluster partition function in this case.
Both variational forms are Boltzmann factors (at most quadratic in x_i in the
exponent) and are exact if minimized/maximized with respect to the variational
parameters.
In sum, we have shown how the joint distribution for chain graphs can be bounded
by (translated into) Boltzmann factors to which the recursive approximation formalism is again applicable.
4 Conclusion
To reap the benefits of probabilistic formulations of network architectures, approximate methods are often unavoidable in real-world problems. We have developed a
recursive node-elimination formalism for rigorously approximating intractable networks. The formalism applies to a large class of networks known as chain graphs
and can be straightforwardly integrated with exact probabilistic calculations whenever they are applicable. Furthermore, the formalism provides rigorous upper and
lower bounds on any desired quantity (e.g., the variable means) which is crucial in
high risk application domains such as medicine.
⁵ This follows from the linearity of the propagation rules for the biases, and the fact
that the emerging prefactors are either linear or quadratic in the exponent.
References
P. Dayan, G. Hinton, R. Neal, and R. Zemel (1995). The Helmholtz machine.
Neural Computation 7: 889-904.
S. L. Lauritzen (1996). Graphical Models. Oxford: Oxford University Press.
T . Jaakkola and M. Jordan (1996). Computing upper and lower bounds on likelihoods in intractable networks. To appear in Proceedings of the twelfth Conference
on Uncertainty in Artificial Intelligence.
R. Neal. Connectionist learning of belief networks (1992). Artificial Intelligence 56 :
71-113.
C. Peterson and J. R. Anderson (1987). A mean field theory learning algorithm for
neural networks. Complex Systems 1: 995-1019.
L. K. Saul, T. Jaakkola, and M. I. Jordan (1996). Mean field theory for sigmoid
belief networks. JAIR 4: 61-76.
L. Saul and M. Jordan (1996). Exploiting tractable substructures in intractable
networks. To appear in Advances of Neural Information Processing Systems 8.
MIT Press.
A Factorized upper bound
The bound follows from the convexity of f(x) = log(1 + e^x) and from an application
of Jensen's inequality. Let f_k(x) = f(x + h_k) and note that f_k(x) has the same
convexity properties as f. For any convex function f_k we then have (by Jensen's
inequality)
f_k( Σ_i J_ki S_i ) = f_k( Σ_i q_i (J_ki S_i / q_i) ) ≤ Σ_i q_i f_k( J_ki S_i / q_i )   (20)
By rewriting f_k(J_ki S_i / q_i) = S_i [ f_k(J_ki / q_i) − f_k(0) ] + f_k(0) we get the desired result.
we get the desired result .
Refined upper bound
To derive the upper bound consider first
1 + e^x = e^{x/2 + log(e^{−x/2} + e^{x/2})}   (21)
Now, g(x) = log(e^{−x/2} + e^{x/2}) is a symmetric function of x and also a concave
function of x^2. Any tangent line for a concave function always remains above the
function and so it also serves as an upper bound. Therefore we may bound g(x) by
the tangents of g(√y) (due to the concavity in x^2). Thus
log(e^{−x/2} + e^{x/2}) ≤ (∂g(√y)/∂y)(x^2 − y) + g(√y)   (22)
                       = λ(y) x^2 − F(λ, y)   (23)
where
λ(y) = ∂g(√y)/∂y   (24)
F(λ, y) = λ(y) y − g(√y)   (25)
The desired result now follows from the change of variables y = ξ^2. Note that the
tangent bound is exact whenever ξ = x (a tangent defined at that point).
| 1316 |@word eliminating:2 twelfth:1 bn:2 reap:1 solid:2 initial:3 denoting:1 existing:1 ida:1 si:5 yet:1 written:1 must:1 subsequent:1 partition:15 plot:1 intelligence:2 hja:1 slh:5 provides:2 node:5 constructed:1 direct:1 become:2 ik:6 qualitative:1 consists:1 introduce:1 manner:1 indeed:1 brain:1 relying:1 little:1 considering:1 becomes:1 provided:1 begin:1 bounded:1 linearity:1 medium:1 factorized:6 kind:1 emerging:1 developed:1 finding:1 ag:1 concave:2 appear:3 before:1 oxford:2 meet:1 studied:2 suggests:1 factorization:1 limited:1 range:1 directed:2 recursive:21 procedure:1 confidence:1 integrating:1 word:1 get:1 context:1 applying:1 risk:1 equivalent:1 straightforward:1 graphically:1 convex:1 rule:2 importantly:2 deriving:1 embedding:1 target:1 exact:18 pa:1 helmholtz:1 jk:5 sparsely:1 calculate:1 connected:4 removed:1 transforming:1 convexity:2 complexity:1 rigorously:3 carrying:1 creation:1 basis:1 translated:2 joint:8 exi:1 derivation:2 fast:1 artificial:2 zemel:1 outside:2 refined:8 tested:1 ability:2 interconnected:1 jij:5 product:2 interaction:2 combining:2 exploiting:1 cluster:14 jjk:3 develop:3 derive:1 lauritzen:4 eq:6 indicate:1 tommi:2 elimination:8 crux:1 suffices:1 considered:1 claim:1 estimation:1 applicable:3 mit:2 always:1 ck:1 pn:2 hj:1 varying:1 jaakkola:7 conjunction:1 consistently:2 likelihood:4 mainly:1 hk:4 rigorous:3 underneath:1 inference:2 dayan:2 el:3 integrated:2 lj:2 selective:1 overall:1 exponent:3 prevailing:1 special:3 fairly:1 marginal:1 field:8 having:1 eliminated:3 minimized:1 connectionist:1 employ:1 preserve:1 densely:1 lbn:2 replaced:1 maintain:1 interest:1 highly:1 bthis:1 chain:14 implication:1 accurate:1 partial:1 necessary:2 tree:1 loge:1 desired:4 formalism:9 earlier:3 zn:9 maximization:1 ordinary:1 uniform:1 straightforwardly:2 stay:1 probabilistic:4 physic:1 michael:1 again:1 unavoidable:1 cognitive:1 potential:1 start:2 hf:1 maintains:1 substructure:2 jsj:3 accuracy:5 efficiently:1 maximized:1 yield:3 correspond:1 whenever:5 against:1 involved:1 recovers:1 gain:1 massachusetts:1 knowledge:1 routine:1 uncover:1 sophisticated:1 attained:1 methodology:1 improved:1 formulation:1 anderson:2 furthermore:2 stage:1 hand:3 propagation:1 facilitate:1 effect:1 contain:1 true:1 symmetric:1 neal:3 illustrated:1 attractive:1 adjacent:2 substantiate:1 generalized:1 outline:1 evident:1 l1:1 variational:13 predominantly:1 sigmoid:10 functional:1 ji:3 attached:1 interpretation:1 marginals:1 refer:1 cambridge:1 fk:1 dj:1 prefactors:1 posterior:1 inequality:2 binary:3 success:1 continue:1 additional:1 dashed:2 ii:1 reduces:1 match:1 calculation:2 jy:3 controlled:2 feasibility:1 desideratum:1 iteration:1 achieved:1 conditionals:1 addressed:1 leaving:1 crucial:1 rest:1 strict:1 undirected:3 jordan:12 structural:1 insofar:1 independence:1 marginalization:1 architecture:1 topology:1 translates:2 qj:2 expression:3 handled:1 utility:1 passed:1 render:1 hardly:1 generally:1 useful:1 amount:2 reduced:1 ipa:1 dotted:2 deteriorates:1 affected:1 group:1 nevertheless:4 rewriting:2 verified:2 exj2:1 graph:18 merely:2 sum:1 inverse:1 uncertainty:2 powerful:1 appendix:2 bound:55 hi:5 quadratic:2 constraint:1 performing:1 department:1 developing:1 structured:2 combination:2 restricted:2 sij:1 remains:2 turn:1 needed:1 mind:1 tractable:5 serf:1 operation:1 permit:1 observe:1 generic:1 appropriate:1 ho:1 original:1 remaining:3 include:2 graphical:9 marginalized:1 medicine:2 embodies:1 calculating:2 unsuitable:1 especially:1 approximating:3 classical:1 objective:2 added:1 
quantity:4 kth:1 extent:1 fy:1 reason:1 enforcing:1 besides:1 index:1 relationship:1 ratio:2 mediating:2 lki:1 lair:1 proper:2 boltzmann:23 upper:20 observation:1 extended:1 hinton:1 introduced:3 connection:3 quadratically:1 address:1 able:1 beyond:1 lkj:1 belief:10 eh:2 restore:1 recursion:17 scheme:1 improve:1 technology:1 djn:1 carried:1 sn:1 literature:3 tangent:4 relative:4 fully:2 bear:1 sil:1 degree:1 consistent:1 imposes:1 translation:1 normalizes:1 free:1 bias:7 allow:3 institute:1 saul:6 peterson:2 benefit:1 world:2 concavity:1 qualitatively:1 simplified:2 sj:3 approximate:3 ojl:1 reveals:2 xi:17 don:1 compromised:1 sk:4 learn:1 reasonably:1 obtaining:1 complex:1 domain:3 main:1 linearly:1 bounding:2 renormalization:1 sub:1 xl:1 lie:1 formula:3 unwieldy:1 removing:1 jensen:2 appeal:1 intractable:3 ih:2 adding:1 illustrates:1 entropy:1 simply:1 likely:1 applies:2 determines:1 relies:1 ma:1 conditional:6 goal:5 consequently:2 replace:1 considerable:1 experimentally:1 feasible:2 change:4 uniformly:1 oval:1 l1i:1 d1:1 ex:7 |
347 | 1,317 | Exploiting Model Uncertainty Estimates
for Safe Dynamic Control Learning
Jeff G. Schneider
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
schneide@cs.cmu.edu
Abstract
Model learning combined with dynamic programming has been shown to
be effective for learning control of continuous state dynamic systems. The
simplest method assumes the learned model is correct and applies dynamic
programming to it, but many approximators provide uncertainty estimates
on the fit. How can they be exploited? This paper addresses the case
where the system must be prevented from having catastrophic failures during learning. We propose a new algorithm adapted from the dual control
literature and use Bayesian locally weighted regression models with dynamic programming. A common reinforcement learning assumption is that
aggressive exploration should be encouraged. This paper addresses the converse case in which the system has to rein in exploration. The algorithm
is illustrated on a 4 dimensional simulated control problem.
1 Introduction
Reinforcement learning and related grid-based dynamic programming techniques are
increasingly being applied to dynamic systems with continuous valued state spaces.
Recent results on the convergence of dynamic programming methods when using
various interpolation methods to represent the value (or cost-to-go) function have
given a sound theoretical basis for applying reinforcement learning to continuous
valued state spaces [Gordon, 1995]. These are important steps toward the eventual
application of these methods to industrial learning and control problems.
It has also been reported recently that there are significant benefits in data and
computational efficiency when data from running a system is used to build a model,
rather than using it once for single value function updates (as Q-learning would
do) and discarding it [Sutton, 1990, Moore and Atkeson, 1993, Schaal and Atkeson,
1993, Davies, 1996]. Dynamic programming sweeps can then be done on the learned
model either off-line or on-line. In its vanilla form, this method assumes the model
is correct and does deterministic dynamic programming using the model. This
assumption is often not correct, especially in the early stages of learning. When
learning simulated or software systems, there may be no harm in the fact that this
assumption does not hold. However, in real, physical systems there are often states
that really are catastrophic and must be avoided even during learning. Worse yet,
learning may have to occur during normal operation of the system in which case its
performance during learning must not be significantly degraded.
The literature on adaptive and optimal linear control theory has explored this problem considerably under the names stochastic control and dual control. Overviews
can be found in [Kendrick, 1981, Bar-Shalom and Tse, 1976]. The control decision
is based on three components called the deterministic, cautionary, and probing terms.
The deterministic term assumes the model is perfect and attempts to control for
the best performance. Clearly, this may lead to disaster if the model is inaccurate.
Adding a cautionary term yields a controller that considers the uncertainty in the
model and chooses a control for the best expected performance. Finally, if the system learns while it is operating, there may be some benefit to choosing controls
that are suboptimal and/or risky in order to obtain better data for the model and
ultimately achieve better long-term performance. The addition of the probing term
does this and gives a controller that yields the best long-term performance.
The advantage of dual control is that its strong mathematical foundation can provide the optimal learning controller under some assumptions about the system,
the model, noise, and the performance criterion. Dynamic programming methods
such as reinforcement learning have the advantage that they do not make strong
assumptions about the system, or the form of the performance measure. It has
been suggested [Atkeson, 1995, Atkeson, 1993] that techniques used in global linear
control, including caution and probing, may also be applicable in the local case. In
this paper we propose an algorithm that combines grid based dynamic programming with the cautionary concept from dual control via the use of a Bayesian locally
weighted regression model.
Our algorithm is designed with industrial control applications in mind. A typical
scenario is that a production line is being operated conservatively. There is data
available from its operation, but it only covers a small region of the state space and
thus can not be used to produce an accurate model over the whole potential range
of operation. Management is interested in improving the line's response to changes
in set points or disturbances, but can not risk much loss of production during the
learning process. The goal of our algorithm is to collect new data and optimize the
process while explicitly minimizing the risk.
2 The Algorithm
Consider a system whose dynamics are given by x_{k+1} = f(x_k, u_k). The state, x,
and control, u, are real valued vectors and k represents discrete time increments.
A model of f is denoted as f̂. The task is to minimize a cost functional of the
form J = Σ_{k=0}^{N} L(x_k, u_k, k) subject to the system dynamics. N may or may not
be fixed depending on the problem. L is given, but f must be learned. The goal is
to acquire data to learn f in order to minimize J without incurring huge penalties
in J during learning. There is an implicit assumption that the cost function defines
catastrophic states. If it were known that there were no disasters to avoid, then
simpler, more aggressive algorithms would likely outperform the one presented here.
The top level algorithm is as follows:
1. Acquire some data while operating the system from an existing controller.
2. Construct a model from the data using Bayesian locally weighted regression.
3. Perform DP with the model to compute a value function and a policy.
4. Operate the system using the new policy and record additional data.
5. Repeat to step 2 while there is still some improvement in performance.
In the rest of this section we describe steps 2 and 3.
2.1 Bayesian locally weighted regression
We use a form of locally weighted regression [Cleveland and Delvin, 1988,
Atkeson, 1989, Moore, 1992] called Bayesian locally weighted regression [Moore
and Schneider, 1995] to build a model from data. When a query, x_q, is made, each
of the stored data points receives a weight w_i = exp(−‖x_i − x_q‖^2 / K). K is the
kernel width which controls the amount of localness in the regression. For Bayesian
LWR we assume a wide, weak normal-gamma prior on the coefficients of the regression model and the inverse of the noise covariance. The result of a prediction is a
t distribution on the output that remains well defined even in the absence of data
(see [Moore and Schneider, 1995] and [DeGroot, 1970] for details) .
The distribution of the prediction in regions where there is little data is crucial to
the performance of the DP algorithm. As is often the case with learning through
search and experimentation, it is at least as important that a function approximator
predicts its own ignorance in regions of no data as it is how well it interpolates in
data rich regions.
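The prediction step can be sketched as below. This is only our stand-in: it uses a conjugate normal-inverse-gamma prior on a local affine model in place of the exact prior of [Moore and Schneider, 1995], but it shows the shape of the computation — a Gaussian kernel weights the stored points and the output is a Student-t distribution whose spread grows where there is little nearby data.

```python
import numpy as np

def blwr_predict(X, y, x_query, K=1.0, prior_scale=100.0, a0=1.0, b0=1.0):
    """Return (mean, scale, dof) of an approximate predictive t distribution at x_query."""
    n, d = X.shape
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / K)   # kernel weights
    Xb = np.hstack([X, np.ones((n, 1))])                  # local affine model
    xq = np.append(x_query, 1.0)
    V0_inv = np.eye(d + 1) / prior_scale                  # wide prior on the coefficients
    A = V0_inv + Xb.T @ (w[:, None] * Xb)
    Vn = np.linalg.inv(A)
    mn = Vn @ (Xb.T @ (w * y))                            # posterior mean (prior mean zero)
    an = a0 + 0.5 * w.sum()                               # effective weighted sample size
    bn = b0 + 0.5 * (y @ (w * y) - mn @ A @ mn)
    mean = xq @ mn
    scale = np.sqrt((bn / an) * (1.0 + xq @ Vn @ xq))     # spread grows far from data
    return mean, scale, 2.0 * an
```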
2.2 Grid based dynamic programming
In dynamic programming, the optimal value function, V, represents the cost-to-go
from each state to the end of the task assuming that the optimal policy is followed
from that point on. The value function can be computed iteratively by identifying
the best action from each state and updating it according to the expected results
of the action as given by a model of the system. The update equation is:
V_{k+1}(x) = min_u L(x, u) + V_k(f̂(x, u))   (1)
In our algorithm, updates to the value function are computed while considering
the probability distribution on the results of each action. If we assume that the
output of the real system at each time step is an independent random variable
whose probability density function is given by the uncertainty from the model, the
update equation is as follows:
V_{k+1}(x) = min_u L(x, u) + E[V_k(f(x, u)) | f̂]   (2)
Note that the independence assumption does not hold when reasonably smooth
system dynamics are modeled by a smooth function approximator. The model
error at one time step along a trajectory is highly correlated with the model error
at the following step assuming a small distance traveled during the time step.
Our algorithm for DP with model uncertainty on a grid is as follows:
1. Discretize the state space, X, and the control space, U.
2. For each state and each control cache the cost of taking this action from
this state. Also compute the probability density function on the next state
from the model and cache the information. There are two cases which are
shown graphically in fig. 1:
? If the distribution is much narrower than the grid spacing, then the
model is confident and a deterministic update will be done according to
eq. 1. Multilinear interpolation is used to compute the value function
at the mean of the predicted next state [Davies, 1996] .
? Otherwise, a stochastic update will be done according to eq. 2. The pdf
of each of the state variables is stored, discretized at the same intervals
as the grid representing the value function. Output independence is
High Confidence Next State / Low Confidence Next State
Figure 1: Illustration of the two kinds of cached updates. In the high confidence scenario the transition is treated as deterministic and the value function is computed
with multilinear interpolation, e.g. V_10^{k+1} = L(x, u) + 0.4V_7^k + 0.3V_8^k + 0.2V_11^k + 0.1V_12^k.
In the low confidence scenario the transition is treated stochastically and the update takes a weighted sum over all the vertices of significant weight as well as the
probability mass outside the grid:
V_10^{k+1} = L(x, u) + [ Σ_{x': p(x')>ε} p(x' | f̂, x, u) V^k(x') ] / [ Σ_{x': p(x')>ε} p(x' | f̂, x, u) ]
assumed and later the pdf of each grid point will be computed as the
product of the pdfs for each dimension and a weighted sum of all the
grid points with significant weight will be computed. Also the total
probability mass outside the bounds of the grid is computed and stored.
3. For each state, use the cached information to estimate the cost of choosing each action from that state. Update the value function at that state
according to the cost of the best action found .
4. Repeat 3 until the value function converges, or the desired number of steps
has been reached in finite step problems.
5. Record the best action (policy) for each grid point.
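The two kinds of cached backups can be sketched as follows. This is our illustration for a one-dimensional grid; the real implementation is multi-dimensional with multilinear interpolation, and the handling of the out-of-grid probability mass here — charging it the failure penalty — is our reading of the text.

```python
import numpy as np

OUT_OF_GRID_PENALTY = 1e6   # illustrative constant, matching the failure penalty used later

def deterministic_backup(cost, next_state, grid, V):
    """Eq. (1): interpolate V at the (confidently) predicted next state."""
    j = np.clip(np.searchsorted(grid, next_state), 1, len(grid) - 1)
    t = (next_state - grid[j - 1]) / (grid[j] - grid[j - 1])
    return cost + (1.0 - t) * V[j - 1] + t * V[j]

def stochastic_backup(cost, probs, p_outside, V):
    """Eq. (2): expectation of V under the cached pdf over grid points."""
    return cost + probs @ V + p_outside * OUT_OF_GRID_PENALTY

def value_update(actions_info, V):
    """Best-action value for one grid state, using its cached per-action information."""
    return min(
        stochastic_backup(a["cost"], a["probs"], a["p_outside"], V)
        if a["stochastic"]
        else deterministic_backup(a["cost"], a["next_state"], a["grid"], V)
        for a in actions_info
    )
```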
3 Experiments: Minimal Time Cart-Pole Maneuvers
The inverted pendulum is a well studied problem. It is easy to learn to stabilize it in
a small number of trials, but not easy to learn quick maneuvers. We demonstrate our
algorithm on the harder problem of moving the cart-pole stably from one position
to another as quickly as possible. We assume we have a controller that can balance
the pole and would like to learn to move the cart quickly to new positions, but
never drop the pole during the learning process. The simulation equations and
parameters are from [Barto et al., 1983] and the task is illustrated at the top of fig. 2.
The state vector is x = [ pole angle (θ), pole angular velocity (θ̇), cart position
(p), cart velocity (ṗ) ]. The control vector, u, is the one dimensional force applied
to the cart. x_0 is [0 0 17 0] and the cost function is J = Σ_{k=0}^{N} (x_k^T x_k + 0.01 u_k^T u_k). N is
not fixed. It is determined by the amount of time it takes for the system to reach
a goal region about the target state, [0 0 0 0]. If the pole is dropped, the trial ends
and an additional penalty of 10^6 is incurred.
This problem has properties similar to familiar process control problems such as
cooking, mixing, or cooling, because it is trivial to stabilize the system and it can
be moved slowly to a new desired position while maintaining the stability by slowly
changing positional setpoints. In each case, the goal is to learn how to respond
faster without causing any disasters during, or after, the learning process.
3.1 Learning an LQR controller
We first learn a linear quadratic regulator that balances the pole. This can be done
with minimal data. The system is operated from the state, [0 0 0 0] for 10 steps
of length 0.1 seconds with a controller that chooses u randomly from a zero mean
gaussian with standard deviation 0.5. This is repeated to obtain a total of 20 data
points. That data is used to fit a global linear model mapping x onto x'. An LQR
controller is constructed from the model and the given cost function following the
derivation in [Dyer and McReynolds, 1970].
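A sketch of this construction (ours, not the paper's code): fit x' ≈ A x + B u by least squares from the logged transitions, then obtain the LQR gain for the quadratic cost x^T Q x + u^T R u by iterating the discrete Riccati equation. Matrix names and the iteration count are our own choices.

```python
import numpy as np

def fit_linear_model(X, U, X_next):
    """Least-squares fit of x' ~ A x + B u from logged (x, u, x') triples."""
    Z = np.hstack([X, U])
    W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
    A = W[: X.shape[1], :].T
    B = W[X.shape[1]:, :].T
    return A, B

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate the discrete Riccati equation; the resulting control is u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```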
The resulting linear controller easily stabilizes the pole and can even bring the
system stably (although very inefficiently as it passes through the goal several times
before coming to rest there) to the origin when started as far out as x = [0 0 10 0].
If the cart is started further from the origin, the controller crashes the system.
3.2 Building the initial Bayesian LWR model
We use the LQR controller to generate data for an initial model. The system is
started at x = [0 0 1 0] and controlled by the LQR controller with gaussian noise
added as before. The resulting 50 data points are stored for an LWR model that
maps [θ, θ̇, u] → [θ̈, p̈]. The data in each dimension of the state and control space is
scaled to [0 1]. In this scaled space, the LWR kernel width is set to 1.0.
Next, we consider the deterministic DP method on this model. The grid covers the
ranges [±1.0 ±4.0 ±21.0 ±20.0] and is discretized to [11 9 11 9] levels. The control
is ±30.0, discretized to 15 levels. Any state outside the grid bounds is considered
failure and incurs the 10^6 penalty. If we assume the model is correct, we can use
deterministic DP on the grid to generate a policy. The computation is done with
fixed size steps in time of 0.25 seconds. We observe that this policy is able to move
the system safely from an initial state of [0 0 12 0], but crashes if it is started further
out. Failure occurs because the best path generated using the model strays far from
the region of the data (in variables θ and θ̇) used to construct the model.
It is disappointing that the use of LWR for nonlinear modeling didn't improve much
over a globally linear model and an LQR controller. We believe this is a common
situation. It is difficult to build better controllers from naive use of nonlinear
modeling techniques because the available data models only a narrow region of
operation and safely acquiring a wider range of data is difficult.
3.3 Cautionary dynamic programming
At this point we are ready to test our algorithm. Step 3 is executed using the LWR
model from the data generated by the LQR controller as before. A trace of the
system's operation when started at a distance of 17 from the goal is shown at the
top of fig. 2. The controller is extremely conservative with respect to the angle of
the pole. The pole is never allowed to go outside ±0.13 radians. Even as the cart
approaches the goal at a moderate velocity the controller chooses to overshoot the
goal considerably rather than making an abrupt action to brake the system.
The data from this run is added to the model and the steps are repeated. Traces of
the runs from three iterations of the algorithm are shown in fig. 2. At each trial, the
controller becomes more aggressive and completes the task with less cost. After the
third iteration, no significant improvement is observed. The costs are summarized
and compared with the LQR and deterministic DP controllers in table 1.
Fig. 3 is another illustration of how the policy becomes increasingly aggressive. It
plots the pole angle vs. the pole angular velocity for the original LQR data and the
executions at each of the following three trials. In summary, our algorithm is able
Figure 2: The task is to move the cart to the origin as quickly as possible without
dropping the pole. The bottom three pictures show a trace of the policy execution
obtained after one, two, and three trials (shown in increments of 0.5 seconds)
Controller
LQR
Deterministic D P
Stochastic DP trial 1
Stochastic DP trial 2
Stochastic DP trial 3
Number of data points used
to build the controller
20
50
50
221
272
Cost from initial state 17
failure
failure
12393
7114
6270
Table 1: Summary of experimental results
to start from a simple controller that can stabilize the pole and learn to move it
aggressively over a long distance without ever dropping the pole during learning.
4
Discussion
We have presented an algorithm that uses Bayesian locally weighted regression
models with dynamic programming on a grid. The result is a cautionary adaptive
control algorithm with the flexibility of a non-parametric nonlinear model instead
of the more restrictive parametric models usually considered in the dual control
literature. We note that this algorithm presents a viewpoint on the exploration
vs exploitation issue that is different from many reinforcement learning algorithms,
which are devised to encourage exploration (as in the probing concept in dual control) . However, we argue that modeling the data first with a continuous function
approximator and then doing DP on the model often leads to a situation where
exploration must be inhibited to prevent disasters. This is particularly true in the
case of real, physical systems.
Exploiting Model Uncertainty Estimatesfor Safe Dynamic Control Learning
1.5
"
-'
.. - -.. .
LQR data 0
1st trial
_....... 2nd trial
'3td trial
1
" " " " " "
1053
."
---
0.5
-....
Angular
'.
0
Velocity
..
:
-0.5
" "" " ""
-1
.
.,
",," "
'.
"" "
"
..
.
,,""
"
",,"""" "
-1.5
-0.8
-0.6
-0.4
-0.2
0
Pole Angle
0.2
0.4
0.6
Figure 3: Execution trace. At each iteration, the controller is more aggressive.
References
[Atkeson, 1989) C. Atkeson. Using local models to control movement . In Advances in Neural Information Processing Systems, 1989 .
[Atkeson, 1993] C . Atkeson. Using local trajectory optimizers to speed up global optimization in dynamic programming. In Advances in Neural Information Processing Systems (NIPS-6), 1993 .
[Atkeson , 1995) C . Atkeson . Local methods for active learning. Invited talk at AAAI Fall Symposium
on Active Learning, 1995 .
[Bar-Shalom and Tse, 1976) Y . Bar-Shalom and E . Tse. Concepts and Methods in Stochastic Control.
Academic Press, 1976.
[Barto et al., 1983) A . Barto, R. Sutton, and C. Anderson. Neuronlike adaptive elements that can solve
difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 1983.
[Cleveland and Delvin, 1988) W . Cleveland and S. Delvin . Locally weighted regression: An approach to
regression analysis by local fitting. Journal of the American Statistical Association, pages 596-610,
September 1988.
[Davies, 1996] S. Davies . Applying grid-based interpolation to reinforcement learning. In Neural Information Proceuing Systems 9, 1996.
[DeGroot, 1970) M. DeGroot. Optimal Statistical Decisions. McGraw-Hill, 1970.
[Dyer and McReynolds , 1970) P. Dyer and S. McReynolds. The Computation and Theory of Optimal
Control. Academic Press, 1970.
[Gordon , 1995] G. Gordon. Stable function approximation in dynamic programming. In The 12th
International Conference on Machine Learning, 1995 .
[Kendrick, 1981) D. Kendrick. Stochastic Control for Economic Models. McGraw-Hill, 1981.
[Moore and AtkesoD, 1993) A . Moore and C. Atkeson. Prioritized sweeping: Reinforcement learning
with less data and less real time. Machine Learning, 13(1):103-130,1993.
[Moore and Schneider, 1995] A. Moore and J. Schneider. Memory based stochastic optimization. In
Advances in Neural Information Proceuing Systems (NIPS-B), 1995 .
[Moore, 1992) A. Moore. Fast, robust adaptive control by learning only forward models. In Advances
in Neural Information Processing Systems 4, 1992.
[Schaal and Atkeson, 1993) S. Schaal and C . Atkeson. Assessing the quality of learned local models. In
Advances in Neural Information Processing Systems (NIPS-6), 1993 .
[Sutton, 1990) R. Sutton . First results with dyna, an intergrated architecture for learning, planning,
and reacting. In AAAI Spring Symposium on Planning in Uncertain, Unpredictable , or Changing
Environment", 1990 .
348 | 1,318 | Triangulation by Continuous Embedding
Marina Meilă and Michael I. Jordan
{mmp, jordan }@ai.mit.edu
Center for Biological & Computational Learning
Massachusetts Institute of Technology
45 Carleton St. E25-201
Cambridge, MA 02142
Abstract
When triangulating a belief network we aim to obtain a junction
tree of minimum state space. According to (Rose, 1970), searching
for the optimal triangulation can be cast as a search over all the
permutations of the graph's vertices. Our approach is to embed
the discrete set of permutations in a convex continuous domain D.
By suitably extending the cost function over D and solving the
continous nonlinear optimization task we hope to obtain a good
triangulation with respect to the aformentioned cost. This paper
presents two ways of embedding the triangulation problem into
continuous domain and shows that they perform well compared to
the best known heuristic.
1 INTRODUCTION. WHAT IS TRIANGULATION?
Belief networks are graphical representations of probability distributions over a set
of variables. In what follows it will be always assumed that the variables take
values in a finite set and that they correspond to the vertices of a graph. The
graph's arcs will represent the dependencies among variables. There are two kinds of
representations that have gained wide use : one is the directed acyclic graph model,
also called a Bayes net, which represents the joint distribution as a product of the
probabilities of each vertex conditioned on the values of its parents; the other is the
undirected graph model, also called a Markov field, where the joint distribution is
factorized over the cliques! of an undirected graph. This factorization is called a
junction tree and optimizing it is the subject of the present paper. The power of both
models lies in their ability to display and exploit existent marginal and conditional
independencies among subsets of variables. Emphasizing independencies is useful
1 A clique is a fully connected set of vertices and a maximal clique is a clique that is
not contained in any other clique.
from both a qualitative point of view (it reveals something about the domain under
study) and a quantitative one (it makes computations tractable). The two models
differ in the kinds of independencies they are able to represent and often times
in their naturalness in particular tasks. Directed graphs are more convenient for
learning a model from data; on the other hand, the clique structure of undirected
graphs organizes the information in a way that makes it immediately available to
inference algorithms. Therefore it is a standard procedure to construct the model
of a domain as a Bayes net and then to convert it to a Markov field for the purpose
of querying it.
This process is known as decomposition and it consists of the following stages:
first, the directed graph is transformed into an undirected graph by an operation
called moralization. Second, the moralized graph is triangulated. A graph is called
triangulated if any cycle of length > 3 has a chord (i.e. an edge connecting two
nonconsecutive vertices). If a graph is not triangulated it is always possible to add
new edges so that the resulting graph is triangulated. We shall call this procedure
triangulation and the added edges the fill-in. In the final stage, the junction tree
(Kjrerulff, 1991) is constructed from the maximal cliques of the triangulated graph.
We define the state space of a clique to be the cartesian product of the state spaces
of the variables associated to the vertices in the clique and we call weight of the
clique the size of this state space. The weight of the junction tree is the sum of the
weights of its component cliques. All further exact inference in the net takes place
in the junction tree representation. The number of computations required by an
inference operation is proportional to the weight of the tree.
For each graph there are several and usually a large number of possible triangulations, with widely varying state space sizes. Moreover, triangulation is the only
stage where the cost of inference can be influenced. It is therefore critical that the
triangulation procedure produces a graph that is optimal or at least "good" in this
respect.
Unfortunately, this is a hard problem. No optimal triangulation algorithm is known
to date. However, a number of heuristic algorithms like maximum cardinality search
(Tarjan and Yannakakis, 1984), lexicographic search (Rose et al., 1976) and the
minimum weight heuristic (MW) (Kjrerulff, 1990) are known. An optimization
method based on simulated annealing which performs better than the heuristics
on large graphs has been proposed in (Kjrerulff, 1991) and recently a "divide and
conquer" algorithm which bounds the maximum clique size of the triangulated graph
has been published (Becker and Geiger, 1996). All but the last algorithm are based
on Rose's (Rose, 1970) elimination procedure: choose a node v of the graph, connect
all its neighbors to form a clique, then eliminate v and all the edges incident to it
and proceed recursively. The resulting filled-in graph is triangulated.
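As a concrete reading of this elimination procedure, the short sketch below (an illustrative Python transcription, not part of the original paper; the dictionary-of-sets graph representation and the function name are assumptions) records the fill-in edges and the cliques C_v that reappear in the next section:

def eliminate(adj, order):
    """Rose's elimination: return the fill-in edges and the cliques C_v.

    adj   : dict mapping each vertex to the set of its neighbours (undirected graph)
    order : list of vertices; order[i] is the i-th vertex to be eliminated
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}      # work on a copy
    fill_in, cliques = set(), {}
    for v in order:
        nbrs = adj[v]
        cliques[v] = {v} | nbrs                          # C_v = {v} plus its later neighbours
        for u in nbrs:                                   # connect all neighbours of v ...
            for w in nbrs:
                if u != w and w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill_in.add(frozenset((u, w)))       # ... recording any new (fill-in) edge
        for u in nbrs:                                   # eliminate v and its incident edges
            adj[u].discard(v)
        del adj[v]
    return fill_in, cliques

Calling eliminate with any vertex ordering yields a triangulation of the input graph; different orderings produce fill-ins of very different sizes.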
It can be proven that the optimal triangulation can always be obtained by applying
Rose's elimination procedure with an appropriate ordering of the nodes. It follows
then that searching for an optimal triangulation can be cast as a search in the space
of all node permutations. The idea of the present work is the following: embed
the discrete search space of permutations of n objects (where n is the number of
vertices) into a suitably chosen continuous space. Then extend the cost to a smooth
function over the continuous domain and thus transform the discrete optimization
problem into a continuous nonlinear optimization task. This allows one to take
advantage of the thesaurus of optimization methods that exist for continuous cost
functions. The rest of the paper will present this procedure in the following sequence:
the next section introduces and discusses the objective function; section 3 states
the continuous version of the problem; section 4 discusses further aspects of the
optimization procedure and presents experimental results and section 5 concludes
the paper.
2 THE OBJECTIVE
In this section we introduce the objective function that we used and we discuss its
relationship to the junction tree weight. First, some notation. Let G = (V, E) be a
graph, its vertex set and its edge set respectively. Denote by n the cardinality of the
vertex set, by ru the number of values of the (discrete) variable associated to vertex
v E V, by # the elimination ordering of the nodes, such that #v = i means that
node v is the i-th node to be eliminated according to ordering #, by n(v) the set of
neighbors of v ∈ V in the triangulated graph and by C_v = {v} ∪ {u ∈ n(v) | #u > #v}.² Then, a result in (Golumbic, 1980) allows us to express the total weight of
the junction tree obtained with elimination ordering # as
J*(#) = Σ_{v∈V} ismax(C_v) Π_{u∈C_v} r_u        (1)
where ismax(Cu ) is a variable which is 1 when C u is a maximal clique and 0 otherwise. As stated, this is the objective of interest for belief net triangulation. Any
reference to optimality henceforth will be made with respect to J* .
This result implies that there are no more than n maximal cliques in a junction tree
and provides a method to enumerate them. This suggests defining a cost function
that we call the raw weight J as the sum over all the cliques C_v (thus possibly
including some non-maximal cliques):

J(#) = Σ_{v∈V} Π_{u∈C_v} r_u        (2)
J is the cost function that will be used throughout this paper. A reason to use
it instead of J* in our algorithm is that the former is easier to compute and to
approximate. How to do this will be the object of the next section. But it is
natural to ask first how well do the two agree?
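As a concrete comparison of the two costs, the sketch below (illustrative code, not from the paper; it assumes the cliques dictionary returned by the elimination sketch above and a dictionary r of per-vertex state-space sizes) evaluates both J of eq. (2) and J* of eq. (1):

import math

def raw_weight(cliques, r):
    """J of eq. (2): sum over every clique C_v of the product of state-space sizes r_u."""
    return sum(math.prod(r[u] for u in C) for C in cliques.values())

def tree_weight(cliques, r):
    """J* of eq. (1): the same sum restricted to maximal cliques (ismax(C_v) = 1)."""
    total = 0
    for v, C in cliques.items():
        if not any(C < D for w, D in cliques.items() if w != v):   # C is not a strict subset
            total += math.prod(r[u] for u in C)
    return total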
Obviously, J is an upper bound for J*. Moreover, it can be proved that if r = min_u r_u, then

J ≤ (1 + 1/(r − 1)) J*        (3)
and therefore that J is less than a fraction 1/(r - 1) away from J* . The upper
bound is attained when the triangulated graph is fully connected and all ru are
equal.
In other words, the differece between J and J* is largest for the highest cost triangulation. We also expect this difference to be low for the low cost triangulation .
An intuitive argument for this is that good triangulations are associated with a
large number of smaller cliques rather than with a few large ones. But the former
situation means that there will be only a small number of small size non-maximal
cliques to contribute to the difference J - J* , and therefore that the agreement with
J* is usually closer than (3) implies. This conclusion is supported by simulations
(Meila. and Jordan, 1997).
²Both n(v) and C(·) depend on # but we chose not to emphasize this in the notation for the sake of readability.
3 THE CONTINUOUS OPTIMIZATION PROBLEM
This section shows two ways of defining J over continuous domains. Both rely on
a formulation of J that eliminates explicit reference to the cliques Gu ; we describe
this formulation here.
Let us first define new variables µ_{uv} and e_{uv}, for u, v = 1, .., n. For any permutation #:

µ_{uv} = 1 if #u ≤ #v, and 0 otherwise
e_{uv} = 1 if the edge (u, v) ∈ E ∪ F_#, and 0 otherwise

where F_# is the set of fill-in edges. In other words, µ represents precedence relationships and e represents the edges
between the n vertices. Therefore, they will be called precedence variables and edge
variables respectively. With these variables, J can be expressed as
J(#) = Σ_{v∈V} Π_{u∈V} r_u^{µ_{vu} e_{vu}}        (4)

In (4), the product µ_{vu} e_{vu} acts as an indicator variable that is 1 iff u ∈ C_v.
For any given permutation, finding the µ variables is straightforward. Computing
the edge variables is possible thanks to a result in (Rose et al., 1976). It states that
an edge (u, v) is contained in F_# iff there is a path in G between u and v containing
only nodes w for which #w < min(#u, #v). Formally, e_{uv} = 1 iff there exists
a path P = (u, w_1, w_2, ... v) such that

Π_{w_i ∈ P} µ_{w_i u} µ_{w_i v} = 1
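For small graphs, this path characterization and eq. (4) can be verified by brute force. The sketch below is an illustrative transcription (not from the paper; the graph-search routine and all names are assumptions) and follows the index conventions reconstructed above, so that mu[v][u] * e[v][u] indicates u ∈ C_v:

import math

def mu_from_order(order):
    """mu[u][v] = 1 if #u <= #v for the given elimination ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return {u: {v: 1 if pos[u] <= pos[v] else 0 for v in order} for u in order}

def edge_vars(adj, order):
    """e[u][v] = 1 iff (u, v) is an original or fill-in edge (Rose et al., 1976):
    there is a u-v path whose interior nodes all precede both u and v."""
    pos = {v: i for i, v in enumerate(order)}
    e = {u: {v: 0 for v in order} for u in order}
    for u in order:
        for v in order:
            if u == v or v in adj[u]:
                e[u][v] = 1
                continue
            limit = min(pos[u], pos[v])
            stack, seen = [u], {u}
            while stack:                     # search restricted to nodes w with #w < min(#u, #v)
                w = stack.pop()
                for x in adj[w]:
                    if x == v:
                        e[u][v] = 1
                        stack = []
                        break
                    if x not in seen and pos[x] < limit:
                        seen.add(x)
                        stack.append(x)
    return e

def J_of_eq4(mu, e, r, order):
    """Eq. (4): J(#) = sum over v of the product over u of r_u ** (mu[v][u] * e[v][u])."""
    return sum(math.prod(r[u] ** (mu[v][u] * e[v][u]) for u in order) for v in order)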
So far, we have succeeded in defining the cost J associated with any permutation
in terms of the variables µ and e. In the following, the set of permutations will be
embedded in a continuous domain. As a consequence, µ and e will take values in
the interval [0,1] but the form of J in (4) will stay the same.
The first method, called µ-continuous embedding (µ-CE), assumes that the variables µ_{uv} ∈ [0,1] represent independent probabilities that #u < #v. For any
permutation, the precedence variables have to satisfy the transitivity condition.
Transitivity means that if #u < #v and #v < #w, then #u < #w, or, that for
any triple (µ_{uv}, µ_{vw}, µ_{wu}) the assignments (0,0,0) and (1,1,1) are forbidden. According to the probabilistic interpretation of µ we introduce a term that penalizes
the probability of a transitivity violation:

R_µ = Σ_{u<v<w} P[(u, v, w) nontransitive]        (5)
    = Σ_{u<v<w} [ µ_{uv} µ_{vw} µ_{wu} + (1 − µ_{uv})(1 − µ_{vw})(1 − µ_{wu}) ]        (6)
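A direct transcription of this penalty (an illustrative Python sketch, not from the paper; the function name and the ordered vertex list are assumptions) is:

def transitivity_penalty(mu, vertices):
    """R_mu of eqs. (5)-(6): for every triple u < v < w, the probability mass placed
    on the two forbidden (cyclic) assignments (1,1,1) and (0,0,0)."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                u, v, w = vertices[i], vertices[j], vertices[k]
                p, q, s = mu[u][v], mu[v][w], mu[w][u]
                total += p * q * s + (1.0 - p) * (1.0 - q) * (1.0 - s)
    return total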
In the second approach, called θ-continuous embedding (θ-CE), the permutations
are directly embedded into the set of doubly stochastic matrices. A doubly stochastic
matrix θ is a matrix for which the elements in a row or column sum to one:

Σ_i θ_{ij} = Σ_j θ_{ij} = 1 ,   θ_{ij} ≥ 0  for i, j = 1, .., n.        (8)

When the θ_{ij} are either 0 or 1, implying that there is exactly one nonzero element
in each row or column, the matrix is called a permutation matrix. θ_{ij} = 1 and
#i = j both mean that the position of object i is j in the given permutation. The
set of doubly stochastic matrices Θ is a convex polytope of dimension (n − 1)²
whose extreme points are the permutation matrices (Balinski and Russakoff, 1974).
Thus, every doubly stochastic matrix can be represented as a convex combination
of permutation matrices. To constrain the optimum to be an extreme point, we
use the penalty term

R(θ) = Σ_{ij} θ_{ij} (1 − θ_{ij})        (9)

The precedence variables are defined over Θ as µ_{uv}, with µ_{uv} = 1 − µ_{vu} and µ_{uu} = 1.
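The θ-CE penalty of eq. (9) is just as direct to evaluate; the sketch below (illustrative code, with numpy as an assumed convenience) also checks the doubly stochastic constraints of eq. (8):

import numpy as np

def ds_penalty(theta):
    """R(theta) of eq. (9): zero exactly when every entry is 0 or 1, i.e. when the
    doubly stochastic matrix theta is a permutation matrix."""
    theta = np.asarray(theta, dtype=float)
    assert np.allclose(theta.sum(axis=0), 1.0) and np.allclose(theta.sum(axis=1), 1.0)
    return float((theta * (1.0 - theta)).sum())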
Now, for both embeddings, the edge variables can be computed from µ as follows:

e_{uv} = 1                                          for (u, v) ∈ E or u = v
e_{uv} = max_{P ∈ {paths u-v}} Π_{w∈P} µ_{wu} µ_{wv}        otherwise

The above assignments give the correct values for µ and e for any point representing
a permutation. Over the interior of the domain, e is a continuous, piecewise differentiable function. Each e_{uv}, (u, v) ∉ E, can be computed by a shortest path algorithm
between u and v, with the length of (w_1, w_2) ∈ E defined as −log(µ_{w_1 u} µ_{w_2 v}).
θ-CE is an interior point method whereas in µ-CE the current point, although inside
[0,1]^{n(n−1)/2}, isn't necessarily in the convex hull of the hypercube's corners that
represent permutations. The number of operations required for one evaluation of J
and its gradient is as follows: O(n⁴) operations to compute µ from θ, O(n³ log n) to
compute e, and O(n³) and O(n²) for the required partial derivatives afterwards. Since computing µ is
the most computationally intensive step, µ-CE is a clear win in terms of computation
cost. In addition, by operating directly in the µ domain, one level of approximation
is eliminated, which makes one expect µ-CE to perform better than θ-CE. The
results in the following section will confirm this.
4 EXPERIMENTAL RESULTS
To assess the performance of our algorithms we compared their results with the
results of the minimum weight heuristic (MW), the heuristic that scored best in
empirical tests (Kjrerulff, 1990). The lowest junction tree weight obtained in 200
runs of MW was retained and denoted by J*_MW. Tests were run on 6 graphs of
different sizes and densities:

graph                  h9       h12      d10       m20      a20       d20
n = |V|                9        12       10        20       20        20
density                .33      .25      .6        .25      .45       .6
r_min/r_max/r_avg      2/2/2    3/3/3    6/15/10   2/8/5    6/15/10   6/15/10
log10 J*_MW            2.43     2.71     7.44      5.47     12.75     13.94

The last row of the table shows log10 J*_MW. We ran 11 or more trials of each
of our two algorithms on each graph. To enforce the variables to converge to a
permutation, we minimized the objective J + λR, where λ > 0 is a parameter
that was progressively increased following a deterministic annealing schedule and
R is one of the aforementioned penalty terms. The algorithms were run for 50-150 optimization cycles, usually enough to reach convergence. However, for the
µ-embedding on graph d20, there were several cases where many µ values did not
converge to 0 or 1. In those cases we picked the most plausible permutation to be
the answer.

[Figure 1: bar plots of the ratio J*/J*_MW over the six graphs h9, h12, d10, m20, a20, d20; panels (a) and (b).]
Figure 1: Minimum, maximum (solid line) and median (dashed line) values of
J*/J*_MW obtained by θ-CE (a) and µ-CE (b).
The results are shown in figure 1 in terms of the ratio of the true cost obtained by
the continuous embedding algorithm (denoted by J*) and J*_MW. For the first two
graphs, h9 and h12, J*_MW is the optimal cost; the embedding algorithms reach it
in most trials. On the remaining graphs, µ-CE clearly outperforms θ-CE, which also
performs poorer than MW on average. On d10, a20 and m20 it also outperforms
the MW heuristic, attaining junction tree weights that are 1.6 to 5 times lower
on average than those obtained by MW. On d20, a denser graph, the results are
similar for MW and µ-CE in half of the cases and worse for µ-CE otherwise. The
plots also show that the variability of the results is much larger for CE than for
MW. This behaviour is not surprising, given that the search space for CE, although
continuous, comprises a large number of local minima. This induces dependence on
the initial point and, as a consequence, nondeterministic behaviour of the algorithm.
Moreover, while the number of choices that MW has is much lower than the upper
limit of n!, the "choices" that CE algorithms consider, although soft, span the space
of all possible permutations.
5 CONCLUSION
The idea of continuous embedding is not new in the field of applied mathematics.
The large body of literature dealing with smooth (sygmoidal) functions instead
of hard nonlinearities (step functions) is only one example. The present paper
shows a nontrivial way of applying a similar treatment to a new problem in a new
field. The results obtained by it-embedding are on average better than the standard
MW heuristic. Although not directly comparable, the best results reported on
triangulation (Kjrerulff, 1991; Becker and Geiger, 1996) are only by little better
than ours. Therefore the significance of the latter goes beyond the scope of the
present problem. They are obtained on a hard problem, whose cost function has no
feature to ease its minimization (J is neither linear, nor quadratic, nor is it additive
w.r.t. the vertices or the edges) and therefore they demonstrate the potential of
continuous embedding as a general tool.
Collaterally, we have introduced the cost function J, which is directly amenable
to continuous approximations and is in good agreement with the true cost J*.
Since minimizing J may not be NP-hard, this opens a way for investigating new
triangulation methods.
Acknowledgements
The authors are grateful to Tommi Jaakkola for many discussions and to Ellie
Bonsaint for her invaluable help in typing the paper.
References
Balinski, M. and Russakoff, R. (1974). On the assignment polytope. SIAM Rev.
Becker, A. and Geiger, D. (1996) . A sufficiently fast algorithm for finding close to
optimal junction trees. In UAI 96 Proceedings.
Golumbic, M. (1980) . Algorithmic Graph Theory and Perfect Graphs. Academic
Press, New York .
Kjrerulff, U. (1990) . Triangulation of graphs-algorithms giving small total state
space. Technical Report R 90-09 , Department of Mathematics and Computer
Science, Aalborg University, Denmark.
Kjrerulff, U. (1991). Optimal decomposition of probabilistic networks by simulated
annealing. Statistics and Computing.
Meila, M. and Jordan, M. I. (1997). An objective function for belief net triangulation. In Madigan, D., editor, AI and Statistics, number 7. (to appear).
Rose, D. J. (1970). Triangulated graphs and the elimination process. Journal of
Mathematical Analysis and Applications.
Rose, D. J., Tarjan, R. E., and Lueker, E . (1976). Algorithmic aspects of vertex
elimination on graphs. SIAM J. Comput.
Tarjan, R. and Yannakakis, M. (1984). Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs, and select reduced acyclic hypergraphs. SIAM J. Comput.
349 | 1,319 | Cholinergic Modulation Preserves Spike
Timing Under Physiologically Realistic
Fluctuating Input
Akaysha C. Tang
The Salk Institute
Howard Hughes Medical Institute
Computational Neurobiology Laboratory
La Jolla, CA 92037
Andreas M. Bartels
Zoological Institute
University of Zurich
Ziirich
Switzerland
Terrence J. Sejnowski
The Salk Institute
Howard Hughes Medical Institute
Computational Neurobiology Laboratory
La Jolla, CA 92037
Abstract
Neuromodulation can change not only the mean firing rate of a
neuron, but also its pattern of firing . Therefore, a reliable neural coding scheme, whether a rate coding or a spike time based
coding, must be robust in a dynamic neuromodulatory environment . The common observation that cholinergic modulation leads
to a reduction in spike frequency adaptation implies a modification of spike timing, which would make a neural code based on
precise spike timing difficult to maintain. In this paper, the effects
of cholinergic modulation were studied to test the hypothesis that
precise spike timing can serve as a reliable neural code. Using the
whole cell patch-clamp technique in rat neocortical slice preparation and compartmental modeling techniques, we show that cholinergic modulation, surprisingly, preserved spike timing in response
to fluctuating inputs that resemble in vivo conditions. This result suggests that in vivo spike timing may be much more resistant
to changes in neuromodulator concentrations than previous physiological studies have implied.
1 Introduction
Recently, there has been a vigorous debate concerning the nature of neural coding
(Rieke et al. 1996; Stevens and Zador 1995; Shadlen and Newsome 1994). The prevailing view has been that the mean firing rate conveys all information about the
sensory stimulus in a spike train and the precise timing of the individual spikes is
noise. This belief is, in part, based on a lack of correlation between the precise timing of the spikes and the sensory qualities of the stimulus under study, particularly,
on a lack of spike timing repeatability when identical stimulation is delivered. This
view has been challenged by a number of recent studies, in which highly repeatable
temporal patterns of spikes can be observed both in vivo (Bair and Koch 1996;
Abeles et al. 1993) and in vitro (Mainen and Sejnowski 1994). Furthermore, application of information theory to the coding problem in the frog and house fly (Bialek
et al. 1991; Bialek and Rieke 1992) suggested that additional information could be
extracted from spike timing. In the absence of direct evidence for a timing code in
the cerebral cortex, the role of spike timing in neural coding remains controversial.
1.1 A necessary condition for a spike timing code
If spike timing is important in defining a stimulus, precisely timed spikes must
be maintained under a range of physiological conditions. One important aspect
of a neuron's environment is the presence of various neuromodulators. Due to
their widespread projections in the nervous system, major neuromodulators, such
as acetylcholine (ACh) and norepinephrine (NA), can have a profound influence on
the firing properties of most neurons. If a change in concentration of a neuromodulator completely alters the temporal structure of the spike train , it would be unlikely
that spike timing could serve as a reliable neural code. A major effect of cholinergic modulation on cortical neurons is a reduction in spike frequency adaptation,
which is characterized by a shortening of inter-spike-intervals and an increase in
neuronal excitability (McCormick 1993; Nicoll 1988) . One obvious consequence of
this cholinergic effect is a modification of spike timing (Fig. 1A). This modification
of spike timing due to a change in neuromodulator concentration would seem to
preclude the possibility of a neural code based on precise spike timing .
1.2 Re-examination of the cholinergic modulation of spike timing
Despite its popularity, the square pulse stimulus used in most eletrophysiological
studies is rarely encountered by a cortical neuron under physiological conditions.
The corresponding behavior of the neuron at the input/output level may have limited relevance to the behavior of the neuron under its natural condition, which is
characterized in vivo by highly fluctuating synaptic inputs. In this paper, we reexamine the effect of cholinergic modulation on spike timing under two contrasting
stimulus conditions: the physiologically unrealistic square pulse input versus the
more plausible fluctuating input. We report that under physiologically more realistic fluctuating inputs, effects of cholinergic modulation preserved the timing of each
individual spike (Fig . IB). This result is consistent with the hypothesis that spike
timing may be relevant to information encoding.
2 Methods
2.1 Experimental
Using the whole cell patch-clamp technique, we made somatic recordings from layer
2/3 neocortical neurons in the rat visual cortex. Coronal slices of 400 µm were
prepared from 14 to 18 days old Long Evans rats (for details see Mainen and Sejnowski 1994). Spike trains elicited by current injection of 900 ms were recorded for
the square pulse inputs and fluctuating inputs with equal mean synaptic inputs, in
the absence and presence of the cholinergic agonist carbachol. The fluctuating inputs
were constructed from Gaussian noise and convolved with an alpha function with a
time constant of 3 ms, reflecting the time course of the synaptic events. The amplitude of fluctuation was such that the subthreshold membrane potential fluctuations
observed in our experiments were comparable to those in whole-cell patch clamp
studies in vivo (Ferster and Jagadeesh 1992). The cholinergic agonist carbachol at
concentrations of 5, 7.5, 15, 30 µM was delivered through bath perfusion (perfusion
time: between 1 and 6 min). For each cell, three sets of blocks were recorded before,
during and after carbachol perfusion at a given concentration. Each block contained
20 trials of stimulation under identical experimental conditions.
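A minimal sketch of such a stimulus (illustrative code, not from the paper; the 1 ms time step and the mean and fluctuation amplitudes are placeholders, since the text only fixes the 3 ms alpha-function time constant and the matched mean input) could be written as:

import numpy as np

def fluctuating_current(duration_ms=900.0, dt_ms=1.0, mean_pA=100.0,
                        sigma_pA=50.0, tau_ms=3.0, seed=0):
    """Gaussian noise convolved with an alpha function of time constant tau_ms."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_ms, dt_ms)
    noise = rng.normal(0.0, 1.0, t.size)
    kt = np.arange(0.0, 10.0 * tau_ms, dt_ms)
    alpha = (kt / tau_ms) * np.exp(1.0 - kt / tau_ms)         # alpha-function kernel
    filtered = np.convolve(noise, alpha, mode="same")
    filtered = (filtered - filtered.mean()) / filtered.std()  # rescale to unit fluctuation
    return t, mean_pA + sigma_pA * filtered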
2.2 Simulation
We used a compartmental model of a neocortical neuron to explore the contribution
of three potassium conductances affected by cholinergic modulation (Madison et al.
1987). Simulations were performed in a reduced 9-compartment model, based on
a layer 2 pyramidal cell reconstruction using the NEURON program. The model
had five conductances: g_Na, g_Kv, g_KM, g_Ca, and g_K(Ca). Membrane resistivity was
40 kΩ·cm², capacitance was 1 µF/cm², and axial resistance was 200 Ω·cm. Intrinsic
noise was simulated by injecting a randomly fluctuating current to fit the spike jitter
observed experimentally. Different potassium conductances were manipulated as
independent variables and the spike timing displacement was measured for multiple
levels of conductance change corresponding to multiple concentrations of carbachol.
2.3 Data analysis
For both experimental and simulation data, first derivatives were used to detect
spikes and to determine the timing for spike initiation. Raster plots of the spike
trains were derived from the series of membrane potentials for each trial, and a
smoothed histogram was then constructed to reflect the instantaneous firing rate
for each block of trials under identical stimulation and pharmacological conditions.
An event was then defined as a period of increase in instantaneous firing rate that
is greater than a threshold level (set at 3 times the mean firing rate within the
block of trials) (Mainen and Sejnowski 1994).
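These steps can be sketched as follows (an illustrative Python reading of the analysis; the derivative threshold, bin width and smoothing kernel are assumptions that the text does not specify):

import numpy as np

def detect_spikes(vm, dt_ms=1.0, dvdt_thresh=10.0):
    """Spike initiation times from the first derivative of the membrane potential."""
    dvdt = np.diff(vm) / dt_ms
    idx = np.where((dvdt[1:] >= dvdt_thresh) & (dvdt[:-1] < dvdt_thresh))[0] + 1
    return idx * dt_ms

def event_peaks(spike_times, n_trials, duration_ms=900.0, bin_ms=1.0, smooth_ms=2.0):
    """Smoothed instantaneous firing rate over a block of trials, and the peak times of
    'events', i.e. periods where the rate exceeds 3 times its mean."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    rate, _ = np.histogram(np.concatenate(spike_times), bins=edges)
    rate = rate / (n_trials * bin_ms / 1000.0)                       # spikes per second
    k = np.exp(-0.5 * (np.arange(-3, 4) * bin_ms / smooth_ms) ** 2)
    rate = np.convolve(rate, k / k.sum(), mode="same")
    above = rate > 3.0 * rate.mean()
    peaks, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            peaks.append(start + int(rate[start:i].argmax()))
            start = None
    return rate, np.array(peaks) * bin_ms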
The effect of carbachol on spike timing under fluctuating inputs was quantified by
defining the displacement in spike timing for each event, d_i, as the time difference
between the nearest peaks of the events under carbachol and control condition. The
weight for each event, Wi, is determined by the peak of the event. The higher the
peak, the less the spike jitter. The mean displacement is
D = (Σ_i w_i d_i) / (Σ_i w_i)        (1)
where i= 1, 2, ... nth event in the control condition.
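Under the nearest-peak matching described here and the weighted-average reading of eq. (1) given above, the computation amounts to the following sketch (illustrative code; the matching rule and normalization are our reading of the text, not a quoted implementation):

import numpy as np

def mean_displacement(control_peaks, carbachol_peaks, control_weights):
    """Weighted mean displacement D of eq. (1): each control event is matched to the
    nearest carbachol event and the timing differences d_i are averaged with the
    event-peak weights w_i."""
    d = np.array([np.min(np.abs(np.asarray(carbachol_peaks) - t)) for t in control_peaks])
    w = np.asarray(control_weights, dtype=float)
    return float((w * d).sum() / w.sum())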
3 Results
3.1 Experimental
The effects of carbachol on spike timing under the square pulse and fluctuating
inputs are shown in Fig. 1A and B respectively. In the absence of carbachol, a
square pulse input produced a spike train with clear spike frequency adaptation
(Fig. 1A1). Similar to previous reports from the literature, addition of carbachol
to the perfusion medium reduced spike frequency adaptation (Fig. 1A2). This
reduction in spike frequency adaptation is reflected in the shortening of inter-spike-intervals and an increase in the firing frequency. Most importantly, spike timing
was altered by carbachol perfusion. When a fluctuating current was injected, the
strong spike frequency adaptation observed under a square pulse input was no longer
apparent (Fig. 1B1). Unlike the results under the square pulse condition, addition
of carbachol to the bath medium preserved the timing of the spikes (Fig. 1B2). An
increased excitability was achieved with the insertion of additional spikes between
the existing spikes.
Figure 1: Response of a cortical neuron to square pulse current injection (A) and a
fluctuating input (B). The membrane potential during the 1024 ms sampling period
is plotted as a function of time for the two types of inputs (onset: 5 ms; duration:
900 ms). The grey lines show where the spikes occurred in the upper traces.
Preservation of spike timing under carbachol was examined at concentrations of 5,
7.5, 15, and 30 µM, here shown in one cell (Fig. 2, 5 µM). The smoothed histograms
(as described in section 2.3) were plotted for blocks of 20 identical trials under
the same fluctuating input. The alignment of the events between the control and
carbachol indicates that spike timing was well preserved. The table gives the mean
spike displacement, D, for a range of carbachol concentrations. The spike jitter
within the control and carbachol conditions was approximately 1 ms, and was not
changed significantly by carbachol (control: 0.96 ± 0.3; carbachol: 0.94 ± 0.42 ms).
3.2 Simulation
The model captured the basic characteristics of experimental data. In response to
fluctuating inputs, the model neurons showed reduced spike frequency adaptation
and preservation of spike timing. The in vitro experiments were limited to only
two levels of stimulus fluctuation.

[Figure 2 layout: smoothed histograms for the control (top) and carbachol (bottom, inverted) conditions; the accompanying table lists mean displacement D (ms), number of cells N, and carbachol concentration (µM): 2.76 ± 0.38 / 15 / 5-7.5; 3.33 / 1 / 15; 9.3 / 2 / 30.]
Figure 2: Preservation of spike timing for a range of carbachol concentrations.
Left: the top portion is the histogram for the control condition; the bottom is the
histogram for the carbachol condition shown inverted. The alignment of the events
between the control and carbachol indicates preserved timing. Right: statistics of
spike displacement.

[Figure 3 plot: adaptation measure versus fluctuation amplitude (pA), 0-150 pA on the abscissa.]
Figure 3: Reduced adaptation as a function of increasing stimulus fluctuation.
Adaptation measured as a normalized spike count difference between the first and
second halves of the 900 ms stimulation: (C2-C1)/C1.

To show that reduced adaptation in response to fluctuating inputs is a general phenomenon, in the model neuron we measured
adaptation for multiple levels of stimulus fluctuation. As shown in Fig. 3, spike frequency adaptation decreased as a function of increasing stimulus fluctuation over a
range of fluctuation amplitude. The effects of cholinergic modulation on spike timing
were studied under simulated cholinergic modulation. Similar to the experimental
finding, increased neuronal excitability to fluctuating inputs was accompanied by
insertion of additional spikes (Fig. 4 left) and spike timing was preserved simultaneously (Fig. 4 right).
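For reference, the adaptation measure used for Fig. 3, (C2 - C1)/C1, amounts to the following short sketch (illustrative code only):

import numpy as np

def adaptation_index(spike_times_ms, duration_ms=900.0):
    """Normalized spike-count difference between the second and first halves of the stimulus."""
    spikes = np.asarray(spike_times_ms)
    c1 = np.sum(spikes < duration_ms / 2.0)
    c2 = np.sum(spikes >= duration_ms / 2.0)
    return (c2 - c1) / c1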
In real neurons, the total effects of cholinergic modulation depends on its effects on
at least three potassium conductances. Using the model, we examined the effects of
manipulating each of the three potassium conductances on spike displacement and
spike jitter. We found that (1) spike displacement due to reduction in potassium
conductances were all very small, on the order of a few milliseconds (Fig. 5 top row);
(2) Compared to the conductances underlying I_M and I_leak, spike displacement was
most sensitive to changes in the conductance underlying IAHP (Fig. 5 top row),
whose reduction alone led to the best reproduction of the experimental data; (3)
spike jitters of approximately 1 ms were independent of the values of the three
potassium conductances (Fig. 5 bottom row). These results make predictions for
new experiments where each individual current is blocked selectively.

[Figure 4 example traces; scale bars: 100 ms, 30 mV.]
Figure 4: Preservation of spike timing in the model neocortical neuron. Left: Responses of the model neuron to fluctuating input. Top: replicating data from the
control condition. Bottom: reproducing the carbachol effect by blocking the adaptation current, IAHP. Right: histogram display of preservation of spike timing in a
block of 20 trials.
4 Conclusions
The results showed that under the physiologically realistic fluctuating input, the
effects of cholinergic modulation on spike timing are rather different from that
observed when unphysiological square pulse inputs were used. Instead of moving the spikes forward in time by shortening the inter-spike-intervals, cholinergic
modulation preserved spike timing. This preservation of spike timing was achieved
simultaneously with an increase in neuronal excitability.
According to the classical view of neuromodulation, one would have expected that
a spike timing based neural code would be difficult to maintain across a range of
neuromodulator concentrations. The fact that spike timing was rather resistant to
changes in the neuromodulatory environment raises the possibility that spike timing
may serve some function in the cortex.
The differential effect of cholinergic modulation on spike timing observed under the
square pulse and fluctuating inputs also calls for caution in generalizing an observation from one set of parameter values to another, especially when generalizing
from in vitro to in vivo. This concern for external validity is particularly important
for computational neuroscientists whose work involves integrating phenomena from
the cellular, systems and finally, to behavioral levels.
Acknowledgments
Supported by the Howard Hughes Medical Institute. We are grateful to Zachary
Mainen , Barak Pearlmutter, Raphael Ritz, Anthony Zador, David Horn, Chuck
Stevens, William Bialek, and Christof Koch for helpful discussions.
References
[Figure 5: spike displacement (ms, top row) and spike jitter (ms, bottom row) versus reduction of conductance (%) for the three potassium conductances, including gK,leak.]
Figure 5: Effects of individual conductance changes on spike timing. Top: spike
displacement as a function of changing conductances. Bottom: spike jitter as a
function of changing conductances. Each conductance was reduced from its control
value which was determined by fitting experimentally observed spike trains. The
range of change for the leak conductance was constrained by the experimentally
observed resting membrane potential changes (avg. 5 mV.)

Abeles, M., Bergman, H., Margalit, E., and Vaadia, E. (1993). Spatiotemporal
firing patterns in the frontal cortex of behaving monkeys. J. Neurophysiol.,
70, 1629-1638.
Bair, W . and Koch, C . (1996). Temporal precision of spike trains in extrastriate
cortex of the behaving Macaque monkey. Neural Computation, 8(6), 1184-1202.
Bialek, W. and Rieke, F. (1992). Reliability and information transmission in spiking
neurons. Trends Neurosci., 15,428-434.
Bialek, W ., Rieke, F ., de Ruyter van Stevenick, R. R., and Warland, D. (1991) .
Reading a neural code. Science, 252, 1854-7.
Ferster, D. and Jagadeesh, B. (1992) . EPSP-IPSP interactions in cat visual cortex
studied with in vivo whole-cell patch recording. J. Neurosci., 12(4), 1262-1274.
Madison , D . V. , Lancaster, B., and Nicoll, R. A. (1987) . Voltage clamp analysis of
cholinergic action in the hippocampus. J. Neurosci., 7(3), 733-741.
Mainen, Z. F. and Sejnowski, T. J. (1994). Reliability of spike timing in neocortical
neurons. Science, 268, 1503-6.
McCormick, D. A. (1993). Actions of acetylecholine in the cerebral cortex and
thalamus and implications for function .. Prog. Brain Res., 98, 303- 308.
Nicoll, R. (1988). The coupling of neurotransmitter receptors to ion channels in the
brain. Science, 241, 545-550.
Rieke, F. , Warland, D., de Ruyter van Steveninck, R., and Bialek, W . (1996).
Spikes: Exploring the Neural Code. MIT Press.
Shadlen, M. N. and Newsome, W . T. (1994) . Noise, neural codes and cortical
organization. Current Opinion in Neurobiology, 4, 569-579.
Stevens, C. and Zador , A. (1995) . The enigma of the brain. Current Biology, 5,
1-2.
350 | 132 | 451
THEORY OF SELF-ORGANIZATION OF
CORTICAL MAPS
Shigeru Tanaka
Fundamental Research Laboratories, NEC Corporation
1-1 Miyazaki 4-Chome, Miyamae-ku, Kawasaki, Kanagawa 213, Japan
ABSTRACT
We have mathematically shown that cortical maps in the
primary sensory cortices can be reproduced by using three
hypotheses which have physiological basis and meaning.
Here, our main focus is on ocular.dominance column formation
in the primary visual cortex. Monte Carlo simulations on the
segregation of ipsilateral and contralateral afferent terminals
are carried out. Based on these, we show that almost all the
physiological experimental results concerning the ocular
dominance patterns of cats and monkeys reared under normal
or various abnormal visual conditions can be explained from a
viewpoint of the phase transition phenomena.
ROUGH SKETCH OF OUR THEORY
In order to describe the use-dependent self-organization of neural connections
{Singer,1987 and Frank,1987}, we have proposed a set of coupled equations
involving the electrical activities and neural connection density {Tanaka,
1988}, by using the following physiologically based hypotheses: (1) Modifiable
synapses grow or collapse due to the competition among themselves for some
trophic factors, which are secreted retrogradely from the postsynaptic side to
the presynaptic side. (2) Synapses also sprout or retract according to the
concurrence of presynaptic spike activity and postsynaptic local membrane
depolarization. (3) There already exist lateral connections within the layer,
into which the modifiable nerve fibers are destined to project, before the
synaptic modification begins. Considering this set of equations, we find that
the time scale of electrical activities is much smaller than time course
necessary for synapses to grow or retract. So we can apply the adiabatic
approximation to the equations. Furthermore, we identify the input electrical
activities, i.e., the firing frequency elicited from neurons in the projecting
neuronal layer, with the stochastic process which is specialized by the spatial
correlation function C_{kµ;k'µ'}. Here, k and k' represent the positions of the
neurons in the projecting layer. µ stands for different pathways such as
ipsilateral or contralateral, on-center or off-center, colour specific or
nonspecific and so on. From these approximations, we have a nonlinear
stochastic differential equation for the connection density, which describes a
survival process of synapses within a small region, due to the strong
competition. Therefore, we can look upon an equilibrium solution of this
equation as a set of the Potts spin variables σ_{jk}^µ's {Wu, 1982}. Here, if the
neuron k in the projecting layer sends the axon to the position j in the target
layer, σ_{jk}^µ = 1 and if not, σ_{jk}^µ = 0. The Potts spin variable has the following
property:
If we limit the discussion within such equilibrium solutions, the problem is
reduced to the thermodynamics in the spin system. The details of the
mathematics are not argued here because they are beyond the scope of this
paper {Tanaka}. We find that equilibrium behavior of the modifiable nerve
terminals can be described in terms of thermodynamics in the system in
which Hamiltonian H and fictitious temperature T are given by
where k and Ck11 ;k' 11' are the averaged firing frequency and the correlation
function, respectively. Vii' describes interaction between synapses in the
target layer. q is the ratio of the total averaged membrane potential to the
a veraged membrane potential induced through the modifiable synapses from
the projecting layer. "tc and "ts are the correlation time of the electrical
activities and the time course necessary for synapses to grow or collapse.
APPLICATION TO THE OCULAR DOMINANCE
COLUMN FORMATION
A specific cortical map structure is determined by the choice of the correlation
function and the synaptic interaction function. Now, let us neglect k
dependence of the correlation function and take into account only ipsilateral
and contralateral pathways denoted by µ, for mathematical simplicity. In this
case, we can reduce the Potts spin variable into the Ising spin one through the
following transformation:
where j is the position in the layer 4 of the primary visual cortex, and Sj takes
only + 1 or -1, according to the ipsilateral or contralateral dominance. We
find that this system can be described by the Hamiltonian:

H = −h Σ_j S_j − (J/2) Σ_j Σ_{j'≠j} V_{jj'} S_j S_{j'}        (3)
The first term of eq.(3) reflects the ocular dominance shift, while the second
term is essential to the ocular dominance stripe segregation.
Here, we adopt the following simplified function as V_{jj'}:

V_{jj'} = (q_ex / (π A_ex²)) θ(A_ex − d_{jj'}) − (q_inh / (π A_inh²)) θ(A_inh − d_{jj'}) ,        (4)
where d_{jj'} is the distance between j and j'. A_ex and A_inh are determined by the
extent of excitatory and inhibitory lateral connections, respectively. θ is the
step function. q_ex and q_inh are proportional to the membrane potentials
induced by excitatory and inhibitory neurons {Tanaka}. It is not essential to
the qualitative discussion whether the interaction function is given by the use
of the step function, the Gaussian function, or others.
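The paper does not spell out its Monte Carlo procedure; a minimal single-spin-flip Metropolis sketch of the model defined by eqs. (3) and (4) (illustrative code only, assuming a square grid of spins and using numpy; all names and grid parameters are assumptions) is:

import numpy as np

def make_interaction(n, dx=0.1, q_ex=1.0, q_inh=1.0, a_ex=0.25, a_inh=1.0):
    """Interaction V_jj' of eq. (4) for an n x n grid of spins with spacing dx."""
    xs = np.arange(n) * dx
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    V = (q_ex / (np.pi * a_ex ** 2)) * (d <= a_ex) \
        - (q_inh / (np.pi * a_inh ** 2)) * (d <= a_inh)
    np.fill_diagonal(V, 0.0)
    return V

def metropolis(V, h=0.0, J=1.0, T=0.25, sweeps=200, seed=0):
    """Metropolis dynamics for H = -h*sum_j S_j - (J/2)*sum_jj' V_jj' S_j S_j'."""
    rng = np.random.default_rng(seed)
    n = V.shape[0]
    S = rng.choice([-1, 1], size=n)
    for _ in range(sweeps * n):
        j = rng.integers(n)
        dE = 2.0 * S[j] * (h + J * (V[j] @ S))      # V[j, j] = 0, so no self-interaction term
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            S[j] = -S[j]
    return S.reshape(int(np.sqrt(n)), int(np.sqrt(n)))

Depending on the values chosen for J, h and T, the final configuration shows stripes, blobs or uniform dominance, which is the behaviour explored in the Results section.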
Next, we define η_{+1} and η_{-1} as the average firing frequencies of ipsilateral
and contralateral retinal ganglion cells (RGCs), and ξ_{±1}^v and ξ_{±1}^s as their
fluctuations which originate in the visually stimulated and the spontaneous
firings of RGCs, respectively. These are used to calculate two new
parameters, r and a:
(5)
a=
(6)
r is related to the correlation of firings elicited from the left and right RGCs.
If there are only spontaneous firings, there is no correlation between the left
and right RGCs' firings. On the other hand, in the presence of visual
stimulation, they will correlate, since the two eyes receive almost the same
images in normal animals. a is a function of the imbalance of firings of the left
and right RGCs. Now, J and h in eq.(3) can be expressed in terms of r and a:

J = b_1 ( 1 − r (1 − a²)/(1 + a²) ) ,        (8)

where b_1 is a constant of the order of 1, and b_2 is determined by average
membrane potentials.
Using the above equations, it will now be shown that patterns such as the
ones observed for the ocular dominance column of new-world monkeys and
cats can be explained. The patterns are very much dependent on three
parameters r, a and K, which is the ratio of the membrane potentials (q_inh/q_ex)
induced by the inhibitory and excitatory neurons.
RESULTS AND DISCUSSIONS
In the subsequent analysis by Monte Carlo simulations, we fix the values of
parameters: q_ex = 1.0, A_ex = 0.25, A_inh = 1.0, T = 0.25, b_1 = 1.0, b_2 = 0.1, and
dx = 0.1. dx is the diameter of a small area which is occupied by one spin. In
the computer simulations of Fig. 1, we can see that the stripe patterns become
more segregated as the correlation strength r decreases. The similarity of the
pattern in Fig. 1c to the well-known experimental evidence {Hubel and Wiesel,
1977} is striking. Furthermore, it is known that if the animal has been reared
under the condition where the two optic nerves are electrically stimulated
synchronously, stripes in the primary visual cortex are not formed {Stryker}.
This condition corresponds to r values close to I and again our theory predicts
these experimental results as can be seen in Fig.la. On the contrary, if the
strabismic animal has been reared under the normal condition {Wiesel and
Hubel, 1974}, r is effectively smaller than that of a normal animal. So we
expect that the ocular dominance stripe has very sharp delimitations as it is
observed experimentally. In the case of a binocularly deprived animal {Wiesel
and Hubel, 1974}, i.e., ξ_{+1}^v = ξ_{-1}^v = 0, it is reasonable to expect that the
situation is similar to the strabismic animal.
Figure 1. Ocular dominance patterns given by the computer
simulations in the case of the large inhibitory connections
(K= 1.0) and the balanced activities (a= 0). The correlation
strength r is given in each case: r=0.9 for (a), r=0.6 for (b),
and r= 0.1 for (c).
In the case of a ≠ 0, we can get asymmetric stripe patterns such as one in
Fig. 2a. Since this situation corresponds to the condition of the monocular
deprivation, we can also explain the experimental observation {Hubel et
al., 1977} successfully. There are other patterns seen in Fig. 2b, which we call
blob lattice patterns. The existence of such patterns has not been confirmed
physiologically, as far as we know. However, this theory on the ocular
dominance column formation predicts that the blob lattice patterns will be
found if appropriate conditions, such as the period of the monocular
deprivation, are chosen.

Figure 2. Ocular dominance patterns given by the computer
simulations in the case of the large inhibitory connections
(K=1.0) and the imbalanced activities: a=0.2 for (a) and
a=0.4 for (b). The correlation strength r is given by r=0.1 for
both (a) and (b).
We find that the straightness of the stripe pattern is controlled by the
parameter K. Namely, if K is large, i.e. inhibitory connections are more
effective than excitatory ones, the pattern is straight. However if K is small
the pattern has many branches and ends. This is illustrated in Fig. 3c. We
can get a pattern similar to the ocular dominance pattern of normal cats
{Anderson et al., 1988}, if K is small and r ≈ r_c (Fig. 3b). The meaning of r_c will
be discussed in the following paragraphs. We further get a labyrinth pattern
for r smaller than r_c and the same K. We can think that the K value is specific
to the animal under consideration because of its definition. Therefore, this
theory also predicts that the ocular dominance pattern of the strabismic cat
will be sharply delimited but not a straight stripe, in contrast to the pattern
of the monkey.
Figure 3. Ocular dominance patterns given by the computer
simulations in the case of the small inhibitory connections
(K=0.3) and the balanced activities (a=0). The correlation
strength r is given in each case: r=0.9 for (a), r=0.6 for (b), and
r=0.1 for (c).
Having seen specific examples, let us now discuss the importance of
parameters r and a, which stand for the correlation strength and the
imbalance of firings. According to qualitative difference of patterns obtained
from our simulations, we classify the parameter space (r, !l) into three regions
in Fig.4: In region (S), stripe patterns appear. The left-eye dominance and the
right-eye dominance bands are equal in width, for a=O. On the other hand,
they are not equal for non-zero value. In region (B), patterns are blob lattices.
In region (U), the patterns are uniform and we do not see any spatial
modulation. A uniform pattern whose a val ue is close to 0 is a random
pattern, while if a is close to 1 or -1 either ipsilateral or contralateral nerve
terminals are present. On the horizontal axis, (S) and (U) regions are devided
by the critical point rc. In practice if we define the order parameter as the
Theory of Self-Organization of Cortical Maps
ensemble-averaged amplitude of the dominant Fourier component of spatial
patterns, and the susceptibility as the variance of the amplitude, then we can
observe their singular behavior near r = r c'
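These two quantities are straightforward to estimate numerically. The sketch below, continuing in Python, assumes the pattern is available as a +/-1 array (as in the Monte Carlo sketch above) and that several independent runs are averaged; the normalization of the susceptibility and all names are our own choices, since the paper's exact conventions are not given in this excerpt.

# Hedged sketch: order parameter (ensemble-averaged amplitude of the dominant
# Fourier component) and susceptibility (its variance) from simulated patterns.
import numpy as np

def dominant_amplitude(pattern):
    """Amplitude of the strongest non-DC Fourier component of a 2-D pattern."""
    F = np.abs(np.fft.fft2(pattern)) / pattern.size
    F[0, 0] = 0.0                            # discard the DC (mean) component
    return F.max()

def order_parameter_and_susceptibility(patterns):
    amps = np.array([dominant_amplitude(p) for p in patterns])
    return amps.mean(), amps.var()

# Usage: collect patterns from independent Monte Carlo runs at each value of r,
# then plot both quantities against r to locate the critical point r_c.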
Various conditions under which animals have been reared correspond to positions in
the parameter space of Fig. 4: normal (N), synchronized electrical stimulation
(SES), strabismus (S), binocular deprivation (BD), long-term monocular
deprivation (LMD) and short-term monocular deprivation (SMD). If an
animal is kept under monocular deprivation for a long period, the absolute
value of a is close to 1 and the r value is 0, considering eqs. (5) and (6). For a
short-term monocular deprivation, the corresponding point falls anywhere on the
line from N to LMD, because relaxation from the symmetric stripe pattern to
the open-eye dominant uniform pattern is incomplete. The position on this
line is, therefore, determined by this relaxation period, in which the animal is
kept under the monocular deprivation.
Figure 4. Schematic phase diagram for the pattern of ocular
dominance columns. The parameter space (r, a) is divided into
three regions: (S) stripe region, (B) blob lattice region, and (U)
uniform region. N, SES, S, BD, LMD, and SMD stand for
conditions: normal, synchronized electrical stimulation,
strabismus, binocular deprivation, long-term monocular
deprivation, and short-term monocular deprivation,
respectively. We show only the diagram on the upper half
plane, because the diagram is symmetrical with respect to the
line of a=O.
CONCLUSION
In this report, a new theory has been proposed which is able to explain such
use-dependent self-organization as the ocular dominance column formation.
We have compared the theoretical results with various experimental data and
excellent agreement is observed. We can also explain and predict the self-organizing
processes of other cortical map structures, such as the orientation
column, the retinotopic organization, and so on. Furthermore, the three main
hypotheses of this theory are not confined to the primary visual cortex. This
suggests that the theory will have a wide applicability to the formation of
cortical map structures seen in the somatosensory cortex {Kaas et al.,1983},
the auditory cortex {Knudsen et al.,1987}, and the cerebellum {Ito,1984}.
References
P.A. Anderson, J. Olavarria, R.C. Van Sluyter, J. Neurosci. 8, 2184 (1988).
E. Frank, Trends in Neurosci. 10, 188 (1987).
D.H. Hubel and T.N. Wiesel, Proc. R. Soc. Lond. B 198, 1 (1977).
D.H. Hubel, T.N. Wiesel, S. LeVay, Phil. Trans. R. Soc. Lond. B 278, 131 (1977).
M. Ito, The Cerebellum and Neural Control (Raven Press, 1984).
J.H. Kaas, M.M. Merzenich, H.P. Killackey, Ann. Rev. Neurosci. 6, 325 (1983).
E.I. Knudsen, S. DuLac, S.D. Esterly, Ann. Rev. Neurosci. 10, 41 (1987).
W. Singer, in The Neural and Molecular Bases of Learning (John Wiley &
Sons Ltd., 1987) pp. 301-336.
M.P. Stryker, in Developmental Neurophysiology (Johns Hopkins Press), in
press.
S. Tanaka, The Proceedings of SICE'88, ESS2-5, p. 1069 (1988).
S. Tanaka, to be submitted.
T.N. Wiesel and D.H. Hubel, J. Comp. Neurol. 158, 307 (1974).
F.Y. Wu, Rev. Mod. Phys. 54, 235 (1982).
351 | 1,320 | An Orientation Selective Neural Network
for Pattern Identification in Particle
Detectors
Halina Abramowicz, David Horn, Ury Naftaly, Carmit Sahar-Pikielny
School of Physics and Astronomy, Tel Aviv University
Tel Aviv 69978, Israel
halina@post.tau.ac.il, horn@neuron.tau.ac.il
ury@post.tau.ac.il, carmit@post.tau.ac.il
Abstract
We present an algorithm for identifying linear patterns on a twodimensional lattice based on the concept of an orientation selective
cell, a concept borrowed from neurobiology of vision. Constructing a multi-layered neural network with fixed architecture which
implements orientation selectivity, we define output elements corresponding to different orientations, which allow us to make a selection decision. The algorithm takes into account the granularity
of the lattice as well as the presence of noise and inefficiencies. The
method is applied to a sample of data collected with the ZEUS
detector at HERA in order to identify cosmic muons that leave
a linear pattern of signals in the segmented calorimeter. A two
dimensional representation of the relevant part of the detector is
used. The algorithm performs very well. Given its architecture,
this system becomes a good candidate for fast pattern recognition
in parallel processing devices.
I
Introduction
A typical problem in experiments performed at high energy accelerators aimed at
studying novel effects in the field of Elementary Particle Physics is that of preselecting interesting interactions at as early a stage as possible, in order to keep the
data volume manageable. One class of events that have to be eliminated is due to
cosmic muons that pass all trigger conditions.
The most characteristic feature of cosmic muons is that they leave in the detector
a path of signals aligned along a straight line. The efficiency of pattern recognition
algorithms depends strongly on the granularity with which such a line is probed, on
the level of noise and the response efficiency of a given detector. Yet the efficiency
of a visual scan is fairly independent of those features [1]. This lead us to look for
a new approach through application of ideas from the field of vision.
The main tool that we borrow from the neuronal circuitry of the visual cortex is
the orientation selective simple cell [2]. It is incorporated in the hidden layers of a
feed forward neural network, possessing a predefined receptive field with excitatory
and inhibitory connections. Using these elements we have developed [3] a method
for identifying straight lines of varying slopes and lengths on a grid with limited
resolution. This method is then applied to the problem of identifying cosmic muons
in accelerator data, and compared with other tools.
By using a network with a fixed architecture we deviate from conventional approaches of neural networks in particle physics [4]. One advantage of this approach
is that the number of free parameters is small, and it can, therefore, be determined
using a small data set. The second advantage is the fact that it opens up the possibility of a relatively simple implementation in hardware. This is an important
feature for particle detectors, since high energy physics experiments are expected
to produce in the next decade a flux of data that is higher than present analysis
methods can cope with.
II
Description of the Task
In a two-dimensional representation, the granularity of the rear part of the ZEUS
calorimeter [6] can be emulated roughly by a 23 x 23 lattice of 20 x 20 cm^2 squares.
While such a representation does not use the full information available in the detector, it is sufficient for our study. In our language each cell of this lattice will
be denoted as a pixel. A pixel is activated if the corresponding calorimeter cell is
above a threshold level predetermined by the properties of the detector.
Figure 1: Example of patterns corresponding to a cosmic muon (left), a typical
accelerator event (middle), and an accelerator event that looks like a muon (right),
as seen in a two dimensional projection.
A cosmic muon, depending on its angle of incidence, activates along its linear path
typically from 3 to 25 neighboring pixels anywhere on the 23 x 23 grid. The pattern
of signals generated by accelerator events consists on average of 3 to 8 clusters,
of typically 4 adjacent activated pixels, separated by empty pixels. The clusters
tend to populate the center of the 23 X 23 lattice. Due to inherent dynamics of
the interactions under study, the distribution of clusters is not isotropic. Examples
of events, as seen in the two-dimensional projection in the rear part of the ZEUS
calorimeter, are shown in figure 1.
The lattice discretizes the data and distorts it . Adding conventional noise levels,
the decision of classification of the data into accelerator events and cosmic muon
events is difficult to obtain through automatic means. Yet, it is the feeling of experimentalists dealing with these problems, that any expert can distinguish between
the two cases with high efficiency (identifying a muon as such) and purity (not
misidentifying an accelerator event) . We define our task as developing automatic
means of doing the same.
III
The Orientation Selective Neural Network
Our analysis is based on a network of orientation selective neurons (OSNN) that
will be described in this chapter. We start out with an input layer of pixels on a
two dimensional grid with discrete labeling i = (x, y) of the neuron (pixel), which can
take the values S_i = 1 or 0, depending on whether the pixel is activated or not.
Figure 2: Connectivity patterns for orientation selective cells on the second layer
of the OSNN. From left to right are examples of orientations of 0, π/4 and 5π/8.
Non-zero weights are defined only within a 5 X 5 grid. The dark pixels have weights
of +1, and the grey ones have weights of -1. White pixels have zero weights.
The input is being fed into a second layer that is composed of orientation selective
neurons V_i^a at location i with orientation θ_a, where a belongs to a discrete set of
16 labels, i.e. θ_a = aπ/16. The neuron V_i^a is the analog of a simple cell in the
visual cortex. Its receptive field consists of an array of dimension 5 x 5 centered at
pixel i. Examples of the connectivity, for three different choices of a, are shown in
Fig. 2. The weights take the values of 1, 0 and -1.
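To make the construction concrete, the following Python sketch builds 16 such 5 x 5 kernels and evaluates the second-layer responses on a 23 x 23 input. The exact +1/0/-1 layout of Fig. 2 is not reproduced here: we simply assume +1 weights close to the line through the centre at angle θ_a, -1 weights in a flanking band and 0 elsewhere, so the kernels only approximate the figure; all names are ours, not the paper's.

# Hedged sketch: approximate orientation selective 5x5 kernels and the
# second-layer responses V_i^a for a 23x23 binary input image.
import numpy as np

def oriented_kernel(a, size=5):
    theta = a * np.pi / 16.0
    k = np.zeros((size, size))
    c = (size - 1) / 2.0
    for y in range(size):
        for x in range(size):
            # distance of pixel (y, x) from the line through the centre at angle theta
            d = abs(np.cos(theta) * (y - c) - np.sin(theta) * (x - c))
            if d < 0.5:
                k[y, x] = 1.0          # assumed excitatory core along the line
            elif d < 1.5:
                k[y, x] = -1.0         # assumed inhibitory flank
    return k

kernels = [oriented_kernel(a) for a in range(16)]

def layer2_responses(pixels):
    """pixels: 23x23 binary array.  Returns a 23x23x16 array of V_i^a values."""
    H, W = pixels.shape
    pad = np.pad(pixels, 2)
    V = np.zeros((H, W, 16))
    for a, k in enumerate(kernels):
        for y in range(H):
            for x in range(W):
                V[y, x, a] = np.sum(k * pad[y:y + 5, x:x + 5])
    return V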
The second layer consists then of 23 x 23 x 16 neurons, each of which may be thought
of as one of 16 orientation elements at some (x, y) location of the input layer. Next
we employ a modified Winner Take All (WTA) algorithm, selecting the leading
orientation a_max(i) for which the largest V_i^a is obtained at the given location i.
If we find that several V_i^a at the same location i are close in value to the maximal
one, we allow up to five different V_i^a neurons to remain active at this stage of the
processing, provided they all lie within a sector of a_max ± 2, or θ_max ± π/8. All
other V_i^a are reset to zero. If, however, at a given location i we obtain several
large values of V_i^a that correspond to non-neighboring orientations, all are being
discarded.
The third layer also consists of orientation selective cells. They are constructed
with a receptive field of size 7 x 7, and receive inputs from neurons with the same
orientation on the second layer. The weights on this layer are defined in a similar
fashion to the previous ones , but here negative weights are assigned the value of
-3 , not -1. For linear patterns, the purpose of this layer is to fill in the holes due
to fluctuations in the pixel activation , i.e. complete the lines of same orientation
of the second layer. As before, we keep also here up to five highest values at each
location , following the same WTA procedure as on the second layer.
The fourth layer of the OSNN consists of only 16 components, D_a, each corresponding
to one of the discrete orientations θ_a. For each orientation we calculate
the convolution of the first and third layers, D_a = Σ_i V_i^a S_i. The elements
D_a carry the information about the number of the input pixels that contribute to a
given orientation θ_a. Cosmic muons are characterized by high values of D_a whereas
accelerator events possess low values, as shown in figure 3 below.
The computational complexity of this algorithm is O( n) where n is the number of
pixels , since a constant number of operations is performed on each pixel. There
are basically four free parameters in the algorithm. These are the sizes of the
receptive fields on the second and third layer and the corresponding activation
thresholds. Their values can be tuned for the best performance, however they are
well constrained by the spatial resolution , the noise level in the system and the
activation properties of the input pixels. The size of the receptive field determines
to a large extent the number of orientations allowed to survive in the modified WTA
algorithm.
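Continuing the sketch above, the modified WTA and the fourth-layer outputs can be written as follows. The third layer (hole filling with 7 x 7 receptive fields and -3 inhibitory weights) is omitted for brevity, so D_a is computed directly from the surviving V_i^a, and the activation thresholds are replaced by a simple positivity check; these are simplifications of the procedure described in the text.

# Hedged sketch, continuing the previous one: modified winner-take-all on the
# second layer and the fourth-layer outputs D_a.  Simplified: no third layer,
# no explicit thresholds, and the discard rule for non-neighbouring maxima is
# replaced by restricting survivors to the sector around a_max.
import numpy as np

def modified_wta(V, keep=5):
    """Keep at most `keep` orientations per pixel, all within +/-2 of the best."""
    out = np.zeros_like(V)
    for y in range(V.shape[0]):
        for x in range(V.shape[1]):
            v = V[y, x]
            a_max = int(np.argmax(v))
            if v[a_max] <= 0:
                continue
            cand = [(a, v[a]) for a in range(16)
                    if min((a - a_max) % 16, (a_max - a) % 16) <= 2 and v[a] > 0]
            cand.sort(key=lambda t: -t[1])
            for a, val in cand[:keep]:
                out[y, x, a] = val
    return out

def fourth_layer(pixels, V_surviving):
    """D_a = sum over all pixel locations i of V_i^a * S_i."""
    return np.einsum('yx,yxa->a', pixels.astype(float), V_surviving)

# A cosmic muon is then flagged when max_a D_a is large relative to the number
# of active pixels n_p (the separator of Fig. 3).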
IV
OSNN and a Selection Criterion on the Training Set
The details of the design of the OSNN and the tuning of its parameters were fixed
while training it on a sample of 250 cosmic muons and a similar amount of accelerator events. The sample was obtained by preselection with existing algorithms and
a visual scan as a cross-check.
For cosmic muon events the highest value of D_a, D_max, determines the orientation
of the straight line. In figure 3 we present the correlation between D_max and the
number n_p of activated input pixels for cosmic muon and accelerator events. As
expected, one observes a linear correlation between D_max and n_p for the muons,
while almost no correlation is observed for accelerator events. This allows us to set
a selection criterion defined by the separator in this figure. We quantify the quality
of our selection by quoting the efficiency of properly identifying a cosmic muon for
100% purity, corresponding to no accelerator event misidentified as a muon . In
OSNN-D , which we define according to the separator shown in Fig 3, we obtain
93.0% efficiency on the training set.
On the right hand side of Fig. 3 we present results of a conventional method for
detecting lines on a grid, the Hough transform [7, 8, 9]. This is based on the
analysis of a parameter space describing locations and slopes of straight lines. The
cells of this space with the largest occupation number, N_max, are the analogs of
our D_max. In the figure we show the correlation of N_max with n_p, which allows us
to draw a separator between cosmic muons and accelerator events, leading to an
efficiency of 88% for 100% purity. Although this number is not much lower than the
Figure 3: Left: Correlation between the maximum value of D_a, D_max, and the
number n_p of input pixels for cosmic muon (dots) and accelerator events (open circles).
The dashed line defines a separator such that all events above it correspond
to cosmic muons (100% purity). This selection criterion has 93% efficiency. Right:
Using the Hough Transform method, we compare the values of the largest accumulation cell N_max with n_p and find that the two types of events have different slopes,
thus allowing also the definition of a separator. In this case, the efficiency is 88%.
efficiency of OSNN-D, we note that the difference between the two types of event
distributions is not as significant as in OSNN-D. In the test set, to be discussed in
the next chapter , we will consider 40 ,000 accelerator events contaminated by less
than 100 cosmic muons. Clearly the expected generalization quality of OSNN-D
will be higher than that of the Hough transform. It should of course be noted that
the OSNN is a multi-layer network, whereas the Hough transform method that we
have described is a single-layer operation , i.e. it calculates global characteristics. If
one wishes to employ some quasi-local Hough transform one is naturally led back
to a network that has to resemble our OSNN.
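For comparison, a minimal version of the Hough-transform baseline can be written in a few lines. The binning of the (ρ, θ) parameter space below is our own choice, not the one used in [7, 8, 9] or in the analysis above.

# Hedged sketch: a simple Hough transform for straight lines on a 23x23 binary
# grid.  N_max is the occupancy of the fullest accumulator cell, the quantity
# compared with n_p in Fig. 3 (right).  Bin counts are illustrative choices.
import numpy as np

def hough_nmax(pixels, n_theta=16, n_rho=33):
    ys, xs = np.nonzero(pixels)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(*pixels.shape)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for y, x in zip(ys, xs):
        for t, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)        # signed distance of the line
            r_bin = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
            acc[t, r_bin] += 1
    return acc.max() if ys.size else 0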
V
Training and Testing of OSNN-S
If instead of applying a simple cut we employ an auxiliary neural network to search
for the best classification of events using the OSNN outputs, we obtain still better
results. The auxiliary network has 6 inputs , one hidden layer with 5 nodes and one
output unit. The input consists of a set of five consecutive D_a values centered around
D_max and the total number of activated input pixels, n_p. The cosmic muons are
assigned an output value s = 1 and the accelerator events s = 0. The net is trained
on our sample with error back-propagation. This results in an improved separation
of cosmic muon events from the rest. Whereas in OSNN-D we find a continuum
of cosmic muons throughout the range of D_max, here we obtain a clear bimodal
distribution, as seen in Figure 4. For s ≥ 0.1 no accelerator events are found and
the muons are selected with an efficiency of 94.7%. This selection procedure will be
denoted as OSNN-S.
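A compact sketch of this auxiliary classifier is given below: 6 inputs (five consecutive D_a values around D_max, plus n_p), 5 hidden units and a single sigmoid output, trained with plain back-propagation. The squared-error loss, learning rate, initialisation and input normalisation are our assumptions; the paper only specifies the architecture and the 0/1 targets.

# Hedged sketch: the small auxiliary network (6-5-1) trained by back-propagation.
# Loss, learning rate and initialisation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (5, 6)), np.zeros(5)
W2, b2 = rng.normal(0, 0.5, (1, 5)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    s = sigmoid(W2 @ h + b2)
    return h, s

def train_step(x, target, lr=0.05):
    global W1, b1, W2, b2
    h, s = forward(x)
    err = s - target                      # derivative of the squared error w.r.t. s
    d2 = err * s * (1 - s)                # output delta
    d1 = (W2.T @ d2) * h * (1 - h)        # hidden deltas
    W2 -= lr * np.outer(d2, h); b2 -= lr * d2
    W1 -= lr * np.outer(d1, x); b1 -= lr * d1

# x would be built from the OSNN outputs, e.g. the five D_a values around a_max
# together with n_p (suitably normalised), with target 1 for cosmic muons and 0
# for accelerator events.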
As a test of our method we apply OSNN-S to a sample of 38,606 data events
that passed the standard physics cuts [5] . The distribution of the neural network
output s is presented in Figure 4. It looks very different from the one obtained with
the training sample. Whereas the former consisted of approximately 500 events
distributed equally among accelerator events and cosmic muons, this one contains
mostly accelerator events, with a fraction of a percent of muons. This proportion is
characteristic of physics samples. The vast majority of accelerator events are found
in the first bin, but a long tail extends throughout s. The last bin in s is indeed
dominated by cosmic muons.
We performed a visual scan of all 181 events with s ≥ 0.1 using the full information
from the detector. This allowed us to identify the cosmic-muon events represented
by shaded areas in figure 4. For s ≥ 0.1 we find 55 cosmic-muon events and
123 accelerator events, 55 of which resemble muons on the rear segment of the
calorimeter. The latter, together with the genuine cosmic muons , populate mainly
the region of large s values.
We conclude that our method picked out the cosmic muons from the very large
sample of data, in spite of the fact that it relied just on two-dimensional information from the rear part of the detector. This fact is, however, responsible for
the contamination of the high s region by accelerator events that resemble cosmic
muons. Even with all its limitations, our method reduces the problem of rejecting
cosmic-muon events down to scanning less than one percent of all the events. We
conclude that we have achieved the goal that we set for ourselves, that of replacing
a laborious visual scan by a computer algorithm with similar reliability.
Figure 4: Left: Number of events as a function of the output s of an auxiliary neural
net . Choosing the separator to be s = 0.1 we obtain an efficiency of 94.7% on our
training set. This bimodal distribution holds the promise of better generalization
than the OSNN-D method depicted in Figure 3. Muons are represented by shaded
areas. Right: Distribution of the auxiliary neural network output s obtained with
the OSNN-S selector for the test sample of 38,606 events. The tail of the distribution
of accelerator events leads to 123 accelerator events with s > 0.1, including 55 that
resemble straight lines on the input layer. 55 genuine cosmic muons were identified
in the high s region.
VI
Summary
We have presented an algorithm for identifying linear patterns on a two-dimensional
lattice based on the concept of an orientation selective cell, a concept borrowed
from neurobiology of vision. Constructing a multi-layered neural network with
fixed architecture that implements orientation selectivity, we define output elements
corresponding to different orientations, that allow us to make a selection decision.
The algorithm takes into account the granularity of the lattice as well as the presence
of noise and inefficiencies.
Our feed-forward network has a fixed set of synaptic weights. Hence, although the
number of neurons is very high, the complexity of the system, as determined by the
number of free parameters, is low. This allows us to train our system on a small
data set. We are gratified to see that, nontheless, it generalizes well and performs
excellently on a test sample that is larger by two orders of magnitude.
One may regard our method as a refinement of the Hough transform, since each of
our orientation selective cells acts as a filter of straight lines on a limited grid. The
major difference from conventional Hough transforms is that we perform semi-local
calculations, and proceed in several stages, reflected by the different layers of our
network, before evaluating global parameters.
The task that we have set to ourselves in the application described here is only one
example of problems of pattern recognition that are encountered in the analysis of
particle detectors. Given the large flux of data in these experiments, one is faced
by two requirements: correct identification and fast performance. Using a structure
like our OSNN for data classification, one can naturally meet the speed requirement through its realization in hardware, taking advantage of the basic features of
distributed parallel computation.
Acknowledgements
We are indebted to the ZEUS Collaboration for allowing us to use the sample of
data for this analysis. This work was partly supported by a grant from the Israel
Science Foundation.
References
[1] ZEUS Collab., The ZEUS Detector, Status Report 1993, DESY 1993; M. Derrick et
al., Phys. Lett. B 293 (1992) 465.
[2] D. H. Hubel and T. N. Wiesel, J. Physiol. 195 (1968) 215.
[3] H. Abramowicz, D. Horn, U. Naftaly and C. Sahar-Pikielny, Nuclear Instrum. and
Methods in Phys. Res. A378 (1996) 305.
[4] B. Denby, Neural Computation, 5 (1993) 505.
[5] ZEUS Calorimeter Group, A. Andresen et al., Nucl. Inst. Meth. A 309 (1991) 101.
[6] P. V. Hough, "Methods and means to recognize complex patterns", U.S. patent
3.069.654.
[7] D. H. Ballard, Pattern Recognition 3 (1981) II.
[8] R. O. Duda and P. E. Hart, Commun. ACM. 15 (1972) I.
[9] ZEUS collab., M. Derrick et al., Phys. Lett. B 316 (1993) 412; ZEUS collab., M.
Derrick et al., Zeitschrift f. Physik C 69 (1996) 607-620
352 | 1,321 | Edges are the 'Independent Components' of
Natural Scenes.
Anthony J. Bell and Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute
10010 N. Torrey Pines Road
La Jolla, California 92037
tony@salk.edu, terry@salk.edu
Abstract
Field (1994) has suggested that neurons with line and edge selectivities
found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and Barlow (1989) has reasoned
that such responses should emerge from an unsupervised learning algorithm
that attempts to find a factorial code of independent visual features. We
show here that non-linear 'infomax', when applied to an ensemble of natural scenes, produces sets of visual filters that are localised and oriented.
Some of these filters are Gabor-like and resemble those produced by the
sparseness-maximisation network of Olshausen & Field (1996). In addition,
the outputs of these filters are as independent as possible, since the infomax network is able to perform Independent Components Analysis (ICA).
We compare the resulting ICA filters and their associated basis functions,
with other decorrelating filters produced by Principal Components Analysis
(PCA) and zero-phase whitening filters (ZCA). The ICA filters have more
sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that
these neurons form an information-theoretic co-ordinate system for images.
1
Introduction.
Both the classic experiments of Hubel & Wiesel [8] on neurons in visual cortex, and several decades of theorising about feature detection in vision, have left open the question
most succinctly phrased by Barlow "Why do we have edge detectors?" That is: are there
any coding principles which would predict the formation of localised, oriented receptive
fields? Barlow's answer is that edges are suspicious coincidences in an image. Formalised
information-theoretically, this means that our visual cortical feature detectors might be the
end result of a redundancy reduction process [4, 2], in which the activation of each feature
detector is supposed to be as statistically independent from the others as possible. Such a
'factorial code' potentially involves dependencies of all orders, but most studies [9, 10, 2]
(and many others) have used only the second-order statistics required for decorrelating the
outputs of a set of feature detectors. Yet there are multiple decorrelating solutions, including the 'global' unphysiological Fourier filters produced by PCA, so further constraints are
required.
Field [7] has argued for the importance of sparse, or 'Minimum Entropy', coding [4], in which
each feature detector is activated as rarely as possible. Olshausen & Field demonstrated [12]
that such a sparseness criterion could be used to self-organise localised, oriented receptive
fields.
Here we present similar results using a direct information-theoretic criterion which maximises
the joint entropy of a non-linearly transformed output feature vector [5]. Under certain
conditions, this process will perform Independent Component Analysis (or ICA) which is
equivalent to Barlow's redundancy reduction problem. Since our ICA algorithm, applied to
natural scenes, does produce local edge filters, Barlow's reasoning is vindicated. Our ICA
filters are more sparsely distributed than those of other decorrelating filters, thus supporting
some of the arguments of Field (1994) and helping to explain the results of Olshausen's
network from an information-theoretic point of view.
2
Blind separation of natural images.
A perceptual system is exposed to a series of small image patches, drawn from an ensemble
of larger images. In the linear image synthesis model [12], each image patch, represented
by the vector x, has been formed by the linear combination of N basis functions. The basis
functions form the columns of a fixed matrix, A. The weighting of this linear combination
(which varies with each image) is given by a vector, s. Each component of this vector
has its own associated basis function, and represents an underlying 'cause' of the image.
Thus: x=As. The goal of a perceptual system, in this simplified framework, is to linearly
transform the images, x, with a matrix of filters, W, so that the resulting vector: u= Wx,
recovers the underlying causes, s, possibly in a different order, and rescaled. For the sake
of argument, we will define the ordering and scaling of the causes so that W = A^{-1}. But
what should determine their form? If we choose decorrelation, so that ⟨uu^T⟩ = I, then the
solution for W must satisfy:
W^T W = ⟨xx^T⟩^{-1}    (1)
There are several ways to constrain the solution to this:
(1) Principal Components Analysis W_P (PCA), is the Orthogonal (global) solution
[WW^T = I]. The PCA solution to Eq.(1) is W_P = D^{-1/2}E^T, where D is the diagonal
matrix of eigenvalues, and E is the matrix whose columns are the eigenvectors. The filters
(rows of W_P) are orthogonal, are thus the same as the PCA basis functions, and are
Example PCA filters are shown in Fig.1a.
(2) Zero-phase Components Analysis W_Z (ZCA), is the Symmetrical (local) solution
[WW^T = W^2]. The ZCA solution to Eq.(1) is W_Z = ⟨xx^T⟩^{-1/2} (matrix square root).
ZCA is the polar opposite of PCA. It produces local (centre-surround type) whitening fil-
Edges are the "Independent Components" of Natural Scenes
833
ters, which are ordered according to the phase spectrum of the image. That is, each filter
whitens a given pixel in the image, preserving the spatial arrangement of the image and
flattening its frequency (amplitude) spectrum. W z is related to the transforms described
by Atick & Redlich [3]. Example ZCA filters and basis functions are shown in Fig.1b.
(3) Independent Components Analysis W_I (ICA), is the Factorised (semi-local) solution
[f_u(u) = Π_i f_{u_i}(u_i)]. Please see [5] for full references. The 'infomax' solution we describe
here is related to the approaches in [5, 1, 6].
As we will show, in Section 5, ICA on natural images produces decorrelating filters which
are sensitive to both phase (locality) and spatial frequency information, just as in transforms
involving oriented Gabor functions or wavelets. Example ICA filters are shown in Fig.1d
and their corresponding basis functions are shown in Fig.1e.
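The PCA and ZCA transforms of items (1) and (2) above can be computed directly from the data covariance. A brief sketch follows; the small regularisation term eps is our addition for numerical stability and is not part of the definitions above.

# Hedged sketch: PCA and ZCA whitening matrices for a data matrix X whose
# columns are image patches.  eps is an added regulariser, not in the paper.
import numpy as np

def pca_zca_filters(X, eps=1e-8):
    X = X - X.mean(axis=1, keepdims=True)          # remove the mean
    C = X @ X.T / X.shape[1]                       # covariance <x x^T>
    d, E = np.linalg.eigh(C)                       # d: eigenvalues, E: eigenvectors
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + eps))
    W_P = D_inv_sqrt @ E.T                         # PCA whitening, W_P = D^{-1/2} E^T
    W_Z = E @ D_inv_sqrt @ E.T                     # ZCA whitening, <x x^T>^{-1/2}
    return W_P, W_Z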
3
An ICA algorithm.
It is important to recognise two differences between finding an ICA solution, WI, and other
decorrelation methods. (1) there may be no ICA solution, and (2) a given ICA algorithm
may not find the solution even if it exists, since there are approximations involved. In
these senses, ICA is different from PCA and ZCA, and cannot be calculated analytically, for
example, from second order statistics (the covariance matrix), except in the gaussian case.
The approach which we developed in [5] (see there for further references to ICA) was to
maximise by stochastic gradient ascent the joint entropy, H[g(u)], of the linear transform
squashed by a sigmoidal function, g. When the non-linear function is the same (up to scaling and shifting) as the cumulative density functions (c.d.f.s) of the underlying independent
components, it can be shown (Nadal & Parga 1995) that such a non-linear 'infomax' procedure also minimises the mutual information between the Ui, exactly what is required for
ICA. In most cases, however, we must pick a non-linearity, g, without any detailed knowledge
of the probability density functions (p.d.f.s) of the underlying independent components. In
cases where the p.d.f.s are super-gaussian (meaning they are peakier and longer-tailed than
a gaussian, having kurtosis greater than 0), we have repeatedly observed, using the logistic
or tanh nonlinearities, that maximisation of H[g(u)] still leads to ICA solutions, when they
exist, as with our experiments on speech signal separation [5]. Although the infomax algorithm is described here as an ICA algorithm, a fuller understanding needs to be developed
of under exactly what conditions it may fail to converge to an ICA solution.
The basic infomax algorithm changes weights according to the entropy gradient. Defining y_i = g(u_i) to be the sigmoidally transformed output variables, the stochastic gradient
learning rule is:
ΔW ∝ ∂H(y)/∂W = ∂/∂W E[ln |J|] = [W^T]^{-1} + ŷ x^T    (2)
In this, E denotes expected value, y = [g(u_1) ... g(u_N)]^T, and |J| is the absolute value of the
determinant of the Jacobian matrix: J = det[∂y_i/∂x_j]_{ij}, and ŷ = [ŷ_1 ... ŷ_N]^T, the elements
of which depend on the nonlinearity according to: ŷ_i = (∂/∂y_i)(∂y_i/∂u_i).
Amari, Cichocki & Yang [1] have proposed a modification of this rule which utilises the
natural gradient rather than the absolute gradient of H(y). The natural gradient exists for
objective functions which are functions of matrices, as in this case, and is the same as the
relative gradient concept developed by Cardoso & Laheld (1996). It amounts to multiplying
the absolute gradient by W^T W, giving, in our case, the following altered version of Eq.(2):
ΔW ∝ (I + ŷ u^T) W    (3)
This rule has the twin advantages over Eq.(2) of avoiding the matrix inverse, and of converging several orders of magnitude more quickly, for data, x, that is not prewhitened. The
speedup is explained by the fact that convergence is no longer dependent on the conditioning of the underlying basis function matrix, A. Writing Eq.(3) for one weight gives
ΔW_ij ∝ W_ij + ŷ_i Σ_k u_k W_kj. This rule is 'almost local', requiring a backwards pass.
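A compact implementation of the rule of Eq. (3) with the logistic nonlinearity, for which ŷ = 1 - 2g(u), is sketched below. The toy Laplacian sources, mixing matrix, batch size and learning rate are illustrative choices only, not the settings used on the image data.

# Hedged sketch: natural-gradient infomax ICA with a logistic nonlinearity.
# Source model, mixing matrix, batch size and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 20000
S = rng.laplace(size=(N, T))                 # toy super-gaussian sources
A = rng.normal(size=(N, N))                  # unknown mixing matrix
X = A @ S                                    # observed mixtures

W = np.eye(N)
lr, batch = 0.01, 50
for epoch in range(30):
    perm = rng.permutation(T)
    for start in range(0, T, batch):
        x = X[:, perm[start:start + batch]]
        u = W @ x
        y_hat = 1.0 - 2.0 / (1.0 + np.exp(-u))          # y_hat = 1 - 2*logistic(u)
        # Eq. (3): delta W proportional to (I + y_hat u^T) W, averaged over the batch
        W += lr * (np.eye(N) + (y_hat @ u.T) / batch) @ W
# Rows of W @ A should now be close to a permuted, rescaled identity matrix.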
Figure 1: Selected decorrelating filters
and their basis functions extracted from
the natural scene data. Each type of
decorrelating filter yielded 144 12x12 filters, of which we only display a subset here. Each column contains filters
or basis functions of a particular type,
and each of the rows has a number relating to which row of the filter or basis
function matrix is displayed. (a) PCA
(W p): The 1st, 5th, 7th etc Principal
Components, showing increasing spatial
frequency. There is no need to show basis
functions and filters separately here since
for PCA, they are the same thing. (b)
ZCA (W_Z): The first 6 entries in this
column show the one-pixel wide centre-surround filter which whitens while preserving the phase spectrum. All are identical, but shifted. The lower 6 entries
(37, 60, ...) show the basis functions instead,
which are the columns of the inverse of
the W_Z matrix. (c) W: the weights
learnt by the ICA network trained on
W_Z-whitened data, showing (in descending order) the DC filter, localised oriented
filters, and localised checkerboard filters.
(d) W_I: The corresponding ICA filters,
calculated according to W_I = WW_Z,
looking like whitened versions of the W filters. (e) A: the corresponding basis
functions, or columns of W_I^{-1}. These
are the patterns which optimally stimulate their corresponding ICA filters, while
not stimulating any other ICA filter, so
that W_I A = I.
Edges are the "Independent Components" of Natural Scenes
4
835
Methods.
We took four natural scenes involving trees, leaves etc., and converted them to greyscale
values between 0 and 255. A training set, {x}, was then generated of 17,595 12x12 samples
from the images. This was 'sphered' by subtracting the mean and multiplying by twice
the local symmetrical (zero-phase) whitening filter: {x} ← 2W_Z({x} - ⟨x⟩). This removes
both first and second order statistics from the data, and makes the covariance matrix of x
equal to 4I. This is an appropriately scaled starting point for further training since infomax
(Eq.(3)) on raw data, with the logistic function, y_i = (1 + e^{-u_i})^{-1}, produces a u-vector
which approximately satisfies ⟨uu^T⟩ = 4I. Therefore, by prewhitening x in this way, we can
ensure that the subsequent transformation, u = Wx, to be learnt should approximate an
orthonormal matrix (rotation without scaling), roughly satisfying the relation W^T W = I.
The matrix, W, is then initialised to the identity matrix, and trained using the logistic
function version of Eq.(3), in which ŷ_i = 1 - 2y_i.
performed, at the end of each of which, the order of the data vectors was permuted to avoid
cyclical behaviour in the learning. In each sweep, the weights were updated in batches of 50
presentations. The learning rate (proportionality constant in Eq.(3)) followed 21 sweeps at
0.001, and 3 sweeps at each of 0.0005,0.0002 and 0.0001, taking 2 hours running MATLAB
on a Sparc-20 machine, though a reasonable result for 12x12 filters can be achieved in 30
minutes. To verify that the result was not affected by the starting condition of W = I, the
training was repeated with several randomly initialised weight matrices, and also on data
that was not prewhitened. The results were qualitatively similar, though convergence was
much slower.
The full ICA transform from the raw image was calculated as the product of the sphering
(ZCA) matrix and the learnt matrix: W_I = WW_Z. The basis function matrix, A, was
calculated as W_I^{-1}. A PCA matrix, W_P, was calculated. The original (unsphered) data
was then transformed by all three decorrelating transforms, and for each the kurtosis of each
of the 144 filters was calculated. Then the mean kurtosis for each filter type (ICA, PCA,
ZCA) was calculated, averaging over all filters and input data.
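The preprocessing and evaluation steps just described can be summarised in a short sketch. Here `images` stands in for the four grey-scale scenes, `pca_zca_filters` refers to the whitening sketch given earlier, and the patch-sampling scheme is an assumption; only the patch count, patch size and the factor of two in the sphering follow the text.

# Hedged sketch of the preprocessing and evaluation pipeline: sample 12x12
# patches, sphere them with twice the ZCA filter, and measure output kurtosis.
import numpy as np

def sample_patches(images, n=17595, size=12, seed=0):
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - size)
        x = rng.integers(img.shape[1] - size)
        patches.append(img[y:y + size, x:x + size].ravel())
    return np.array(patches).T                     # shape (144, n)

def sphere(X, W_Z):
    return 2.0 * W_Z @ (X - X.mean(axis=1, keepdims=True))

def mean_kurtosis(W, X):
    """Mean excess kurtosis of the filter outputs u = W x over all filters
    (excess kurtosis = kurtosis - 3; the paper's normalisation may differ)."""
    U = W @ X
    U = (U - U.mean(axis=1, keepdims=True)) / U.std(axis=1, keepdims=True)
    return float(np.mean(np.mean(U**4, axis=1) - 3.0))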
5
Results.
The filters and basis functions resulting from training on natural scenes are displayed in
Fig.1 and Fig.2. Fig.1 displays example filters and basis functions of each type. The PCA
filters, Fig.1a, are spatially global and ordered in frequency. The ZCA filters and basis
functions are spatially local and ordered in phase. The ICA filters, whether trained on the
ZCA-whitened images, Fig.le, or the original images, Fig.1d, are semi-local filters, most with
a specific orientation preference. The basis functions, Fig.1e, calculated from the Fig.1d ICA
filters, are not local, and naturally have the spatial frequency characteristics of the original
images. Basis functions calculated from Fig.1d (as with PCA filters) are the same as the
corresponding filters since the matrix W (as with W p) is orthogonal.
In order to show the full variety of ICA filters, Fig.2 shows, with lower resolution, all 144
filters in the matrix W, in reverse order of the vector-lengths of the filters, so that the
filters corresponding to higher-variance independent components appear at the top. The
general result is that ICA filters are localised and mostly oriented. Unlike the basis functions
displayed in Olshausen & Field (1996), they do not cover a broad range of spatial frequencies.
However, the appropriate comparison to make is between the ICA basis functions, and the
basis functions in Olshausen & Field's Figure 4. The ICA basis functions in our Fig.1e are
oriented, but not localised and therefore it is difficult to observe any multiscale properties.
However, when we ran the ICA algorithm on Olshausen's images, which were preprocessed
with a whitening/low pass filter, our algorithm yielded basis functions which were localised
multiscale Gabor patches qualitively similar to those in Olshausen's Figure 4. Part of the
difference in our results is therefore attributable to different preprocessing techniques.
The distributions (image histograms) produced by PCA, ZCA and ICA are generally double-exponential (e^{-|u_i|}), or 'sparse', meaning peaky with a long tail, when compared to a gaussian, as predicted by Field [7]. The log histograms are seen to be roughly linear across
5 orders of magnitude. The histogram for the ICA filters, however, departs slightly from
linearity, being more peaked, and having a longer tail than the ZCA and PCA histograms.
This spreading of the tail signals the greater sparseness of the outputs of the ICA filters,
and this is reflected in a calculated kurtosis measure of 10.04 for ICA, compared to 3.74 for
PCA, and 4.5 for ZCA.
In conclusion, these simulations show that the filters found by the ICA algorithm of Eq.(3)
with a logistic non-linearity are localised, oriented, and produce outputs distributions of
very high sparseness. It is notable that this is achieved through an information theoretic
learning rule which (1) has no noise model, (2) is sensitive to higher-order statistics (spatial
coincidences), (3) is non-Hebbian (it is closer to anti-Hebbian) and (4) is simple enough to
be almost locally implementable. Many other levels of higher-order invariance (translation,
rotation, scaling, lighting) exist in natural scenes. It will be interesting to see if informationtheoretic techniques can be extended to address these invariances.
Acknowledgements
This work emerged through many extremely useful discussions with Bruno Olshausen and
David Field. We are very grateful to them, and also to Paul Viola and Barak Pearlmutter.
The work was supported by the Howard Hughes Medical Institute.
References
[1] Amari S. Cichocki A. & Yang H.H. 1996. A new learning algorithm for blind signal
separation, Advances in Neural Information Processing Systems 8, MIT press.
[2] Atick J.J. 1992. Could information theory provide an ecological theory of sensory processing? Network 3, 213-251
[3] Atick J .J . & Redlich A.N. 1993. Convergent algorithm for sensory receptive field development, Neural Computation 5, 45-60
[4] Barlow H.B. 1989. Unsupervised learning, Neural Computation 1, 295-311
[5] Bell A.J. & Sejnowski T.J. 1995. An information maximization approach to blind separation and blind deconvolution, Neural Computation, 7, 1129-1159
[6] Cardoso J-F. & Laheld B. 1996. Equivariant adaptive source separation, IEEE Trans.
on Signal Proc., Dec.1996.
[7] Field D.J. 1994. What is the goal of sensory coding? Neural Computation 6, 559-601
[8] Hubel D.H. & Wiesel T.N. 1968. Receptive fields and functional architecture of monkey
striate cortex, J. Physiol., 195: 215-244
Edges are the "Independent Components " of Natural Scenes
837
[9] Linsker R. 1988. Self-organization in a perceptual network. Computer, 21, 105-117
[10] Miller K.D. 1988. Correlation-based models of neural development, in Neuroscience and
Connectionist Theory, M. Gluck & D. Rumelhart, eds., 267-353, L.Erlbaum, NJ
[11] Nadal J-P. & Parga N. 1994. Non-linear neurons in the low noise limit: a factorial code
maximises information transfer. Network, 5,565-581.
[12] Olshausen B.A. & Field D.J. 1996. Natural image statistics and efficient coding, Network: Computation in Neural Systems, 7, 2.
Figure 2: The matrix of 144 filters obtained by training on ZCA-whitened natural images.
Each filter is a row of the matrix W, and they are ordered left-to-right, top-to-bottom in
reverse order of the length of the filter vectors. In a rough characterisation, and more-or-less
in order of appearance, the filters consist of one DC filter (top left), 106 oriented filters (of
which 35 were diagonal, 37 were vertical and 34 horizontal), and 37 localised checkerboard
patterns. The diagonal filters are longer than the vertical and horizontal due to the bias
induced by having square, rather than circular, receptive fields.
353 | 1,322 | A spike based learning neuron in analog
VLSI
Philipp Häfliger
Institute of Neuroinformatics
ETHZ/UNIZ
Gloriastrasse 32
CH-8006 Zurich
Switzerland
e-mail: haftiger@neuroinf.ethz.ch
tel: ++41 1 257 26 84
Misha Mahowald
Institute of Neuroinformatics
ETHZ/UNIZ
Gloriastrasse 32
CH-8006 Zurich
Switzerland
e-mail: misha@neuroinf.ethz.ch
tel: ++41 1 257 26 84
Lloyd Watts
Arithmos, Inc.
2730 San Tomas Expressway, Suite 210
Santa Clara, CA 95051-0952
USA
e-mail: lloyd@arithmos.com
tel: 408 982 4490, x219
Abstract
Many popular learning rules are formulated in terms of continuous, analog inputs and outputs. Biological systems, however, use
action potentials, which are digital-amplitude events that encode
analog information in the inter-event interval. Action-potential
representations are now being used to advantage in neuromorphic
VLSI systems as well. We report on a simple learning rule, based
on the Riccati equation described by Kohonen [1], modified for
action-potential neuronal outputs. We demonstrate this learning
rule in an analog VLSI chip that uses volatile capacitive storage for
synaptic weights. We show that our time-dependent learning rule
is sufficient to achieve approximate weight normalization and can
detect temporal correlations in spike trains.
1 INTRODUCTION
It is an ongoing debate how information in the nervous system is encoded and carried
between neurons. In many subsystems of the brain it is now believed that it is done
by the exact timing of spikes. Furthermore spike signals on VLSI chips allow the
use of address-event busses to solve the problem of the large connectivity in neural
networks [3, 4]. For these reasons our artificial neuron and others [2] use spike signals
to communicate. Additionally the weight updates at the synapses are determined
by the relative timing of presynaptic and postsynaptic spikes, a mechanism that
has recently been discovered to operate in cortical synapses [5, 7, 6].
Weight normalization is a useful property of learning rules. In order to perform the
normalization, some information about the whole weight vector must be available
at every synapse. We use the neuron's output spikes (The neuron's output is the
product of the weight and the input vector), which retrogradely propagate through
the dendrites to the synapses (as has been observed in biological neurons [5]). In
our model approximate normalization is an implicit property of the learning rule.
2 THE LEARNING RULE
Figure 1: A snapshot of the simulation variables involved at one synapse (traces: presynaptic spikes, correlation signal, synaptic weight, postsynaptic spikes), with τ = 0.838.
The core of the learning rule is a local 'correlation signal' c at every synapse. It
records the 'history' of presynaptic spikes. It is incremented by 1 with every presynaptic spike and decays in time with time constant τ:

$$c(t_{m,0}) = 0, \qquad c(t_{m,n}) = e^{-(t_{m,n} - t_{m,n-1})/\tau}\, c(t_{m,n-1}) + 1 \quad (n > 0) \qquad (1)$$
$t_{m,0}$ is the time of the m'th postsynaptic spike and $t_{m,n}$ (n > 0) is the time of the
n'th presynaptic spike after the m'th postsynaptic spike. The weight changes when
the cell fires an action potential:
$$w(t_{m,0}) = w(t_{m-1,0}) + \alpha\, e^{-(t_{m,0} - t_{m-1,s})/\tau}\, c(t_{m-1,s}) - \beta\, w(t_{m-1,0}), \qquad s = \max\{v : t_{m-1,v} \le t_{m,0}\} \qquad (2)$$
where w is the weight at this synapse and $t_{m-1,s}$ is the time of the last event (presynaptic
or postsynaptic spike) before the m'th postsynaptic spike. α and β are parameters
influencing learning speed and weight vector normalization (see (5)).
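As a concrete illustration, the following sketch simulates equations (1) and (2) for a single model neuron driven by Poisson input spike trains. It is not the authors' code: the input statistics, the parameter values and the crude integrate-and-fire output rule are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, T_sim = 1e-3, 50.0                 # time step and simulated duration (s)
n_syn, rate = 4, 20.0                  # number of synapses, mean input rate (Hz)
tau, alpha, beta = 0.05, 0.01, 0.02    # assumed values for tau, alpha, beta
theta = 1.5                            # threshold of a toy integrate-and-fire output unit

w = np.full(n_syn, 0.5)                # synaptic weights
c = np.zeros(n_syn)                    # correlation signals c, eq. (1)
last_event = np.zeros(n_syn)           # time of the last event seen by each synapse
v = 0.0

for step in range(int(T_sim / dt)):
    t = step * dt
    pre = rng.random(n_syn) < rate * dt            # Poisson presynaptic spikes
    # eq. (1): decay c since the previous event, then add 1 for each input spike
    c[pre] = c[pre] * np.exp(-(t - last_event[pre]) / tau) + 1.0
    last_event[pre] = t
    v += float(w @ pre)                            # simple integration of weighted input
    if v >= theta:                                 # postsynaptic spike at t_{m,0}
        # eq. (2): strengthen by the decayed correlation signal, decay toward zero
        w += alpha * np.exp(-(t - last_event) / tau) * c - beta * w
        c[:] = 0.0                                 # c restarts at 0 after every output spike
        last_event[:] = t
        v = 0.0

print("final weights:", w, "|w| =", np.linalg.norm(w))  # |w| stays below sqrt(alpha/beta)
```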
Our learning rule is designed to react to temporal correlations between spikes in
the input signals. However, to show the normalizing of the weights we analyze its
behavior by making some simplifying assumptions on the input and output signals;
e.g. the intervals of the presynaptic and the postsynaptic spike train are Poisson
distributed and there is no correlation between single spikes. Therefore we can
represent the signals by their instantaneous average frequencies o and f. Now the
simplified learning rule can be written as:

$$\frac{d}{dt}\,\mathbf{w} = \alpha\, l(o)\, \mathbf{f} - \beta\, o\, \mathbf{w} \qquad (3)$$
l(o) represents the average percentage to which the correlation signal is reduced
between weight updates (output spikes). So when the neuron's average firing rate
fulfills o ≫ 1/τ, one can approximate l(o) ≈ 1. (3) is thus reduced to the Riccati
equation described by Kohonen [1]. This rule would not be Hebbian, but normalizes
the weight vector (see (5)). Note that if the correlation signal does not decay, then
our rule matches exactly the Riccati equation. We will further refer to it as the
Modified Riccati Rule (MRR). Whereas if o ≪ 1/τ then l(o) ≈ oτ, which is a
Hebbian learning rule also described in [1].
If we assume that the spiking mechanism preserves $o = \mathbf{w}^T \mathbf{f}$ and insert it in (3), it
follows for the equilibrium state:

$$\|\mathbf{w}\| = \sqrt{l(o)\,\frac{\alpha}{\beta}} \qquad (5)$$

Since $l(o) < 1$ the weight vector will never be longer than $\sqrt{\alpha/\beta}$.
This property also
holds when the simplifying assumptions are removed. The vector will always be
smaller, as it is with no decay of the correlation signals, since the decay only affects
the incrementing part of the rule.
Matters get much more complicated with the removal of the assumption of the
pre- and postsynaptic trains being independently Poisson distributed. With an
integrate-and-fire neuron for instance, or if there exist correlations between spikes
of the input trains, it is no longer possible to express what happens in terms of rate
coding only (with f and o). (3) is still valid as an approximation but temporal
relationships between pre- and postsynaptic spikes become important. Presynaptic
spikes immediately followed by an action potential will have the strongest increasing
effect on the synapse's weight.
3 IMPLEMENTATION IN ANALOG VLSI
We have implemented a learning rule in a neuron circuit fabricated in a 2.0 μm
CMOS process. This neuron is a preliminary design that conforms only approximately to the MRR. The neuron uses an integrate-and-fire mechanism to generate
action potentials (Figure 2).
Figure 2: Integrate-and-fire neuron. The soma capacitor holds the somatic membrane voltage. This voltage is compared to a threshold thresh with a differential
pair. When it crosses this threshold it gets pulled up through the mirrored current
from the differential pair. This same current gets also mirrored to the right and
starts to pull up a second leaky capacitor (setback) through a small W/L transistor,
so this voltage rises slowly. This capacitor voltage finally opens a transistor that
pulls soma back to ground where it restarts integrating the incoming current. The
parameters tonic+ and tonic- are used to add or subtract a constant current to
the soma capacitor. tref allows the spike-width to be changed.
Not shown, but also part of the neuron, are two non-learning synapses: one excitatory and one inhibitory. Each of three learning synapses contains a storage
capacitor for the synaptic weight and for the correlation signal (Figure 3).
The correlation signal c is simplified to a binary variable in this implementation.
When an input spike occurs, the correlation signal is set to 1. It is set to 0 whenever
the neuron produces an output-spike or after a fixed time-period (T in (7)) if there
is no other input spike:
$$c(t_{m,0}) = 0, \qquad c(t_{m,n}) = 1 \quad (n > 0,\ t_{m,n} \le t_{m+1,0}) \qquad (6)$$
This approximation unfortunately tends to eliminate differences between highly
active inputs and weaker inputs. Nevertheless the weight changes with every output
spike:
Figure 3: The CMOS learning-synapse incorporates the learning mechanism. The
weight capacitor holds the weight, the carr capacitor stores the correlation signal
representation. The magnitude of the weight increment and decrement are computed by a differential pair (upper left w50). These currents are mirrored to the
synaptic weight and gated by digital switches encoding the state of the correlation
signal and of the somatic action potential. The correlation signal reset is mediated by a leakage transistor, decayin, which has a tonic value, but is increased
dramatically when the output neuron fires.
$$w(t_{m,0}) = \begin{cases} \text{weight increment} & \text{if } c(t_{m-1,s}) = 1 \ \text{and}\ t_{m,0} - t_{m-1,s} < T \\ \text{weight decrement} & \text{otherwise} \end{cases} \qquad (7)$$
w is the weight on one synapse, c is the correlation signal of that synapse, and α is
a parameter that controls how fast the weight changes. (See the previous section
for a description of $t_{m,n}$.) The weight, w50, is the equilibrium value of the synaptic
weight when the occurrence of an input spike is fifty percent correlated with the
occurrence of an output spike. This implementation differs from the Riccati rule in
that either the weight increment or the weight decrement, but not both, are executed
upon each output spike. Also, the weight increment is a function of the synaptic
weight. The circuit was implemented this way to try and achieve an equilibrium
value for the synaptic weight equal to the fraction of the time that the input neuron
fired relative to the times the output neuron fired. This is the correct equilibrium
value for the synaptic weight in the Riccati rule. The evolution of a synaptic weight
is depicted in Figure 4.
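A brief sketch of the binary-correlation variant described by (6) and (7), again an illustrative assumption rather than the chip's actual circuit behaviour: on each output spike the weight is incremented if an input spike occurred recently and decremented otherwise, with the increment shrinking and the decrement growing as the weight rises, so the equilibrium tracks the input/output coincidence fraction.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha = 0.05          # assumed learning-rate parameter
p_coincide = 0.3      # assumed fraction of output spikes preceded by an input spike within T
w = 0.1

for _ in range(2000):                         # iterate over postsynaptic spikes
    if rng.random() < p_coincide:             # correlation signal c = 1: increment branch of (7)
        w += alpha * (1.0 - w)                # increment is a function of the weight
    else:                                     # c = 0: decrement branch of (7)
        w -= alpha * w                        # decrement grows with the weight

print(w)  # settles near p_coincide; at 50% coincidence it would settle at w50 = 0.5
```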
The synaptic weight vector normalization in this implementation is accurate only
when the assumptions of the design are met. These assumptions are that there is
one or fewer input spikes per synapse for every output spike. This assumption is
easier to meet when there are many synapses formed with the neuron, so that spikes
from multiple inputs combine to drive the cell to threshold. Since we have only
three synapses, this approximation is usually violated. Nevertheless, the weights
compete with one another and therefore the length of the weight vector is limited.
Competition between synaptic weights occurs because if one weight is stronger, it
causes the output neuron to spike and this suppresses the other input that has not
fired. Future revision of the chip will conform more closely to the MRR.
Figure 4: A snapshot of the learning behavior of a single VLSI synapse: The top
trace is the neuron output (1 V/division), the upper middle trace is the synaptic weight (lower voltage means a stronger synaptic weight) (25 mV/division), the
lower middle trace is a representation of the correlation signal (1 V/division) (it has
inverted sense too) and the bottom trace is the presynaptic activity (1 V/division).
The weight changes only when an output spike occurs. The timeout of the correlation signal is realized with a decay and a threshold. If the correlation signal is above
threshold, the weight is strengthened. If the signal has decayed below threshold at
the time of an output spike, the weight is weakened. The magnitude of the change
of the weight is a function of the absolute magnitude of the weight. This weight
was weaker than W50, so the increments are bigger than the decrements.
4 TEMPORAL CORRELATION IN INPUT SPIKE TRAINS
Figure 5 illustrates the ability of our learning rule to detect temporal correlations in
spike trains. A simulated neuron strengthens those two synapses that receive 40%
coincident spikes, although all four synapses get the same average spike frequencies.
5 DISCUSSION
Learning rules that make use of temporal correlations in their spike inputs/outputs
provide biologically relevant mechanisms of synapse modification [5, 7, 6]. Analog
VLSI implementations allow such models to operate in real time. We plan to develop
such analog VLSI neurons using floating gates for weight storage and an addressevent bus for interneuronal connections. These could then be used in realtime
applications in adaptive 'neuromorphic' systems.
Acknowledgments
We thank the following organizations for their support: SPP Neuroinformatik des
Schweizerischen Nationalfonds, Centre Suisse d'Electronique et de Microtechnique,
U.S. Office of Naval Research and the Gatsby Charitable Foundation.
Figure 5: In this simulation we use a neuron with four synapses. All of them get
input trains of the same average frequency (20Hz). Two of those input trains are
the result of independent Poisson processes (synapses 3 and 4), the other two are the
combination of two Poisson processes (synapses 1 and 2): One that is independent of
any other (12Hz) and one that is shared by the two with slightly different time delays
(8Hz): Synapse 1 gets those coincident spikes 0.01 seconds earlier than synapse 2.
Synapse 2 gets stronger because when it together with synapse 1 triggered an action
potential, it was the last synapse being active before the postsynaptic spike. The
parameters were: α = 0.004, β = 0.02, τ = 11 ms.
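A sketch of how input spike trains like those in the Figure 5 simulation could be generated: two independent 20 Hz Poisson trains, and two trains built from an independent 12 Hz process plus a shared 8 Hz process delivered with a 10 ms relative delay. Only the rates and the delay are taken from the caption; the construction itself is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T_sim = 1e-3, 700.0                      # 1 ms bins; duration assumed from the plotted run

def poisson_train(rate):
    """Binary spike train with the given mean rate (Hz)."""
    return rng.random(int(T_sim / dt)) < rate * dt

shared = poisson_train(8.0)                  # 8 Hz process shared by synapses 1 and 2
syn1 = poisson_train(12.0) | shared          # synapse 1: independent 12 Hz plus shared spikes
syn2 = poisson_train(12.0) | np.roll(shared, 10)   # synapse 2: shared spikes arrive 10 ms later
syn3 = poisson_train(20.0)                   # synapses 3 and 4: independent 20 Hz trains
syn4 = poisson_train(20.0)

for name, s in [("syn1", syn1), ("syn2", syn2), ("syn3", syn3), ("syn4", syn4)]:
    print(name, "rate ~", s.mean() / dt, "Hz")
```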
References
[1] Teuvo Kohonen. Self-Organization and Associative Memory. Springer, Berlin,
1984.
[2] D.K. Ferry, L.A. Akers and R.O. Grondin. Synthetic neural systems in the 1990s.
An Introduction to Neural and Electronic Networks, Academic Press (Zornetzer,
Davis, Lau, McKenna), pages 359-387, 1995.
[3] J. Lazzaro, J. Wawrzynek, M. Mahowald, M. Sivilotti, and D. Gillespie. Silicon
auditory processors as computer peripherals. IEEE Trans. Neural Networks,
4:523-528, 1993.
[4] A. Mortara and E. A. Vittoz. A communication architecture tailored for analog
VLSI artificial neural networks: intrinsic performance and limitations. IEEE
Translation on Neural Networks, 5:459-466, 1994.
[5] G. J. Stuart and B. Sakmann. Active propagation of somatic action potentials
into neocortical pyramidal cell dendrites. Nature, 367:600, 1994.
[6] M. V. Tsodyks and H. Markram. Redistribution of synaptic efficacy between
neocortical pyramidal neurons. Nature, 382:807-810, 1996.
[7] R. Yuste and W. Denk. Dendritic spines as basic functional units of neuronal
integration. Nature, 375:682-684, 1995.
354 | 1,323 | NeuroScale: Novel Topographic Feature
Extraction using RBF Networks
David Lowe
D.Lowe@aston.ac.uk
Michael E. Tipping
M.E.Tipping@aston.ac.uk
Neural Computing Research Group
Aston University, Aston Triangle, Birmingham B4 7ET, UK
http://www.ncrg.aston.ac.uk/
Abstract
Dimension-reducing feature extraction neural network techniques
which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising
maps. Recently, we introduced a novel dimension-reducing feature
extraction process, which is also topographic, based upon a Radial
Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model
order complexity and other smoothing factors such as the kernel
widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration
of this property and give a theoretical justification for the apparent
'self-regularising' behaviour of the 'NEUROSCALE' architecture.
1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation
Recently an important class of topographic neural network based feature extraction
approaches, which can be related to the traditional statistical methods of Sammon
Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have
been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping,
1996). These novel alternatives to Kohonen-like approaches for topographic feature
extraction possess several interesting properties. For instance, the NEUROSCALE
architecture has the empirically observed property that the generalisation performance does not seem to depend critically on model order complexity, contrary to
intuition based upon knowledge of its supervised counterparts. This paper presents
evidence for their 'self-regularising' behaviour and provides an explanation in terms
of the curvature of the trained models.
We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear
topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe
and Tipping, 1996). We seek a dimension-reducing, topographic transformation of
data for the purposes of visualisation and analysis. By 'topographic', we imply that
the geometric structure of the data be optimally preserved in the transformation,
and the embodiment of this constraint is that the inter-point distances in the feature
space should correspond as closely as possible to those distances in the data space.
The implementation of this principle by a neural network is very simple. A Radial
Basis Function (RBF) neural network is utilised to predict the coordinates of the
data point in the transformed feature space. The locations of the feature points are
indirectly determined by adjusting the weights of the network. The transformation
is determined by optimising the network parameters in order to minimise a suitable
error measure that embodies the topographic principle.
The specific details of this alternative approach are as follows. Given an m-dimensional input space of N data points $x_q$, an n-dimensional feature space of
points $y_q$ is generated such that the relative positions of the feature space points
minimise the error, or 'STRESS', term:
$$E = \sum_{p} \sum_{q>p} (d^*_{qp} - d_{qp})^2, \qquad (1)$$

where the $d^*_{qp}$ are the inter-point Euclidean distances in the data space, $d^*_{qp} = \sqrt{(x_q - x_p)^T(x_q - x_p)}$, and the $d_{qp}$ are the corresponding distances in the feature space, $d_{qp} = \sqrt{(y_q - y_p)^T(y_q - y_p)}$.
The points y are generated by the RBF, given the data points as input. That is,
$y_q = f(x_q; W)$, where f is the nonlinear transformation effected by the RBF with
parameters (weights and any kernel smoothing factors) W. The distances in the
feature space may thus be given by $d_{qp} = \| f(x_q) - f(x_p) \|$, and so more explicitly
by

$$d_{qp} = \sqrt{\sum_{l} \Big( \sum_{k} w_{lk}\, [\phi_k(x_q) - \phi_k(x_p)] \Big)^2}, \qquad (2)$$

where $\phi_k(\cdot)$ are the basis functions, $\mu_k$ are the centres of those functions, which are
fixed, and $w_{lk}$ are the weights from the basis functions to the output.
The topographic nature of the transformation is imposed by the STRESS term which
attempts to match the inter-point Euclidean distances in the feature space with
those in the input space. This mapping is relatively supervised because there is no
specific target for each Yq; only a relative measure of target separation between each
Yq, Yp pair is provided. In this form it does not take account of any additional information (for example, class labels) that might be associated with the data points,
but is determined strictly by their spatial distribution. However, the approach may
be extended to incorporate the use of extra 'subjective' information which may be
used to influence the transformation and permits the extraction of 'enhanced', more
informative, feature spaces (Lowe and Tipping, 1996).
Combining equations (1) and (2) and differentiating with respect to the weights in
the network allows the partial derivatives of the STRESS, ∂E/∂w_lk, to be derived
for each pattern pair. These may be accumulated over the entire pattern set and
the weights adjusted by an iterative procedure to minimise the STRESS term E.
Note that the objective function for the RBF is no longer quadratic, and so a
standard analytic matrix-inversion method for fixing the final layer weights cannot
be employed.
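To make the procedure concrete, here is a small sketch of the relative-supervision training loop: a Gaussian RBF maps data to two dimensions and the output-layer weights are adjusted by gradient descent on the STRESS of equation (1). The data, kernel width, learning rate and plain batch gradient descent are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(45, 4))                        # toy data: 45 points in 4 dimensions
centres = X[rng.choice(len(X), 15, replace=False)]  # 15 fixed RBF centres
sigma = 1.0                                         # assumed kernel width

def design(Z):
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))           # Phi: N x K basis activations

Phi = design(X)
W = rng.normal(scale=0.1, size=(Phi.shape[1], 2))   # output weights -> 2-D feature space
D_star = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # data-space distances d*_qp

for epoch in range(200):
    Y = Phi @ W                                              # feature-space points y_q
    diff = Y[:, None] - Y[None, :]
    D = np.linalg.norm(diff, axis=-1)                        # feature-space distances d_qp
    stress = np.sum(np.triu(D_star - D, 1) ** 2)             # eq. (1)
    # dE/dy_q: every pair (q, p) pulls or pushes y_q along (y_q - y_p)
    with np.errstate(divide="ignore", invalid="ignore"):
        coef = np.where(D > 0, -2 * (D_star - D) / D, 0.0)
    dY = (coef[:, :, None] * diff).sum(axis=1)
    W -= 1e-3 * Phi.T @ dY                                   # chain rule through Y = Phi W

print("final STRESS:", stress)
```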
We refer to this overall procedure as ' NEUROSCALE'. Although any universal approximator may be exploited within NEUROSCALE, using a Radial Basis Function
network allows more theoretical analysis of the resulting behaviour, despite the fact
that we have lost the usual linearity advantages of the RBF because of the STRESS
measure. A schematic of the NEUROSCALE model is given in figure 1, and illustrates
the role of the RBF in transforming the data space to the feature space.
Figure 1: The NEUROSCALE architecture. (Diagram: data space mapped by the RBF into the feature space, with the error measure computed between the two spaces.)
2 Generalisation
In a supervised learning context, generalisation performance deteriorates for overcomplex networks as 'overfitting' occurs. By contrast, it is an interesting empirical
observation that the generalisation performance of NEUROSCALE, and related models, is largely insensitive to excessive model complexity. This applies both to the
number of centres used in the RBF and in the kernel smoothing factors which themselves may be viewed as regularising hyperparameters in a feed-forward supervised
situation.
This insensitivity may be illustrated by Figure 2, which shows the training and
test set performances on the IRIS data (for 5-45 basis functions trained and tested
on 45 separate samples). To within acceptable deviations, the training and test set
STRESS values are approximately constant. This behaviour is counter-intuitive when
compared with research on feed forward networks trained according to supervised
approaches. We have observed this general trend on a variety of diverse real world
problems, and it is not peculiar to the IRIS data.
Figure 2: Training and test errors for NEUROSCALE Radial Basis Functions with
various numbers of basis functions. Training errors are on the left, test errors are
on the right.
There are two fundamental causes of this observed behaviour. Firstly, we may
derive significant insight into the necessary form of the functional transformation
independent of the data. Secondly, given this prior functional knowledge, there
is an appropriate regularising component implicitly incorporated in the training
algorithm outlined in the previous section.
2.1 Smoothness and Topographic Transformations
For a supervised problem, in the absence of any explicit prior information, the
smoothness of the network function must be determined by the data, typically necessitating the setting of regularising hyperparameters to counter overfitting behaviour.
In the case of the distance-preserving transformation effected by NEUROSCALE, an
understanding of the necessary smoothness may be deduced a priori.
Consider a point $x_q$ in input space and a nearby test point $x_p = x_q + \epsilon_{pq}$, where $\epsilon_{pq}$
is an arbitrary displacement vector. Optimum generalisation demands that the
distance between the corresponding image points $y_q$ and $y_p$ should thus be $\|\epsilon_{pq}\|$.
Considering the Taylor expansions around the point $y_q$ we find

$$\|y_p - y_q\|^2 = \sum_{l=1}^{n} (\epsilon_{pq}^T g_{ql})^2 + O(\epsilon^4) = \epsilon_{pq}^T \Big( \sum_{l=1}^{n} g_{ql}\, g_{ql}^T \Big) \epsilon_{pq} + O(\epsilon^4) = \epsilon_{pq}^T G_q\, \epsilon_{pq} + O(\epsilon^4), \qquad (3)$$
where the matrix $G_q = \sum_{l=1}^{n} g_{ql}\, g_{ql}^T$ and $g_{ql}$ is the gradient vector
$(\partial y_l(q)/\partial x_1, \ldots, \partial y_l(q)/\partial x_n)^T$ evaluated at $x = x_q$. For structure preservation
the corresponding distances in input and output spaces need to be retained for all
values of $\epsilon_{pq}$: $\|y_p - y_q\|^2 = \epsilon^T \epsilon$, and so $G_q = I$, with the requirement that second- and higher-order terms must vanish. In particular, note that measures of curvature
proportional to $(\partial^2 y_l(q)/\partial x_i^2)^2$ should vanish. In general, for dimension reduction,
we cannot ensure that exact structure preservation is obtained since the rank of $G_q$
is necessarily less than n and hence can never equate to the identity matrix. However, when minimising STRESS we are locally attempting to minimise the residual
$\| I - G_q \|$, which is achieved when all the vectors $\epsilon_{pq}$ of interest lie within the range
of $G_q$.
2.2 The Training Mechanism
An important feature of this class of topographic transformations is that the STRESS
measure is invariant under arbitrary rotations and transformations of the output
configuration. The algorithm outlined previously tends towards those configurations
that generally reduce the sum-of-squared weight values (Tipping, 1996). This is
achieved without any explicit addition of regularisation, but rather it is a feature
of the relative supervision algorithm.
The effect of this reduction in weight magnitudes on the smoothness of the network
transformation may be observed by monitoring an explicit quantitative measure of
total curvature:

$$\mathcal{C} = \sum_{q} \sum_{i} \sum_{l} \left( \frac{\partial^2 y_l(q)}{\partial x_i^2} \right)^2 \qquad (4)$$

where q ranges over the patterns, i over the input dimensions and l over the output
dimensions.
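The total-curvature measure can be monitored numerically; the sketch below estimates it for an RBF mapping by central finite differences along each input dimension (ignoring cross-derivative terms). The step size and the reuse of names from the previous sketch are assumptions for illustration.

```python
import numpy as np

def total_curvature(X, forward, h=1e-3):
    """Sum over patterns, input dims and output dims of squared second derivatives.

    `forward` maps an (N, m) array of inputs to (N, n) feature-space points,
    e.g. forward = lambda Z: design(Z) @ W from the previous sketch.
    """
    Y0 = forward(X)
    total = 0.0
    for i in range(X.shape[1]):
        e = np.zeros(X.shape[1])
        e[i] = h
        second = (forward(X + e) - 2 * Y0 + forward(X - e)) / h ** 2   # d2y/dx_i2
        total += np.sum(second ** 2)
    return total

# e.g. total_curvature(X, lambda Z: design(Z) @ W)
```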
Figure 3 depicts the total curvature of NEUROSCALE as a function of the training
iterations on the IRIS subset data for a variety of model complexities. As predicted,
curvature generally decreases during the training process, with the final value independent of the model complexity. Theoretical insight into this phenomenon is given
in (Tipping, 1996).
This behaviour is highly relevant, given the analysis of the previous subsection. That
the training algorithm implicitly reduces the sum-of-squares weight values implies
that there is a weight decay process occurring with an associated smoothing effect.
While there is no control over the magnitude of this element, it was shown that
for good generalisation, the optimal transformation should be maximally smooth.
This self-regularisation operates differently to regularisers normally introduced to
stabilise the ill-posed problems of supervised neural network models. In the latter
case the regulariser acts to oppose the effect of reducing the error on the training
set. In NEUROSCALE the implicit weight decay operates with the minimisation of
STRESS since the aim is to 'fit' the relative input positions exactly.
That there are many RBF networks which satisfy a given STRESS level may be
seen by training a network a posteriori on a predetermined Sammon mapping of a
data set by a supervised approach (since then the targets are known explicitly). In
general, such a posteriori trained networks do not have a low curvature and hence
Figure 3: Curvature against time during the training of a NEUROSCALE
mapping on the Iris data, for networks with 15, 30 and 45 basis functions.
do not show as good a generalisation behaviour as networks trained according to
the relative supervision approach. The method by which NEUROSCALE reduces
curvature is to select, automatically, RBF networks with minimum-norm weights.
This is an inherent property of the training algorithm to reduce the STRESS criterion.
2.3 An example
An effective example of the ease of production of good generalising transformations
is given by the following experiment. A synthetic data set comprised four Gaussian
clusters, each with spherical variance of 0.5, located in four dimensions with centres
at $(x_c, 0, 0, 0)$: $x_c \in \{1, 2, 3, 4\}$. A NEUROSCALE transformation to two dimensions
was trained using the relative supervision approach, using the three clusters at
$x_c = 1, 3$ and 4. The network was then tested on the entire dataset, with the fourth
cluster included, and the projections are given in Figure 4 below.
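A minimal sketch of how the synthetic data for this experiment could be generated, assuming (as an illustration) equal cluster sizes; the variance and centre locations are those stated above.

```python
import numpy as np

rng = np.random.default_rng(3)

def cluster(xc, n=50):
    # spherical Gaussian cluster with variance 0.5 centred at (xc, 0, 0, 0)
    return rng.normal([xc, 0.0, 0.0, 0.0], np.sqrt(0.5), size=(n, 4))

train = np.vstack([cluster(c) for c in (1, 3, 4)])   # clusters used for training
test = np.vstack([train, cluster(2)])                # all four clusters for testing
```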
The apparently excellent generalisation to test data not sampled from the same
distribution as the training data is a function of the inherent smoothing within
the training process and also reflects the fact that the test data lay approximately
within the range of the matrices G q determined during training.
3 Conclusion
We have described NEUROSCALE, a parameterised RBF Sammon mapping approach
for topographic feature extraction. The NEUROSCALE method may be viewed as a
technique which is closely related to Sammon mappings and nonlinear metric MDS,
with the added flexibility of producing a generalising transformation.
A theoretical justification has been provided for the empirical observation that the
generalisation performance is not affected by model order complexity issues. This
counter-intuitive result is based on arguments of necessary transformation smooth-
(Figure 4 panel titles: 'NeuroScale trained on 3 linear clusters'; 'NeuroScale tested on 4 linear clusters'.)
Figure 4: Training and test projections of the four clusters. Training STRESS was
0.00515 and test STRESS 0.00532.
ness coupled with the apparent self-regularising aspects of NEUROSCALE. The relative supervision training algorithm implicitly minimises a measure of curvature by
incorporating an automatic 'weight decay' effect which favours solutions generated
by networks with small overall weights.
Acknowledgements
This work was supported in part under the EPSRC contract GR/J75425, "Novel
Developments in Learning Theory for Neural Networks".
References
Kruskal, J. B. (1964). Multidimensional scaling by optimising goodness of fit to a
nonmetric hypothesis. Psychometrika, 29(1):1-27.
Lowe, D. (1993). Novel 'topographic' nonlinear feature extraction using radial basis
functions for concentration coding in the 'artificial nose'. In 3rd IEE International Conference on Artificial Neural Networks. London: IEE.
Lowe, D. and Tipping, M. E. (1996). Feed-forward neural networks and topographic
mappings for exploratory data analysis. Neural Computing and Applications,
4:83-95.
Mao, J. and Jain, A. K. (1995). Artificial neural networks for feature extraction
and multivariate data projection. IEEE Transactions on Neural Networks,
6(2):296-317.
Sammon, J. W. (1969). A nonlinear mapping for data structure analysis. IEEE
Transactions on Computers, C-18(5):401-409.
Tipping, M. E. (1996). Topographic Mappings and Feed-Forward Neural Networks.
PhD thesis, Aston University, Aston Street, Birmingham B4 7ET, UK. Available from http://www.ncrg.aston.ac.uk/.
Webb, A. R. (1995). Multidimensional scaling by iterative majorisation using radial
basis functions. Pattern Recognition, 28(5):753-759.
355 | 1,324 | Probabilistic Interpretation of Population
Codes
Richard S. Zemel
zemel@u.arizona.edu
Peter Dayan
dayan@ai.mit.edu
Alexandre Pouget
alex@salk.edu
Abstract
We present a theoretical framework for population codes which
generalizes naturally to the important case where the population
provides information about a whole probability distribution over
an underlying quantity rather than just a single value. We use
the framework to analyze two existing models, and to suggest and
evaluate a third model for encoding such probability distributions.
1 Introduction
Population codes, where information is represented in the activities of whole populations of units, are ubiquitous in the brain. There has been substantial work on
how animals should and/or actually do extract information about the underlying
encoded quantity.5,3,11,9,12 With the exception of Anderson,1 this work has concentrated on the case of extracting a single value for this quantity. We study ways
of characterizing the joint activity of a population as coding a whole probability
distribution over the underlying quantity.
Two examples motivate this paper: place cells in the hippocampus of freely moving
rats that fire when the animal is at a particular part of an environment,8 and cells in
area MT of monkeys firing to a random moving dot stimulus. 7 Treating the activity
of such populations of cells as reporting a single value of their underlying variables
is inadequate if there is (a) insufficient information to be sure (eg if a rat can be
uncertain as to whether it is in place $x_A$ or $x_B$ then perhaps place cells for both
locations should fire); or (b) if multiple values underlie the input, as in the whole
distribution of moving random dots in the motion display. Our aim is to capture the
computational power of representing a probability distribution over the underlying
parameters. 6
RSZ is at University of Arizona, Tucson, AZ 85721; PD is at MIT, Cambridge, MA
02139; AP is at Georgetown University, Washington, DC 20007. This work was funded by
McDonnell-Pew, NIH, AFOSR and startup funds from all three institutions.
In this paper, we provide a general statistical framework for population codes, use
it to understand existing methods for coding probability distributions and also to
generate a novel method. We evaluate the methods on some example tasks.
2 Population Code Interpretations
The starting point for almost all work on neural population codes is the neurophysiological finding that many neurons respond to particular variable( s) underlying a
stimulus according to a unimodal tuning function such as a Gaussian. This characterizes cells near the sensory periphery and also cells that report the results of
more complex processing, including receiving information from groups of cells that
themselves have these tuning properties (in MT, for instance). Following Zemel
& Hinton's13 analysis, we distinguish two spaces: the explicit space which consists
of the activities r = {r_i} of the cells in the population, and a (typically low dimensional) implicit space which contains the underlying information X that the
population encodes and in which they are tuned. All processing on the basis of the
activities r has to be referred to the implicit space, but it itself plays no explicit
role in determining activities.
Figure 1 illustrates our framework. At the top are the measured activities of a population of cells. There are two key operations. Encoding: What is the relationship
between the activities r of the cells and the underlying quantity in the world X
that is represented? Decoding: What information about the quantity X can be
extracted from the activities? Since neurons are generally noisy, it is often convenient to characterize encoding (operations A and B) in a probabilistic way, by
specifying P[r|X]. The simplest models make a further assumption of conditional independence of the different units given the underlying quantity, P[r|X] = Π_i P[r_i|X],
although others characterize the degree of correlation between the units. If the encoding model is true, then a Bayesian decoding model specifies that the information
r carries about X can be characterized precisely as: P[X|r] ∝ P[r|X]P[X], where
P[X] is the prior distribution about X and the constant of proportionality is set
so that ∫_X P[X|r] dX = 1. Note that starting with a deterministic quantity X in
the world, encoding in the firing rates r, and decoding it (operation C) results in
a probability distribution over X. This uncertainty arises from the stochasticity
represented by P[r|X]. Given a loss function, we could then go on to extract a
single value from this distribution (operation D).
We attack the common assumption that X is a single value of some variable x, eg
the single position of a rat in an environment, or the single coherent direction of
motion of a set of dots in a direction discrimination task. This does not capture
the subtleties of certain experiments, such as those in which rats can be made to be
uncertain about their position, or in which one direction of motion predominates yet
there are several simultaneous motion directions. 7 Here, the natural characterization
of X is actually a whole probability distribution P[x|w] over the value of the variable
x (perhaps plus extra information about the number of dots), where w represents
all the available information. We can now cast two existing classes of proposals for
population codes in terms of this framework.
The Poisson Model
Under the Poisson encoding model, the quantity X encoded is indeed one particular
value which we will call x, and the activities of the individual units are independent,
Figure 1: Left: encoding maps X from the world through tuning functions (A) into mean activities (B), leading to Top: observed activities r. We assume complete knowledge of the variables
governing systematic changes to the activities of the cells. Here X is a single value x* in the space
of underlying variables. Right: decoding extracts P[X|r] (C); a single value can be picked (D)
from this distribution given a loss function.
with the terms $P[r_i|x] = e^{-f_i(x)}\, (f_i(x))^{r_i} / r_i!$. The activity $r_i$ could, for example, be
the number of spikes the cell emits in a fixed time interval following the stimulus
onset. A typical form for the tuning function $f_i(x)$ is Gaussian, $f_i(x) \propto e^{-(x - x_i)^2 / 2\sigma^2}$,
about a preferred value $x_i$ for cell i. The Poisson decoding model is:3,11,9,12

$$\log P[x|\mathbf{r}] = K + \sum_i r_i \log f_i(x), \qquad (1)$$

where K is a constant with respect to x.
Although simple, the Poisson model makes the assumption criticized above,
that X is just a single value x. We argued for a characterization of the quantity X
in the world that the activities of the cells encode as now P[x|w]. We describe below
a method of encoding that takes exactly this definition of X. However, wouldn't
P[x|r] from Equation 1 be good enough? Not if the $f_i(x)$ are Gaussian, since

$$\log P[x|\mathbf{r}] = K' - \frac{1}{2\sigma^2} \Big(\sum_i r_i\Big) \left( x - \frac{\sum_i r_i x_i}{\sum_i r_i} \right)^2,$$

completing the square, implying that P[x|r] is Gaussian, and therefore inevitably
unimodal. Worse, the width of this distribution goes down with $\sum_i r_i$, making it,
in most practical cases, a close approximation to a delta function.
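A small numerical illustration of this point, under assumed parameter values: activities are drawn from the standard Poisson encoding model with Gaussian tuning curves, and the posterior of Equation 1 is evaluated on a grid. As the total spike count grows, the posterior collapses toward a delta function at a single x.

```python
import numpy as np

rng = np.random.default_rng(0)

x_grid = np.linspace(-10, 10, 401)
dx = x_grid[1] - x_grid[0]
x_pref = np.linspace(-10, 10, 50)             # preferred values x_i of 50 units
sigma, gain, x_true = 0.3, 20.0, 2.0          # assumed tuning width, peak rate, stimulus

def tuning(x):
    """f_i(x) for scalar or column-vector x; broadcasts against the preferred values."""
    return gain * np.exp(-(x - x_pref) ** 2 / (2 * sigma ** 2))

r = rng.poisson(tuning(x_true))               # standard Poisson encoding of one value x_true

# Equation (1): log P[x|r] = K + sum_i r_i log f_i(x), evaluated on the grid
logp = (r * np.log(np.maximum(tuning(x_grid[:, None]), 1e-12))).sum(axis=1)
post = np.exp(logp - logp.max())
post /= post.sum() * dx                       # normalise to a density on the grid

mean = np.sum(post * x_grid) * dx
sd = np.sqrt(np.sum(post * (x_grid - mean) ** 2) * dx)
print(f"decoded mean ~ {mean:.3f}, s.d. ~ {sd:.4f} "
      f"(compare sigma / sqrt(sum_i r_i) = {sigma / np.sqrt(r.sum()):.4f})")
```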
The KDE Model
Anderson1,2 set out to represent whole probability distributions over X rather than
just single values. Activities r represent the distribution $P_r(x)$ through a linear combination of basis functions $\psi_i(x)$, i.e. $P_r(x) = \sum_i r'_i\, \psi_i(x)$, where the $r'_i$ are normalized
such that $P_r(x)$ is a probability distribution. The kernel functions $\psi_i(x)$ are not
the tuning functions Ji(x) of the cells that would commonly be measured in an
experiment. They need have no neural instantiation; instead, they form part of the
interpretive structure for the population code. If the $\psi_i(x)$ are probability distributions, and so are positive, then the range of spatial frequencies in P[x|w] that they
can reproduce in $P_r(x)$ is likely to be severely limited.
In terms of our framework, the KDE model specifies the method of decoding, and
makes encoding its corollary. Evaluating KDE requires some choice of encoding: representing P[x|w] by $P_r(x)$ through appropriate r. One way to encode is to use
the Kullback-Leibler divergence as a measure of the discrepancy between P[x|w] and
$\sum_i r'_i \psi_i(x)$ and use the expectation-maximization (EM) algorithm to fit the $r'_i$,
treating them as mixing proportions in a mixture model.4 This relies on the $\{\psi_i(x)\}$ being probability distributions themselves. The projection method1 is a one-shot linear
filtering based alternative using the L2 distance. The $r_i$ are computed as a projection
of P[x|w] onto tuning functions $f_i(x)$ that are calculated from the $\psi_j(x)$:
$$r_i = \int_x P[x|w]\, f_i(x)\, dx, \qquad f_i(x) = \sum_j A^{-1}_{ij}\, \psi_j(x), \qquad A_{ij} = \int_x \psi_i(x)\, \psi_j(x)\, dx \qquad (2)$$
The $f_i(x)$ are likely to need regularizing,1 particularly if the $\psi_i(x)$ overlap substantially.
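For illustration, a sketch of the projection form of the KDE model under assumed settings (Gaussian kernels on a discretised x grid): A_ij and the projection filters f_i are computed from the kernels, activities are obtained by projecting a target distribution as in Equation 2, and a linear decode Σ_i r_i ψ_i(x) reconstructs it.

```python
import numpy as np

x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]
centres = np.linspace(-10, 10, 50)
sigma = 0.3                                    # kernel width used in Section 4

# Kernel functions psi_i(x) = N(x_i, sigma), discretised on the grid
psi = np.exp(-(x[None, :] - centres[:, None]) ** 2 / (2 * sigma ** 2))
psi /= psi.sum(axis=1, keepdims=True) * dx     # each kernel integrates to 1

A = psi @ psi.T * dx                           # A_ij = integral of psi_i psi_j
f = np.linalg.solve(A, psi)                    # f_i(x) = sum_j A^-1_ij psi_j(x); may need regularising

# Target distribution P[x|w]: the broad bimodal mixture used in Section 4
target = 0.5 * np.exp(-(x + 2) ** 2 / 2) + 0.5 * np.exp(-(x - 2) ** 2 / 2)
target /= target.sum() * dx

r = f @ target * dx                            # r_i = integral of P[x|w] f_i(x), eq. (2)
recon = r @ psi                                # linear decode: sum_i r_i psi_i(x)

print("reconstruction error:", np.sum((recon - target) ** 2) * dx)
```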
3 The Extended Poisson Model
The KDE model is likely to have difficulty capturing in $P_r(x)$ probability distributions P[x|w] that include high frequencies, such as delta functions. Conversely, the
standard Poisson model decodes almost any pattern of activities r into something
that rapidly approaches a delta function as the activities increase. Is there any
middle ground?
We extend the standard Poisson encoding model to allow the recorded activities r
to depend on general P[x|w], having Poisson statistics with mean:

$$\langle r_i \rangle = \int_x P[x|w]\, f_i(x)\, dx. \qquad (3)$$
This equation is identical to that for the KDE model (Equation 2), except that
variability is built into the Poisson statistics, and decoding is now required to be
the Bayesian inverse of encoding. Note that since $r_i$ depends stochastically on
P[x|w], the full Bayesian inverse will specify a distribution P[P[x|w] | r] over possible
distributions. We summarize this by an approximation to its most likely member:
we perform an approximate form of maximum likelihood, not in the value of x, but
in distributions over x. We approximate P[x|w] as a piece-wise constant histogram
which takes the value $\phi_j$ in $(x_j, x_{j+1}]$, and $f_i(x)$ by a piece-wise constant histogram
that takes the values $f_{ij}$ in $(x_j, x_{j+1}]$. Generally, the maximum a posteriori estimate
for $\{\phi_j\}$ can be shown to be derived by maximizing:
$$\sum_i \Big[ r_i \log \Big( \sum_j f_{ij}\, \phi_j \Big) - \sum_j f_{ij}\, \phi_j \Big] - \frac{1}{2\epsilon} \sum_j (\phi_{j+1} - \phi_j)^2, \qquad (4)$$

where $\epsilon$ is the variance of a smoothness prior. We use a form of EM to maximize
the likelihood and adopt the crude approximation of averaging neighboring values
Extended Poisson. Encode: ⟨r_i⟩ = h[∫_x P[x|w] f_i(x) dx], with f_i(x) = R_max N(x_i, σ). Decode: P_r(x) chosen to maximize L, with ∫_x P_r(x) f_i(x) dx ≈ Σ_j φ_j f_ij. Likelihood/Error: L = log P[{φ_j}|{r_i}] ≈ Σ_i r_i log f_i; G = Σ_i r_i log(r_i / f_i).

KDE (Projection). Encode: ⟨r_i⟩ = h[R_max ∫_x P[x|w] f_i(x) dx], with f_i(x) = Σ_j A⁻¹_ij ψ_j(x) and A_ij = ∫_x ψ_i(x) ψ_j(x) dx. Decode: P_r(x) = Σ_i r_i ψ_i(x). Error: E = ∫_x [P_r(x) − P[x|w]]² dx.

KDE (EM). Encode: r_i chosen to maximize L, with ⟨r_i⟩ = h[R_max r'_i] and r'_i = r_i / Σ_j r_j. Decode: P_r(x) = Σ_i r'_i ψ_i(x). Likelihood/Error: L = ∫_x P[x|w] log P_r(x) dx; G = ∫_x P[x|w] log(P[x|w]/P_r(x)) dx.

Table 1: A summary of the key operations with respect to the framework of the
interpretation methods compared here. h(·) is a rounding operator to ensure integer
firing rates, and ψ_i(x) = N(x_i, σ) are the kernel functions for the KDE method.
of $\phi_j$ on successive iterations. By comparison with the linear decoding of the KDE
method, Equation 4 offers a non-linear way of combining a set of activities $\{r_i\}$ to
give a probability distribution $P_r(x)$ over the underlying variable x. The computational complexities of Equation 4 are irrelevant, since decoding is only an implicit
operation that the system need never actually perform.
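A sketch of the non-linear decode: starting from activities generated by Equation 3, the histogram values φ_j are fitted by projected gradient ascent on the objective of Equation 4. The use of gradient ascent (rather than the EM variant the authors use), the crude renormalisation step, the step size and the smoothness weight are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(-10, 10, 200)                 # histogram bin centres for phi_j
dx = x[1] - x[0]
centres = np.linspace(-10, 10, 50)
sigma, gain = 0.3, 20.0

F = gain * np.exp(-(centres[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))   # f_ij

# Target P[x|w]: narrow bimodal mixture, encoded via eq. (3) with Poisson noise
target = (np.exp(-(x + 2) ** 2 / (2 * 0.2 ** 2)) +
          np.exp(-(x - 2) ** 2 / (2 * 0.2 ** 2)))
target /= target.sum() * dx
r = rng.poisson(F @ target * dx)              # <r_i> = integral of P[x|w] f_i(x)

phi = np.full_like(x, 1.0 / (len(x) * dx))    # start from a flat histogram
eps, lr = 10.0, 5e-4                          # assumed smoothness variance and step size

for _ in range(5000):
    mean_rate = F @ phi * dx                  # predicted mean activities under phi
    grad = F.T @ (r / np.maximum(mean_rate, 1e-12) - 1.0) * dx   # Poisson term of eq. (4)
    grad += np.gradient(np.gradient(phi)) / eps                  # crude discrete smoothness term
    phi = np.maximum(phi + lr * grad, 0.0)    # keep phi non-negative
    phi /= phi.sum() * dx                     # renormalise to a density

print("reconstruction error:", np.sum((phi - target) ** 2) * dx)
```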
4 Comparing the Models
We illustrate the various models by showing the faithfulness with which they can
represent two bimodal distributions. We used σ = 0.3 for the kernel functions
(KDE) and the tuning functions (extended Poisson model) and used 50 units whose
$x_i$ were spaced evenly in the range x = [-10, 10]. Table 1 summarizes the three
methods.
Figure 2a shows the decoded version of a mixture of two broad Gaussians,
1/2 N[-2, 1] + 1/2 N[2, 1]. Figure 2b shows the same for a mixture of two narrow Gaussians, 1/2 N[-2, .2] + 1/2 N[2, .2]. All the models work well for representing
the broad Gaussians; both forms of the KDE model have difficulty with the narrow Gaussians. The EM version of KDE puts all its weight on the nearest kernel
functions, and so is too broad; the projection version 'rings' in its attempt to represent the narrow components of the distributions. The extended Poisson model
reconstructs with greater fidelity.
5 Discussion
Informally, we have examined the consequences of the seemingly obvious step of
saying that if a rat, for instance, is uncertain about whether it is at one of two places,
then place cells representing both places could be activated. The complications
Figure 2: a) (upper) All three methods provide a good fit to the bimodal Gaussian
distribution when its variance is sufficiently large (σ = 1.0). b) (lower) The KDE
model has difficulty when σ = 0.2.
come because the structure of the interpretation changes - for instance, one can
no longer think of maximum likelihood methods to extract a single value from the
code directly.
One main fruit of our resulting framework is a method for encoding and decoding
probability distributions that is the natural extension of the (provably inadequate)
standard Poisson model for encoding and decoding single values. Cells have Poisson statistics about a mean determined by the integral of the whole probability
distribution, weighted by the tuning function of the cell. We suggested a particular
decoding model, based on an approximation to maximum likelihood decoding to a
discretized version of the whole probability distribution, and showed that it reconstructs broad, narrow and multimodal distributions more accurately than either the
standard Poisson model or the kernel density model. Stochasticity is built into our
method, since the units are supposed to have Poisson statistics, and it is therefore
also quite robust to noise. The decoding method is not biologically plausible, but
provides a quantitative lower bound to the faithfulness with which a set of activities
can code a distribution.
Stages of processing subsequent to a population code might either extract a single
value from it to control behavior, or integrate it with information represented in
other population codes to form a combined population code. Both operations must
be performed through standard neural operations such as taking non-linear weighted
sums and possibly products of the activities. We are interested in how much information is preserved by such operations, as measured against the non-biological
standard of our decoding method. Modeling extraction requires modeling the loss
function - there is some empirical evidence about this from a motion experiment
in which electrical stimulation of MT cells was pitted against input from a moving
stimulus.10 However, much work remains to be done.
Integrating two or more population codes to generate the output in the form of
another population code was stressed by Hinton,6 who noted that it directly relates
to the notion of generalized Hough transforms. We are presently studying how a
system can learn to perform this combination, using the EM-based decoder to generate targets. One special concern for combination is how to understand noise. For
instance, the visual system can be behaviorally extraordinarily sensitive - detecting
just a handful of photons. However, the outputs of real cells at various stages in
the system are apparently quite noisy, with Poisson statistics. If noise is added at
every stage of processing and combination, then the final population code will not
be very faithful to the input. There is much current research on the issue of the
creation and elimination of noise in cortical synapses and neurons.
A last issue that we have not treated here is certainty or magnitude. Hinton's6 idea
of using the sum total activity of a population to code the certainty in the existence
of the quantity they represent is attractive, provided that there is some independent
way of knowing what the scale is for this total. We have used this scaling idea in
both the KDE and the extended Poisson models. In fact, we can go one stage
further, and interpret greater activity still as representing information about the
existence of multiple objects or multiple motions. However, this treatment seems
less appropriate for the place cell system - the rat is presumably always certain
that it is somewhere. There it is plausible that the absolute level of activity could
be coding something different, such as the familiarity of a location.
An entire collection of cells is a terrible thing to waste on representing just a single
value of some quantity. Representing a whole probability distribution, at least with
some fidelity, is not more difficult, provided that the interpretation of the encoding
and decoding are clear. We suggest some steps in this direction.
References
[1] Anderson, CH (1994). International Journal of Modern Physics C, 5, 135-137.
[2] Anderson, CH & Van Essen, DC (1994). In Computational Intelligence Imitating Life, 213-222.
New York: IEEE Press.
[3] Baldi, P & Heiligenberg, W (1988). Biological Cybernetics, 59, 313-318.
[4] Dempster, AP, Laird, NM & Rubin, DB (1977). Proceedings of the Royal Statistical Society, B
39, 1-38.
[5] Georgopoulos, AP, Schwartz, AB & Kettner, RE (1986). Science, 243, 1416-1419.
[6] Hinton, GE (1992). Scientific American, 267(3), 105-109.
[7] Newsome, WT, Britten, KH & Movshon, JA (1989). Nature, 341, 52-54.
[8] O'Keefe, J & Dostrovsky, J (1971). Brain Research, 34, 171-175.
[9] Salinas, E & Abbott, LF (1994). Journal of Computational Neuroscience, 1, 89-107.
[10] Salzman, CD & Newsome, WT (1994). Science, 264, 231-237.
[11] Seung, HS & Sompolinsky, H (1993). Proceedings of the National Academy of Sciences, USA,
90, 10749-10753.
[12] Snippe, HP (1996). Neural Computation, 8, 29-37.
[13] Zemel, RS & Hinton, GE (1995). Neural Computation, 7, 549-564.
PART
V
IMPLEMENTATION
Combined Weak Classifiers
Chuanyi Ji and Sheng Ma
Department of Electrical, Computer and System Engineering
Rensselaer Polytechnic Institute, Troy, NY 12180
chuanyi@ecse.rpi.edu, shengm@ecse.rpi.edu
Abstract
To obtain classification systems with both good generalization performance and efficiency in space and time, we propose a learning
method based on combinations of weak classifiers, where weak classifiers are linear classifiers (perceptrons) which can do a little better
than making random guesses. A randomized algorithm is proposed
to find the weak classifiers. They are then combined through a majority vote. As demonstrated through systematic experiments, the
method developed is able to obtain combinations of weak classifiers
with good generalization performance and a fast training time on
a variety of test problems and real applications.
1
Introduction
The problem we will investigate in this work is how to develop a classifier with both
good generalization performance and efficiency in space and time in a supervised
learning environment. The generalization performance is measured by the probability of classification error of a classifier. A classifier is said to be efficient if its
size and the (average) time needed to develop such a classifier scale nicely (polynomially) with the dimension of the feature vectors, and other parameters in the
training algorithm.
The method we propose to tackle this problem is based on combinations of weak
classifiers [8][6], where the weak classifiers are the classifiers which can do a little
better than random guessing. It has been shown by Schapire and Freund [8][4]
that the computational power of weak classifiers is equivalent to that of a well-trained classifier, and an algorithm has been given to boost the performance of
weak classifiers. What has not been investigated is the type of weak classifiers
that can be used and how to find them. In practice, the ideas have been applied
with success in hand-written character recognition to boost the performance of an
already well-trained classifier. But the original idea on combining a large number of
weak classifiers has not been used in solving real problems. An independent work
by Kleinberg[6] suggests that in addition to a good generalization performance,
combinations of weak classifiers also provide advantages in computation time, since
weak classifiers are computationally easier to obtain than well-trained classifiers.
However, since the proposed method is based on an assumption which is difficult
to realize, discrepancies have been found between the theory and the experimental
results[7]. The recent work by Breiman[1][2] also suggests that combinations of
classifiers can be computationally efficient, especially when used to learn large data
sets.
The focus of this work is to investigate the following problems: (1) how to find
weak classifiers, (2) what are the performance and efficiency of combinations of
weak classifiers, and (3) what are the advantages of using combined weak classifiers
compared with other pattern classification methods?
We will develop a randomized algorithm to obtain weak classifiers. We will then
provide simulation results on both synthetic and real problems to show the capabilities and
efficiency of combined weak classifiers. The extended version of this work with some
of the theoretical analysis can be found in [5].
2
Weak Classifiers
In the present work, we choose linear classifiers (perceptrons) as weak classifiers.
Let 1/2 - 1/v be the required generalization error of a classifier, where v >= 2 is called
the weakness factor which is used to characterize the strength of a classifier. The
larger the v, the weaker the weak classifier. A set of weak classifiers are combined
through a simple majority vote.
3
Algorithm
Our algorithm for combinations of weak classifiers consists of two steps: (1) generating individual weak classifiers through a simple randomized algorithm; and (2)
combining a collection of weak classifiers through a simple majority vote.
Three parameters need to be chosen a priori for the algorithm: a weakness factor v,
a number θ (1/2 <= θ < 1) which will be used as a threshold to partition the training
set, and the number of weak classifiers 2L + 1 to be generated, where L is a positive
integer.
3.1
Partitioning the Training Set
The method we use to partition a training set is motivated by the one given in [4].
Suppose a combined classifier consists of K (K >= 1) weak classifiers already. In
order to generate a (new) weak classifier, the entire training set of N training
samples is partitioned into two subsets: a set of Ml samples which contain all
the misclassified samples and a small fraction of samples correctly-classified by the
existing combined classifier; and the remaining N - Ml training samples. The set of
Ml samples are called "cares", since they will be used to select a new weak classifier,
while the rest of the samples are the "don't-cares".
The threshold θ is used to determine which samples should be assigned as cares.
For instance, for the n-th training sample (1 <= n <= N), the performance index
a(n) is recorded, where a(n) is the fraction of the weak classifiers in the existing
combined classifier which classify the n-th sample correctly. If a(n) < θ, this sample
is assigned to the cares. Otherwise, it is a don't-care. This is done for all N samples.
Through partitioning a training set in this way, a newly-generated weak classifier
is forced to learn the samples which have not been learned by the existing weak
classifiers. In the meantime, a properly-chosen θ can ensure that enough samples
are used to obtain each weak classifier.
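As a concrete illustration of this partitioning step, the sketch below (in Python) assigns samples to the cares according to the rule a(n) < θ; the array name votes_correct and the helper itself are assumptions of the illustration, not part of the original algorithm.

    import numpy as np

    def partition_cares(votes_correct, n_classifiers, theta):
        # votes_correct[n]: number of existing weak classifiers that classify
        # sample n correctly; a(n) is the fraction of correct votes.
        a = votes_correct / max(n_classifiers, 1)
        cares = np.where(a < theta)[0]        # samples the next classifier must learn
        dont_cares = np.where(a >= theta)[0]
        return cares, dont_cares

With θ slightly above 1/2, every sample misclassified by the current majority vote becomes a care, together with the small fraction of samples that are correctly classified by only a slim margin.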
3.2
Random Sampling
To achieve a fast training time, we obtain a weak classifier by randomly sampling
the classifier-space of all possible linear classifiers.
Assume that a feature vector x ∈ R^d is distributed over a compact region D. The
direction of a hyperplane characterized by a linear classifier with a weight vector
is first generated by randomly selecting the elements of the weight vector based
on a uniform distribution over (-1, 1)^d. Then the threshold of the hyperplane
is determined by randomly picking an x ∈ D, and letting the hyperplane pass
through x. This will generate random hyperplanes which pass through the region
D, and whose directions are randomly distributed in all directions. Such a randomly
selected classifier will then be tested on all the cares. If it misclassifies a fraction of
cares no more than 1/2 - 1/v - ε (ε > 0 and small), the classifier is kept and will be
used in the combination. Otherwise, it is discarded. This process is repeated until
a weak classifier is finally obtained.
A newly-generated weak classifier is then combined with the existing ones through
a simple majority vote. The entire training set will then be tested on the combined
classifier to result in a new set of cares, and don't-cares. The whole process will
be repeated until the total number 2L + 1 of weak classifiers are generated. The
algorithm can be easily extended to multiple classes. Details can be found in [5] .
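A minimal sketch of the random-sampling and voting steps is given below, assuming a two-class problem with labels in {-1, +1}; for simplicity the hyperplane is forced through a randomly chosen care point rather than an arbitrary point of D, and the function names and the acceptance bound passed in are illustrative assumptions rather than the authors' exact implementation.

    import numpy as np

    def sample_weak_classifier(X_cares, y_cares, error_bound, rng, max_tries=100000):
        # Randomly sample linear classifiers until one misclassifies at most
        # `error_bound` of the cares; returns the weight vector and threshold.
        m, d = X_cares.shape
        for _ in range(max_tries):
            w = rng.uniform(-1.0, 1.0, size=d)   # random direction
            x0 = X_cares[rng.integers(m)]        # hyperplane passes through this point
            b = -w @ x0
            pred = np.sign(X_cares @ w + b)
            pred[pred == 0] = 1
            if np.mean(pred != y_cares) <= error_bound:
                return w, b
        raise RuntimeError("no acceptable weak classifier found")

    def majority_vote(classifiers, X):
        # Combine the 2L+1 weak classifiers (w, b) by a simple majority vote.
        votes = sum(np.sign(X @ w + b) for w, b in classifiers)
        return np.where(votes >= 0, 1, -1)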
4
Experimental Results
Extensive simulations have been carried out on both synthetic and real problems
using our algorithm. One synthetic problem is chosen to test the efficiency of
our method. Real applications from standard data bases are selected to compare
the generalization performance of combinations of weak classifiers (CW) with that
of other methods such as K-Nearest-Neighbor classifiers (K-NN)l, artificial neural
networks (ANN), combinations of neural networks (CNN), and stochastic discriminations (SD).
4.1
A Synthetic Problem: Two Overlapping Gaussians
To test the scaling properties of combinations of weak classifiers, a non-linearly
separable problem is chosen from a standard database called ELENA 2. The problem is a two-class classification problem, where the distributions of samples in both
classes are multi-variate Gaussians with the same mean but different variances for
each independent variable. There is a considerable amount of overlap between the
samples in two classes, and the problem is non-linearly separable. The average generalization error and the standard deviations are given in Figure 1 for our algorithm
based on 20 runs, and for other classifiers. The Bayes error is also given to show
the theoretical limit. The results show that the performance of kNN degrades very
quickly. The performance of ANN is better than that of kNN but still deviates more
and more from the Bayes error as d gets large. The combination of weak classifiers
1 The best result of different k is reported.
2 /pub/neural-nets/ELENA/databases/Benchmarks.ps.Z on ftp.dice.ucl.ac.be
Figure 1: Performance versus the dimension of the feature vectors (curves: k-NN, ANN, CW).
Algorithms                    Card1            Diabetes1        Gene1
                              (%) Error / er   (%) Error / er   (%) Error / er
Combined Weak Classifiers     11.3 / 0.85      22.70 / 0.70     11.80 / 0.52
k Nearest Neighbor            15.67            25.8             22.87
Neural Networks               13.64 / 0.85     23.52 / 0.72     13.47 / 0.44
Combined Neural Networks      13.02 / 0.33     22.79 / 0.57     12.08 / 0.23

Table 1: Performance on Card1, Diabetes1 and Gene1. er: standard deviation
continues to follow the trend of the Bayes error.
4.2
Proben1 Data Sets
Three data sets, Card1, Diabetes1 and Gene1 were selected to test our algorithm
from Proben1 databases which contain data sets from real applications3 .
Card1 data set is for a problem on determining whether a credit-card application
from a customer can be approved based on information given in 51-dimensional
feature vectors. 345 out of 690 examples are used for training and the rest for
testing. Diabetes1 data set is for determining whether diabetes is present based
on 8-dimensional input patterns. 384 examples are used for training and the same
number of samples for testing. Gene1 data set is for deciding whether a DNA
sequence is from a donor, an acceptor or neither from 120 dimensional binary feature
vectors. 1588 samples out of total of 3175 were used for training, and the rest for
testing.
The average generalization error as well as the standard deviations are reported in
Table 1. The results from combinations of weak classifiers are based on 25 runs.
The results of neural networks and combinations of well-trained neural networks are
from the database . As demonstrated by the results, combinations of weak classifiers
have been able to achieve the generalization performance comparable to or better
than that of combinations of well-trained neural networks.
4.3
Hand-written Digit Recognition
Hand-written digit recognition is chosen to test our algorithm, since one of
the previously developed methods on combinations of weak classifiers (stochastic
discrimination[6]) was applied to this problem. For the purpose of comparison, the
3 Available by anonymous ftp from ftp.ira.uka.de, as /pub/papers/techreports/1994/1994-21.ps.Z.
Algorithms                    (%) Error / er
Combined Weak Classifiers     4.23 / 0.1
k Nearest Neighbor            4.84
Neural Networks               5.33
Stochastic Discriminations    3.92

Table 2: Performance on handwritten digit recognition.
Parameters       Gaussians   Card1   Diabetes1   Gene1   Digits
1/2 + 1/v        0.51        0.51    0.51        0.55    0.54
θ                0.51        0.51    0.54        0.54    0.53
2L+1             2000        1000    1000        4000    20000
Average Tries    2           3       7           4       2

Table 3: Parameters used in our experiments.
same set of data as used in [6](from the NIST data base) is utilized to train and
to test our algorithm. The data set contains 10000 digits written by different people. Each digit is represented by 16 by 16 black and white pixels. The first 4997
digits are used to form a training set, and the rest are for testing. Performance
of our algorithm, k-NN, neural networks, and stochastic discriminations are given
in Table 2. The results for our methods are based on 5 runs, while the results
for the other methods are from [6]. The results show that the performance of our
algorithm is slightly worse (by 0.3%) than that of stochastic discriminations, which
uses a different method for multi-class classification [6] .
4.4
Effects of The Weakness Factor
Experiments are done to test the effects of v on the problem of two 8-dimensional
overlapping Gaussians. The performance and the average training time (CPU time on a Sun Sparc-10) of combined weak classifiers based on 10 runs are given for
different v's in Figures 2 and 3, respectively. The results indicate that, as v increases,
an individual weak classifier is obtained more quickly, but more weak classifiers are
needed to achieve good performance. When a proper v is chosen, a nice scaling
property can be observed in training time.
A record of the parameters used in all the experiments on real applications is
provided in Table 3. The average tries, which are the average number of times
needed to sample the classifier space to obtain an acceptable weak classifier, are
also given in the table to characterize the training time for these problems.
4.5
Training Time
To compare learning time with off-line BackPropagation (BP), a feedforward two-layer
neural network with 10 sigmoidal hidden units is trained by gradient descent to
learn the problem on the two 8-dimensional overlapping Gaussians. 2500 training
samples are used. The performance versus CPU time4 is plotted for both our
algorithm and BP in Figure 4. For our algorithm, 2000 weak classifiers are combined. For BP, 1000 epochs are used. The figure shows that our algorithm is much
faster than the BP algorithm. Moreover, when several well-trained neural networks
are combined to achieve a better performance, the cost on training time will be
4 Both algorithms are run on a Sun Sparc-10 workstation.
Figure 2: Performance versus the number of weak classifiers for different v.
Figure 3: Training time versus the number of weak classifiers for different v.
even higher. Therefore, compared to combinations of well-trained neural networks,
combining weak classifiers is computationally much cheaper.
5
Discussions
From the experimental results, we observe that the performance of the combined
weak classifiers is comparable to or even better than combinations of well-trained classifiers, and out-performs individual neural network classifiers and k-Nearest Neighbor classifiers. In the meantime, whereas the k-nearest neighbor classifiers suffer
from the curse of dimensionality, a nice scaling property in terms of the dimension of the feature vectors has been observed for combined weak classifiers. Another
Figure 4: Performance versus CPU time (training and test curves for BP and CW).
important observation obtained from the experiments is that the weakness factor
directly impacts the size of a combined classifier and the training time. Therefore,
the choice of the weakness factor is important to obtain efficient combined weak
classifiers. It has been shown in our theoretical analysis on learning an underlying
perceptron [5] that v should be at least as large as O(d ln d) to obtain a polynomial
training time, and the price paid to accomplish this is a space-complexity which is
polynomial in d as well. This cost can be observed from our experimental results
for the need of a large number of weak classifiers.
Acknowledgement
Special thanks are due to Tin Kan Ho for providing NIST data, related references and helpful discussions. Support from the National Science Foundation (ECS-9312594 and (CAREER) IRI-9502518) is gratefully acknowledged.
References
[1] L. Breiman, "Bias, Variance and Arcing Classifiers," Technical Report, TR460, Department of Statistics, University of California, Berkeley, April, 1996.
[2] L. Breiman, "Pasting Bites Together for Prediction in Large Data Sets and
On-Line," ftp.stat .berkeley.edu/users/breiman, 1996.
[3] H. Drucker, R . Schapire and P. Simard, "Improving Performance in Neural
Networks Using a Boosting Algorithm," Neural Information Processing Symposium, 42-49, 1993.
[4] Y. Freund and R. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and An Application
to Boosting," http://www.research.att.com/orgs/ssr/people/yoav or schapire.
[5] C. Ji and S. Ma, "Combinations of Weak Classifiers," IEEE Trans . Neural
Networks, Special Issue on Neural Networks and Pattern Recognition, vol. 8,
32-42, Jan., 1997.
[6] E. M. Kleinberg, "Stochastic Discrimination," Annals of Mathematics and Artificial Intelligence, vol. 1, 207-239, 1990.
[7] E.M. Kleinberg and T. Ho, "Pattern Recognition by Stochastic Modeling,"
Proceedings of the Third International Workshop on Frontiers in Handwriting
Recognition, 175-183, Buffalo, May 1993.
[8] R.E. Schapire, "The Strength of Weak Learnability," Machine Learning, vol.
5, 197-227, 1990.
Estimating Equivalent Kernels For Neural
Networks: A Data Perturbation Approach
A. Neil Burgess
Department of Decision Science
London Business School
London, NW1 4SA, UK
(N.Burgess@lbs.lon.ac.uk)
ABSTRACT
We describe the notion of "equivalent kernels" and suggest that this
provides a framework for comparing different classes of regression models,
including neural networks and both parametric and non-parametric
statistical techniques. Unfortunately, standard techniques break down when
faced with models, such as neural networks, in which there is more than one
"layer" of adjustable parameters. We propose an algorithm which overcomes
this limitation, estimating the equivalent kernels for neural network models
using a data perturbation approach. Experimental results indicate that the
networks do not use the maximum possible number of degrees of freedom,
that these can be controlled using regularisation techniques and that the
equivalent kernels learnt by the network vary both in "size" and in "shape"
in different regions of the input space.
1
INTRODUCTION
The dominant approaches within the statistical community, such as multiple linear
regression but even extending to advanced techniques such as generalised additive
models (Hastie and Tibshirani, 1990), projection pursuit regression (Friedman and
Stuetzle, 1981), and classification and regression trees (Breiman et al., 1984), tend to
err, when they do, on the high-bias side due to restrictive assumptions regarding either
the functional form of the response to individual variables and/or the limited nature of
the interaction effects which can be accommodated. Other classes of models, such as
multi-variate adaptive regression spline models of high-order (Friedman, 1991),
interaction splines (Wahba, 1990) and especially non-parametric regression techniques
(Härdle, 1990) are capable of relaxing some or all of these restrictive assumptions, but
run the converse risk of suffering high-variance, or "over fitting".
A large literature of experimental results suggests that, under the right conditions, the
flexibility of neural networks allows them to out-perform other techniques. Where the
current understanding is limited, however, is in analysing trained neural networks to
understand how the degrees of freedom have been allocated, in a way which allows
meaningful comparisons with other classes of models. We propose that the notion of
"equivalent kernels" [e.g. (Hastie and Tibshirani, 1990)] can provide a unifying
framework for neural networks and other classes of regression model, as well as
providing important information about the neural network itself. We describe an
algorithm for estimating equivalent kernels for neural networks which overcomes the
limitations of existing analytical methods.
In the following section we describe the concept of equivalent kernels. In Section 3 we
describe an algorithm which estimates how the response function learned by the neural
network would change if the training data were modified slightly, from which we derive
the equivalent kernels for the network. Section 4 provides simulation results for two
controlled experiments. Section 5 contains a brief discussion of some of the implications
of this work, and highlights a number of interesting directions for further research. A
summary of the main points of the paper is presented in Section 6.
2
EQUIVALENT KERNELS
Non-parametric regression techniques, such as kernel smoothing, local regression and
nearest neighbour regression, can all be expressed in the form:
y(z) = ∫_{x=−∞}^{∞} φ(z, x) f(x) t(x) dx        (1)
where y(z) is the response at the query point z, φ(z, x) is the weighting, or kernel, which
is "centred" at z, f(x) is the input density and t(x) is the target function.
In finite samples, this is approximated by:
y(x_i) = Σ_{j=1}^{n} φ(x_i, x_j) t_j        (2)
and the response at point x_i is a weighted average of the sampled target values across the
entire dataset. Furthermore, the response can be viewed as a least squares estimate for
y(x_i) because we can write it as a solution to the minimization problem:
Σ_{j=1}^{n} ( φ(x_i, x_j) t_j − y(x_i) )²        (3)
We can combine the kernel functions to define the smoother matrix S, given by:
S = [ φ(x1, x1)  φ(x1, x2)  ...  φ(x1, xn)
      φ(x2, x1)  φ(x2, x2)  ...  φ(x2, xn)
      ...
      φ(xn, x1)  φ(xn, x2)  ...  φ(xn, xn) ]        (4)

From which we obtain:

y = S t        (5)
where y = (y(x1), y(x2), ..., y(xn))^T, and t = (t1, t2, ..., tn)^T is the vector of target values.
From the smoother matrix S, we can derive many kinds of important information. The
model is represented in terms of the influence of each observation on the response at
each sample point, allowing us to quantify the effect of outliers for instance. It is also
possible to calculate the model bias and variance at each sample point [see (Härdle,
1990) for details]. One important measure which we will return to below is the number
of degrees of freedom which are absorbed by the model; a number of definitions can be
motivated, but in the case of least squares estimators they turn out to be equivalent [see
pp 52-55 of (Hastie and Tibshirani, 1990)], perhaps the most intuitive is:
dofs = trace(S)        (6)
thus a model which is a look-up table, i.e. y(x_j) = t_j, absorbs all 'n' degrees of freedom,
whereas the sample mean, y(x_j) = (1/n) Σ_j t_j, absorbs only one degree of freedom. The
degrees of freedom can be taken as a natural measure of model complexity, which is
formulated with respect to the data itself, rather than to the number of parameters.
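To make equations (2)-(6) concrete, the sketch below builds the smoother matrix for one particular linear smoother, a normalised Gaussian kernel (our illustrative choice; the bandwidth and data are arbitrary), and reads off the absorbed degrees of freedom as its trace.

    import numpy as np

    def gaussian_smoother_matrix(x, bandwidth):
        # Rows of S are the kernels phi(x_i, .), normalised to sum to one.
        d2 = (x[:, None] - x[None, :]) ** 2
        K = np.exp(-0.5 * d2 / bandwidth ** 2)
        return K / K.sum(axis=1, keepdims=True)

    x = np.linspace(0.0, 4.0 * np.pi, 41)
    t = np.sin(x)
    S = gaussian_smoother_matrix(x, bandwidth=0.5)
    y = S @ t            # equation (5): y = S t
    dof = np.trace(S)    # equation (6): degrees of freedom absorbed by the smoother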
The discussion above relates only to models which can be expressed in the form given by
equation (2), i.e. where the "kernel functions" can be computed. Fortunately, many types
of parametric models can be "inverted" in this manner, providing what are known as
"equivalent kernels". Consider a model of the form:

y(x) = Σ_k w_k φ_k(x)        (7)

i.e. a weighted function of some arbitrary transformations of the input variables. In the
case of fitting using a least squares approach, the optimal weights w = (w1, w2, ..., wn)^T are given by:

w = Φ⁺ t        (8)

where Φ⁺ is the pseudo-inverse of the transformed data matrix Φ. The network output
can then be expressed as:

y(x_j) = Σ_k φ(x_j, x_k) t_k        (9)

and the φ(x_j, x_k) are then the "equivalent kernels" of the original model which is now in
the same form as equation (2). Examples of equivalent kernels for different classes of
parametric and non-parametric models are given by (Hastie and Tibshirani, 1990) whilst
a treatment for Radial Basis Function (RBF) networks is presented in (Lowe, 1995).
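For a model with fixed transformations, equations (7)-(9) can be evaluated directly; the sketch below does so for a set of Gaussian radial basis functions, where the choice of basis, centres and width is ours, purely for illustration.

    import numpy as np

    def equivalent_kernels_fixed_basis(x, t, centres, width):
        # Phi is the transformed data matrix; Phi @ pinv(Phi) holds the
        # equivalent kernels phi(x_j, x_k) of the least-squares fit.
        Phi = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)
        w = np.linalg.pinv(Phi) @ t        # equation (8): optimal weights
        S = Phi @ np.linalg.pinv(Phi)      # smoother matrix of equivalent kernels
        y = S @ t                          # equation (9): response as weighted targets
        return S, w, y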
3
EQUIVALENT KERNELS FOR NEURAL NETWORKS
The analytic approach described above relies on the ability to calculate the optimal
weights using the pseudo-inverse of the data matrix. This is only possible if the
transformations, φ_k(x), are fixed functions, as is typically the case in parametric models or
single-layer neural networks. However, for a neural network with more than one layer of
adjustable weights, the basis functions are parametrised rather than fixed and are thus
themselves a function of the training data. Consequently the equivalent kernels are also
dependent on the data, and the problem of finding the equivalent kernels becomes nonlinear.
We adopt a solution to this problem which is based on the following observation. In the
case where the equivalent kernels are independent of the observed values tj, we notice
from equation (2):
∂y_i / ∂t_j = φ(x_i, x_j)        (10)
i.e. the basis function φ(x_i, x_j) is equal to the sensitivity of the response y(x_i) to a small
change in the observed value t_j. This suggests that we approximate the equivalent kernels
by turning the above expression around:

φ(x_i, x_j) ≈ (ψ(x_i) − y(x_i)) / ε        (11)
where ε is a small perturbation of the training data and ψ(x_i) is the response of the re-optimised network:

ψ(x_i) = φ*(x_i, x_j)·(t_j + ε) + Σ_{k≠j} φ*(x_i, x_k)·t_k        (12)
The notation φ* indicates that the new kernel functions derive from the network fitted to
perturbed data. Note that this takes into account all of the adjustable parameters in the
network, whereas treating the basis functions as fixed would give simply the number of
additive terms in the final layer of the network.
Calculating the equivalent kernels in this fashion is a computationally intensive
procedure, with the network needing to be retrained after perturbing each point in turn.
Note that regularisation techniques such as weight decay should be incorporated within
this procedure as with initial training and are thus correctly accounted for by the
algorithm. The retraining step is facilitated by using the optimised weights from the
unperturbed data, causing the network to re-train from weights which are initially almost
optimal (especially if the perturbation is small).
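The perturbation procedure itself reduces to a loop over the training targets; in the sketch below the callable fit stands in for whatever regularised training (or warm-started re-training) routine is being used, and is an assumption of the illustration rather than a prescribed implementation.

    import numpy as np

    def perturbation_kernels(fit, x, t, eps=1e-3):
        # fit(x, t) must return the model's predictions at the training inputs
        # after (re-)optimising on the targets t, ideally warm-started from the
        # weights obtained on the unperturbed data.
        n = len(t)
        y = fit(x, t)                      # response of the unperturbed model
        S = np.empty((n, n))
        for j in range(n):
            t_pert = np.array(t, dtype=float)
            t_pert[j] += eps               # perturb a single observed value
            psi = fit(x, t_pert)           # re-optimised response, equation (12)
            S[:, j] = (psi - y) / eps      # estimated kernels, equation (11)
        return S                           # np.trace(S) then gives the degrees of freedom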
4
SIMULATION RESULTS
In order to investigate the practical viability of estimating equivalent kernels using the
perturbation approach, we performed a controlled experiment on simulated data. The
target function used was the first two periods of a sine-wave, sampled at 41 points evenly
spaced between 0 and 4π. This function was estimated using a neural network with a
single layer of four sigmoid units, a shortcut connection from input to output, and a
linear output unit, trained using standard backpropagation.
From the trained network we then estimated the equivalent kernels using the
perturbation method described in the previous section. The resulting kernels for points 0,
π, and 2π are shown in figure 2, below.
Figure 2: Equivalent Kernels for sine-wave problem
As discussed in the previous section, we can combine the estimated kernels to construct a
linear smoother. The correlation coefficient between the function reconstructed from the
approximated smoother matrix and the original neural network is found to be 0.995 .
From equation (6) we find that the network contains approx. 8.2 degrees of freedom; this
compares to the 10 potential degrees of freedom, and also to the 6 degrees of freedom
which we would expect for an equivalent model with fixed transfer functions. Clearly, to
some degree, perturbations in the training data are accommodated by adjustments to the
sigmoid functions.
Using this approach we can also investigate the effects of weight decay on (a) the ability
of the network to reproduce the target function, (b) the number of degrees of freedom
absorbed by the network, and (c) the kernel functions themselves. We use a standard
quadratic weight decay, leading to a cost function of the form:
C = (y − f(x))² + γ·Σ w²        (13)

The effect of gradually increasing the weight decay factor, γ, on both network
performance and capacity is shown in figure 3(b), below:
Figure 3: (a) Comparison of network and reconstructed functions with target, and (b) effect of weight decay.
Looking at figure 3(b) we note that the two curves follow each other very closely. As the
weight decay factor is increased, the effective capacity of the network is reduced and the
performance drops off accordingly.
In one dimension, the main flexibility for the equivalent kernels is one of scale: narrow,
concentrated kernels which rely heavily on nearby observations versus broad, diffuse
kernels in which the response is conditioned on a larger number of observations. In
higher dimensions, however, the real power of neural networks as function estimators
lies in the fact that the sensitivity of the estimated network function is itself a flexible
function of the input vector. Viewed from the perspective of equivalent kernels, this
property might be expected to manifest itself in a change in the shape of the kernels in
different regions of the input space. In order to investigate this effect we applied the
perturbation approach in estimating equivalent kernels for a network trained to
reproduce a two-dimensional function; the function chosen was a "ring" defined by:
z = 1 / ( 1 + 30·(x² + y² − 0.5)² )        (14)
For ease of visualisation the input points were chosen on a regular 15 by 15 grid running
between plus and minus one. This function was approximated using a 2(+ 1)-8-1 network
with sigmoidal hidden units and a linear output unit. Selected kernel functions,
estimated from this network, are shown in figure 4, below:
Figure 4: Equivalent Kernels: approximated using the perturbation method
This result clearly shows the changing shape of the kernel functions in different parts of
the input space. The function reconstructed from the estimated smoother matrix has a
correlation coefficient of 0.987 with the original network function.
5. Discussion
The ability to transform neural network regression models into an equivalent kernel
representation raises the possibility of harnessing the whole battery of statistical
methods which have been developed for non-parametric techniques: model selection
procedures, prediction interval estimation, calculation of degrees of freedom, and
statistical significance testing amongst others. The algorithm described in this paper
raises the possibility of applying these techniques to more-powerful networks with two or
more layers of adaptable weights, be they based on sigmoids, radial functions, splines or
whatever, albeit at the price of significant computational effort.
Another opportunity is in the area of model combination where the added value from
combining models in an ensemble is related to the degree of correlation between the
different models (Krogh and Vedelsby, 1995). Typically the pointwise correlation
between two models will be related to the similarity between their equivalent kernels and
so the equivalent kernel approach opens new possibilities for conditionally modifying the
ensemble weights without a need for an additional level of learning.
The influence-based method for estimating the number of degrees of freedom absorbed
by a neural network model, focuses attention on uncertainty in the data itself, rather than
taking the indirect route based on uncertainty in the model parameters; in future work
we propose to investigate the similarities and differences between our approach and
those based on the "effective number of parameters" (Moody, 1992) and Bayesian
methods (MacKay, 1992).
6. Summary
We suggest that equivalent kernels provide an important tool for understanding what
neural networks do and how they go about doing it; in particular a large battery of
existing statistical tools use information derived from the smoother matrix.
The perturbation method which we have presented overcomes the limitations of standard
approaches, which are only appropriate for models with a single layer of adjustable
weights, albeit at considerable computational expense. It has the added bonus of
automatically taking into account the effect of regularisation techniques such as weight
decay.
The experimental results illustrate the application of the technique to two simple
problems. As expected the number of degrees of freedom in the models is found to be
related to the amount of weight decay used during training. The equivalent kernels are
found to vary significantly in different regions of input space, and the functions
reconstructed from the estimated smoother matrices closely match the original networks.
7. References
Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J., 1984, Classification and Regression
Trees, Wadsworth and Brooks/Cole, Monterey.
Friedman, J.H. and Stuetzle, W., 1981. Projection pursuit regression. Journal of the American
Statistical Association. Vol. 76, pp. 817-823.
Friedman, J.H., 1991 . Multivariate Adaptive Regression Splines (with discussion). Annals of
Statistics. Vol 19, num. 1, pp. 1-141.
Härdle, W., 1990. Applied nonparametric regression. Cambridge University Press.
Hastie, T.J. and Tibshirani, R.J., 1990. Generalised Additive Models. Chapman and Hall, London.
Krogh, A. and Vedelsby, J., Neural network ensembles, cross-validation and active learning, NIPS 7, pp. 231-238.
Lowe, D., 1995, On the use of nonlocal and non positive definite basis functions in radial basis
function networks, Proceedings of the Fourth IEE Conference on Artificial Neural
Networks, pp. 206-211 .
MacKay, D. J. C., 1992, A practical Bayesian framework for backprop networks, Neural
Computation, 4,448-472.
Moody, J. E., 1992, The effective number of parameters: an analysis of generalisation and
regularization in nonlinear learning systems, NIPS 4, 847-54, Morgan Kaufmann, San Mateo
Wahba, G., 1990, Spline Models for Observational Data. Society for Industrial and Applied
Mathematics, Philadelphia.
Compositionality, MDL Priors, and
Object Recognition
Elie Bienenstock (elie@dam.brown.edu)
Stuart Geman (geman@dam.brown.edu)
Daniel Potter (dfp@dam.brown.edu)
Division of Applied Mathematics,
Brown University, Providence, RI 02912 USA
Abstract
Images are ambiguous at each of many levels of a contextual hierarchy. Nevertheless, the high-level interpretation of most scenes
is unambiguous, as evidenced by the superior performance of humans. This observation argues for global vision models, such as deformable templates. Unfortunately, such models are computationally intractable for unconstrained problems. We propose a compositional model in which primitives are recursively composed, subject
to syntactic restrictions, to form tree-structured objects and object
groupings. Ambiguity is propagated up the hierarchy in the form
of multiple interpretations, which are later resolved by a Bayesian,
equivalently minimum-description-length, cost functional.
1
Bayesian decision theory and compositionality
In his Essay on Probability, Laplace (1812) devotes a short chapter-his "Sixth
Principle" -to what we call today the Bayesian decision rule. Laplace observes
that we interpret a "regular combination," e.g., an arrangement of objects that
displays some particular symmetry, as having resulted from a "regular cause" rather
than arisen by chance. It is not, he argues, that a symmetric configuration is less
likely to happen by chance than another arrangement. Rather, it is that among all
possible combinations, which are equally favored by chance, there are very few of
the regular type: "On a table we see letters arranged in this order, Constantinople,
and we judge that this arrangement is not the result of chance, not because it is
less possible than the others, for if this word were not employed in any language
we should not suspect it came from any particular cause, but this word being in use
amongst us, it is incomparably more probable that some person has thus arranged
the aforesaid letters than that this arrangement is due to chance." In this example,
regularity is not a mathematical symmetry. Rather, it is a convention shared among
language users, whereby Constantinople is a word, whereas Jpctneolnosant, a string
containing the same letters but arranged in a random order, is not.
Central in Laplace's argument is the observation that the number of words in the
language is smaller, indeed "incomparably" smaller, than the number of possible
arrangements of letters. Indeed, if the collection of 14-letter words in a language
made up, say, half of all 14-letter strings- a rich language indeed-we would, upon
seeing the string Constantinople on the table, be far less inclined to deem it a word,
and far more inclined to accept it as a possible coincidence. The sparseness of allowed combinations can be observed at all linguistic articulations (phonetic-syllabic,
syllabic-lexical, lexical-syntactic, syntactic-pragmatic, to use broadly defined levels),
and may be viewed as a form of redundancy-by analogy to error-correcting codes.
This redundancy was likely devised by evolution to ensure efficient communication
in spite of the ambiguity of elementary speech signals. The hierarchical compositional structure of natural visual scenes can also be thought of as redundant: the
rules that govern the composition of edge elements into object boundaries, of intensities into surfaces etc., all the way to the assembly of 2-D projections of named
objects, amount to a collection of drastic combinatorial restrictions. Arguably, this
is why in all but a few-generally hand-crafted-cases, natural images have a unique
high-level interpretation in spite of pervasive low-level ambiguity-this being amply
demonstrated by the performances of our brains.
In sum, compositionality appears to be a fundamental aspect of cognition (see also
von der Malsburg 1981, 1987; Fodor and Pylyshyn 1988; Bienenstock, 1991, 1994,
1996; Bienenstock and Geman 1995). We propose here to account for mental computation in general and scene interpretation in particular in terms of elementary
composition operations, and describe a mathematical framework that we have developed to this effect. The present description is a cursory one, and some notions
are illustrated on two simple examples rather than formally defined-for a detailed
account, see Geman et al. (1996), Potter (1997). The binary-image example refers
to an N x N array of binary-valued pixels, while the Laplace-Table example refers
to a one-dimensional array of length N, where each position can be filled with one
of the 26 letters of the alphabet or remain blank.
2
Labels and composition rules
The objects operated upon are denoted w_i, i = 1, 2, ..., k. Each composite object w
carries a label, l = L(w), and the list of its constituents, (w1, w2, ...). These
uniquely determine w, so we write w = l(w1, w2, ...). A scene S is a collection of
primitive objects. In the binary-image case, a scene S consists of a collection of
black pixels in the N x N array. All these primitives carry the same label, L(w) = p
(for "Point"), and a parameter π(w) which is the position in the image. In Laplace's
Table, a scene S consists of an arrangement of characters on the table. There are 26
primitive labels, "A", "B", ..., "Z", and the parameter of a primitive w is its position
1 ≤ π(w) ≤ N (all primitives in such a scene must have different positions).
An example of a composite w in the binary-image case is an arrangement composed
of a black pixel at any position except on the rightmost column and another black
pixel to the immediate right of the first one. The label is "Horizontal Linelet,"
denoted L(w) = hl, and there are N(N - 1) possible horizontal linelets. Another
non-primitive label, "Vertical Linelet," or vl, is defined analogously. An example
of a composite w for Laplace's Table is an arrangement of 14 neighboring primitives carrying the labels "C", "O", "N", "S", ..., "E" in that order, wherever that
arrangement will fit. We then have L(w) = Constantinople, and there are N - 13
possible Constantinople objects.
The composition rule for label type l consists of a binding function, B_l, and a set
of allowed binding-function values, or binding support, S_l: denoting by O the set
of all objects in the model, we have, for any w1, ..., wk ∈ O, B_l(w1, ..., wk) ∈
S_l if and only if l(w1, ..., wk) ∈ O. In the binary-image example, B_hl(w1, w2) = B_vl(w1, w2) =
(L(w1), L(w2), π(w2) − π(w1)), S_hl = {(p, p, (1,0))} and S_vl = {(p, p, (0,1))} define
the hl- and vl-composition rules, p + p → hl and p + p → vl. In Laplace's Table, C +
O + ... + E → Constantinople is an example of a 14-ary composition rule, where we
must check the label and position of each constituent. One way to define the binding
function and support for this rule is: B(w1, ..., w14) = (L(w1), ..., L(w14), π(w2) − π(w1), π(w3) − π(w1), ..., π(w14) − π(w1)) and S = (C, ..., E, 1, 2, ..., 13).
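To make the binding-function machinery concrete, here is a small sketch of the two linelet rules of the binary-image example; the tuple representation of objects and the coordinate convention (x increasing to the right, y downwards) are our own assumptions for the illustration.

    # Primitives are (label, position) pairs, e.g. ("p", (x, y)).
    S_hl = {("p", "p", (1, 0))}   # allowed binding value: second pixel to the right
    S_vl = {("p", "p", (0, 1))}   # allowed binding value: second pixel below

    def binding(w1, w2):
        # B_hl = B_vl: constituent labels plus the offset pi(w2) - pi(w1).
        # (Only primitives are composed by these two rules.)
        (l1, (x1, y1)), (l2, (x2, y2)) = w1, w2
        return (l1, l2, (x2 - x1, y2 - y1))

    def compose(w1, w2):
        # Return the composite object l(w1, w2) if the binding value is allowed.
        b = binding(w1, w2)
        if b in S_hl:
            return ("hl", (w1, w2))
        if b in S_vl:
            return ("vl", (w1, w2))
        return None

    # Two horizontally adjacent black pixels bind into a horizontal linelet.
    assert compose(("p", (3, 5)), ("p", (4, 5)))[0] == "hl"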
We now introduce recursive labels and composition rules: the label of the composite
object is identical to the label of one or more of its constituents, and the rule may
be applied an arbitrary number of times, to yield objects of arbitrary complexity.
In the binary-image case, we use a recursive label c, for Curve, and an associated
binding function which creates objects of the form hl + p → c, vl + p → c, c + p → c,
p + hl → c, p + vl → c, p + c → c, and c + c → c. The reader may easily
fill in the details, i.e., define a binding function and binding support which result
in "c" -objects being precisely curves in the image, where a curve is of length at
least 3 and may be self-intersecting. In the previous examples, primitives were
composed into compositions; here compositions are further composed into more
complex compositions. In general, an object W is a labeled tree, where each vertex
carries the name of an object, and each leaf is associated with a primitive (the
association is not necessarily one-to-one, as in the case of a self-intersecting curve).
Let M be a model, i.e., a collection of labels with their binding functions and
binding supports, and Ω the set of all objects in M. We say that object w ∈ Ω
covers S if S is precisely the set of primitives that make up w's leaves. An
interpretation I of S is any finite collection of objects in 0 such that the union
of the sets of primitives they cover is S. We use the convention that, for all M
and S, I_0 denotes the trivial interpretation, defined as the collection of (unbound)
primitives in S. In most cases of interest, a model M will allow many interpretations
for a scene S . For instance, given a long curve in the binary-image model, there
will be many ways to recursively construct a "c"-labeled tree that covers exactly
that curve.
3
The MDL formulation
In Laplace's Table, a scene consisting of the string Constantinople admits, in
addition to I_0, the interpretation I_1 = {w1}, where w1 is a "Constantinople" object. We wish to define a probability distribution D on interpretations such that
D(I_1) ≫ D(I_0), in order to realize Laplace's "incomparably more probable". Our
definition of D will be motivated by the following use of the Minimum Description
Length (MDL) principle (Rissanen 1989). Consider a scene S and pretend we want
to transmit S as quickly as possible through a noiseless channel, hence we seek to
encode it as efficiently as possible, i.e., with the shortest possible binary code c. We
can always use the trivial interpretation I_0: the codeword c(I_0) is a mere list of n
locations in S. We need not specify labels, since there is only one primitive label in
this example. The length, or cost, of this code for S is |c(I_0)| = n log2(N²).
Now however we want to take advantage of regularities, in the sense of Laplace,
that we expect to be present in S. We are specifically interested in compositional
regularities, where some arrangements that occur more frequently than by chance
can be interpreted advantageously using an appropriate compositional model M.
Interpretation I is advantageous if |c(I)| < |c(I_0)|. An example in the binary-image
case is a linelet scene S. The trivial encoding of this scene costs us |c(I_0)| = 2[log2 3 + log2(N²)] bits,
whereas the cost of the compositional interpretation I_1 = {w1} is
|c(I_1)| = log2 3 + log2(N(N - 1)), where w1 is an hl or vl object, as the case may be.
The first log2 3 bits encode the label L(w1) ∈ {p, hl, vl}, and the rest encodes the
position in the image. The compositional {p, hl, vl} model is therefore advantageous
for a linelet scene, since it affords us a gain in encoding cost of about 2 log2 N bits.
In general, the gain realized by encoding {w} = {l(w1, w2)} instead of {w1, w2} may
be viewed as a binding energy, measuring the affinity that w1 and w2 exhibit for
each other as they assemble into w. This binding energy is ε_l = |c(w1)| + |c(w2)| - |c(l(w1, w2))|, and an efficient M is one that contains judiciously chosen cost-saving
composition rules. In effect, if, say, linelets were very rare, we would be better
off with the trivial model. The inclusion of non-primitive labels would force us
to add at least one bit to the code of every object-to specify its label-and this
would increase the average encoding cost, since the infrequent use of non-primitive
labels would not balance the extra small cost incurred on primitives. In practical
applications, the construction of a sound M is no trivial issue. Note however
the simple rationale for including a rule such as p + p → hl. Giving ourselves the
label hi renders redundant the independent encoding of the positions of horizontally
adjacent pixels. In general, a good model should allow one to hierarchically compose
with each other frequently occurring arrangements of objects.
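As a rough numerical illustration of these encoding costs (a sketch under our own assumptions about image size and label coding, not code from the paper), one can compare the trivial and linelet encodings of a two-pixel scene:

    from math import log2

    N = 256                       # assumed image side length
    n_labels = 3                  # the {p, hl, vl} model

    # Trivial interpretation: two labelled primitives, each with a position.
    cost_trivial = 2 * (log2(n_labels) + log2(N**2))

    # Compositional interpretation: a single hl (or vl) object.
    cost_linelet = log2(n_labels) + log2(N * (N - 1))

    gain = cost_trivial - cost_linelet
    print(gain, 2 * log2(N))      # the gain is roughly 2*log2(N) bits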
This use of MDL leads in a straightforward way to an equivalent Bayesian formulation. Setting P'(w) = 2^(-|c(w)|) / Σ_{w'∈Ω} 2^(-|c(w')|) yields a probability distribution P'
on Ω for which c is approximately a Shannon code (Cover and Thomas 1991). With
this definition, the decision to include the label hl (or the label Constantinople) would be viewed, in principle, as a statement about the prior probability of finding
horizontal linelets (or Constantinople strings) in the scene to be interpreted.
4
The observable-measure formulation
The MDL formulation however has a number of shortcomings; in particular, computing the binding energy for composite objects can be problematic. We outline
now an alternative approach (Geman et al. 1996, Potter 1997), where a probability distribution P(w) on Ω is defined through a family of observable measures Q_l.
These measures assign probabilities to each possible binding-function value, s ∈ S_l,
and also to the primitives. We require Σ_{l∈M} Σ_{s∈S_l} Q_l(s) = 1, where the notion of
binding function has been extended to primitives via B_prim(w) = π(w) for primitive
w. The probabilities induced on Ω by Q_l are given by P(w) = Q_prim(B_prim(w))
for a primitive w, and P(w) = Q_l(B_l(w1, w2)) P²(w1, w2 | B_l(w1, w2)) for a composite
object w = l(w1, w2).¹ Here P² = P × P is the product probability, i.e., the free, or
not-bound, distribution for the pair (w1, w2) ∈ Ω². For instance, with C + ... + E →
Constantinople, P^14(w1, w2, ..., w14 | B_Cons...(w1, ..., w14) = (C, ..., E, 1, ..., 13)) is the
conditional probability of observing a particular string Constantinople, under the
free distribution, given that (w1, ..., w14) constitutes such a string. With the reasonable assumption that, under Q, primitives are uniformly distributed over the
table, this conditional probability is simply the inverse of the number of possible
Constantinople strings, i.e., 1/(N - 13).
The binding energy, defined, by analogy to the MDL approach, as ε_l = log2(P(w)/(P(w1)P(w2))), now becomes ε_l = log2(Q_l(B_l(w1, w2))) - log2(P × P(B_l(w1, w2))).
Finally, if ℐ is the collection of all finite interpretations I ⊂ Ω, we
define the probability of I ∈ ℐ as D(I) = Π_{w∈I} P(w) / Z, with Z = Σ_{I'∈ℐ} Π_{w∈I'} P(w).
Thus, the probability of an interpretation containing several free objects is obtained
by assuming that these objects occurred in the scene independently of each other.
Given a scene S, recognition is formulated as the task of maximizing D over all the
I's in ℐ that are interpretations of S.
We now illustrate the use of D on our two examples. In the binary-image example
with model M = {p, hl, vl}, we use a parameter q, 0 ≤ q ≤ 1, to adjust the prior
probability of linelets. Thus, Q_prim(B_prim(w)) = (1 - q)/N² for primitives, and
Q_hl((p, p, (1,0))) = Q_vl((p, p, (0,1))) = q/2 for linelets. It is easily seen that regardless
of the normalizing constant Z, the binding energy of two adjacent pixels into a
linelet is ε_hl = ε_vl = log2(q/2) - log2[((1 - q)/N²)² N(N - 1)]. Interestingly, as long as
q ≠ 0 and q ≠ 1, the binding energy, for large N, is approximately 2 log2 N, which
is independent of q. Thus, the linelet interpretation is "incomparably" more likely
than the independent occurrence of two primitives at neighboring positions. We
leave it to the reader to construct a prior P for the model {p, hl, vl, c}, e.g. by
distributing the Q-mass evenly between all composition rules. Finally, in Laplace's
Table, if there are M equally likely non-primitive labels (say city names) and q is
their total mass, the binding energy for Constantinople is ε_Cons... = log2[q/(M(N - 13))] - log2[(1 - q)/(26N)]^14, and the "regular" cause is again "incomparably" more likely.
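These binding energies are easy to check numerically. The sketch below is our own illustration (the values of N, q, and M are assumptions, and the formulas follow the reconstruction above):

    from math import log2

    N, q, M = 256, 0.5, 50        # assumed image size, linelet mass, number of city labels

    # Binary-image linelet: eps_hl = log2(Q_hl) - log2(P x P of the binding value).
    q_prim = (1.0 - q) / N**2
    eps_hl = log2(q / 2) - log2(q_prim**2 * N * (N - 1))

    # Laplace's Table: eps_Cons = log2 P(w) - 14 * log2 of a free primitive.
    p_prim = (1.0 - q) / (26 * N)
    eps_cons = log2(q / (M * (N - 13))) - 14 * log2(p_prim)

    # eps_hl grows like 2*log2(N) up to a constant independent of N.
    print(eps_hl, 2 * log2(N))    # about 16.0 and 16 for N = 256, q = 0.5
    print(eps_cons)               # a very large positive binding energy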
There are several advantages to this reformulation from codewords into probabilities
using the Q-parameters. First, the Q-parameters can in principle be adjusted to
better account for a particular world of images. Second, we get an explicit formula
for the binding energy, (namely log2 (Q / P x P)). Of course, we need to evaluate
the product probability P x P, and this can be highly non-trivial-one approach
is through sampling, as demonstrated in Potter (1997). Finally, this formulation
is well-suited for parameter estimation: the Q's, which are the parameters of the
distribution P, are indeed observables, i.e., directly available empirically.
5
Concluding remarks
The approach described here was applied by X. Xing to the recognition of "online" handwritten characters, using a binary-image-type model as above, enriched
¹This is actually an implicit definition. Under reasonable conditions, it is well defined; see Geman et al. (1996).
with higher-level labels including curved lines, straight lines, angles, crossings, T-junctions, L-junctions (right angles), and the 26 letters of the alphabet. In such
a model, the search for an optimal solution cannot be done exhaustively. We experimented with a number of strategies, including a two-step algorithm which first
generates all possible objects in the scene, and then selects the "best" objects, Le.,
the objects with highest total binding energy, using a greedy method, to yield a final
scene interpretation. (The total binding energy of w is the sum of the binding energies ε_l over all the composition rules l used in the composition of w. Equivalently,
the total binding energy is the log-likelihood ratio log2(P(w)/Π_i P(w_i)), where the
product is taken over all the primitives w_i covered by w.)
The first step of the algorithm typically results in high-level objects partly overlapping on the set of primitives they cover, i.e., competing for the interpretation of
shared primitives. Ambiguity is thus propagated in a "bottom-up" fashion. The
ambiguity is resolved in the second "top-down" pass, when high-level composition
rules are used to select the best compositions, at all levels including the lower ones.
A detailed account of our experiments will be given elsewhere. We found the results quite encouraging, particularly in view of the potential scope of the approach.
In effect, we believe that this approach is in principle capable of addressing unrestricted vision problems, where images are typically very ambiguous at lower levels
for a variety of reasons-including occlusion and mutual overlap of objects-hence
purely bottom-up segmentation is impractical.
Turning now to biological implications, note that dynamic binding in the nervous
system has been a subject of intensive research and debate in the last decade. Most
interesting in the present context is the suggestion, first clearly articulated by von
der Malsburg (1981), that composition may be performed thanks to a dual mechanism of accurate synchronization of spiking activity-not necessarily relying on
periodic firing-and fast reversible synaptic plasticity. If there is some neurobiological truth to the model described in the present paper, the binding mechanism
proposed by von der Malsburg would appear to be an attractive implementation.
In effect, the use of fine temporal structure of neural activity opens up a large realm
of possible high-order codes in networks of neurons.
In the present model, constituents always bind in the service of a new object, an
operation one may refer to as triangular binding. Composite objects can engage in
further composition, thus giving rise to arbitrarily deep tree-structured constructs.
Physiological evidence of triangular binding in the visual system can be found in Sillito et al. (1994); Damasio (1989) describes an approach derived from neuroanatomical data and lesion studies that is largely consistent with the formalism described
here.
An important requirement for the neural representation of the tree-structured objects used in our model is that the doing and undoing of links operating on some
constituents, say Wi and W2, while affecting in some useful way the high-order patterns that represent these objects, leaves these patterns, as representations of Wi and
W2, intact. A family of biologically plausible patterns that would appear to satisfy
this requirement is provided by synfire patterns (Abeles 1991). We hypothesized
elsewhere (Bienenstock 1991, 1994, 1996) that synfire chains could be dynamically
bound via weak synaptic couplings; such couplings would synchronize the wave-like
activities of two synfire chains, in much the same way as coupled oscillators lock
their phases. Recursiveness of compositionality could, in principle, arise from the
further binding of these composite structures.
Acknowledgements
Supported by the Army Research Office (DAAL03-92-G-0115), the National Science
Foundation (DMS-9217655), and the Office of Naval Research (N00014-96-1-0647).
References
Abeles, M. (1991) Corticonics: Neuronal circuits of the cerebral cortex, Cambridge
University Press.
Bienenstock, E. (1991) Notes on the growth of a composition machine, in Proceedings of the Royaumont Interdisciplinary Workshop on Compositionality in
Cognition and Neural Networks-I, D. Andler, E. Bienenstock, and B. Laks,
Eds., pp. 25--43. (1994) A Model of Neocortex. Network: Computation in
Neural Systems, 6:179-224. (1996) Composition, In Brain Theory: Biological Basis and Computational Principles, A. Aertsen and V. Braitenberg
eds., Elsevier, pp 269-300.
Bienenstock, E., and Geman, S. (1995) Compositionality in Neural Systems, In
The Handbook of Brain Theory and Neural Networks, M.A. Arbib ed.,
M.I.T./Bradford Press, pp 223-226.
Cover, T.M., and Thomas, J.A. (1991) Elements of Information Theory, Wiley
and Sons, New York.
Damasio, A. R. (1989) Time-locked multiregional retroactivation: a systems-level
proposal for the neural substrates of recall and recognition, Cognition,
33:25-62.
Fodor, J .A., and Pylyshyn, Z.W. (1988) Connectionism and cognitive architecture:
a critical analysis, Cognition, 28:3-71.
Geman, S., Potter, D., and Chi, Z. (1996) Compositional Systems, Technical Report, Division of Applied Mathematics, Brown University.
Laplace, P.S. (1812) Esssai philosophique sur les probabiliUs. Translation of Truscott and Emory, New York, 1902.
Potter, D. (1997) Compositional Pattern Recognition, PhD Thesis, Division of
Applied Mathematics, Brown University, In preparation.
Rissanen, J. (1989) Stochastic Complexity in Statistical Inquiry World Scientific
Co, Singapore.
Sillito, A.M., Jones, H.E, Gerstein, G.L., and West, D.C. (1994) Feature-linked
synchronization of thalamic relay cell firing induced by feedback from the
visual cortex, Nature, 369: 479-482
von der Malsburg, C. (1981) The correlation theory of brain function. Internal report 81-2, Max-Planck Institute for Biophysical Chemistry, Dept. of Neurobiology, Gottingen, Germany. (1987) Synaptic plasticity as a basis of
brain organization, in The Neural and Molecular Bases of Learning (J.P.
Changeux and M. Konishi, Eds.), John Wiley and Sons, pp. 411--432.
359 | 1,328 | MIMIC: Finding Optima by Estimating
Probability Densities
Jeremy S. De Bonet, Charles L. Isbell, Jr., Paul Viola
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
In many optimization problems, the structure of solutions reflects
complex relationships between the different input parameters. For
example, experience may tell us that certain parameters are closely
related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on
particular values. Any search of the cost landscape should take
advantage of these relationships. We present MIMIC, a framework
in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this
structure is derived. We use knowledge of this structure to guide a
randomized search through the solution space and, in turn, to refine our estimate ofthe structure. Our technique obtains significant
speed gains over other randomized optimization procedures.
1
Introduction
Given some cost function C(x) with local minima, we may search for the optimal
x in many ways. Variations of gradient descent are perhaps the most popular.
When most of the minima are far from optimal, the search must either include a
brute-force component or incorporate randomization. Classical examples include
Simulated Annealing (SA) and Genetic Algorithms (GAs) (Kirkpatrick, Gelatt and
Vecchi, 1983; Holland, 1975). In all cases, in the process of optimizing C(x) many
thousands or perhaps millions of samples of C( x) are evaluated. Most optimization
algorithms take these millions of pieces of information, and compress them into
a single point x-the current estimate of the solution (one notable exception are
GAs to which we will return shortly). Imagine splitting the search process into
two parts, both taking t/2 time steps. Both parts are structurally identical: taking
a description of CO, they start their search from some initial point. The sole
benefit enjoyed by the second part of the search over the first is that the initial
point is perhaps closer to the optimum. Intuitively, there must be some additional
information that could be learned from the first half of the search, if only to warn
the second half about avoidable mistakes and pitfalls.
We present an optimization algorithm called Mutual-Information-Maximizing Input Clustering (MIMIC). It attempts to communicate information about the cost
function obtained from one iteration of the search to later iterations of the search
directly. It does this in an efficient and principled way. There are two main components of MIMIC: first, a randomized optimization algorithm that samples from
those regions of the input space most likely to contain the minimum for CO; second,
an effective density estimator that can be used to capture a wide variety of structure on the input space, yet is computable from simple second order statistics on
the data. MIMIC's results on simple cost functions indicate an order of magnitude
improvement in performance over related approaches. Further experiments on a
k-color map coloring problem yield similar improvements.
2
Related Work
Many well known optimization procedures neither represent nor utilize the structure of the optimization landscape. In contrast, Genetic Algorithms (GA) attempt
to capture this structure by an ad hoc embedding of the parameters onto a line (the
chromosome). The intent of the crossover operation in standard genetic algorithms
is to preserve and propagate a group of parameters that might be partially responsible for generating a favorable evaluation. Even when such groups exist, many of
the offspring generated do not preserve the structure of these groups because the
choice of crossover point is random.
In problems where the benefit of a parameter is completely independent of the
value of all other parameters, the population simply encodes information about the
probability distribution over each parameter. In this case, the crossover operation
is equivalent to sampling from this distribution; the more crossovers the better the
sample. Even in problems where fitness is obtained through the combined effects of
clusters of inputs, the GA crossover operation is beneficial only when its randomly
chosen clusters happen to closely match the underlying structure of the problem.
Because of the rarity of such a fortuitous occurrence, the benefit of the crossover
operation is greatly diminished. As as result, GAs have a checkered history in
function optimization (Baum, Boneh and Garrett, 1995; Lang, 1995). One of our
goals is to incorporate insights from GAs in a principled optimization framework.
There have been other attempts to capture the advantages of GAs. Population
Based Incremental Learning (PBIL) attempts to incorporate the notion of a candidate population by replacing it with a single probability vector (Baluja and Caruana, 1995). Each element of the vector is the probability that a particular bit in a
solution is on. During the learning process, the probability vector can be thought
of as a simple model of the optimization landscape. Bits whose values are firmly
established have probabilities that are close to lor O. Those that are still unknown
have probabilities close to 0.5 .
When it is the structure of the components of a candidate rather than the particular
values of the components that determines how it fares, it can be difficult to move
PBIL's representation towards a viable solution. Nevertheless, even in these sorts
of problems PBIL often out-performs genetic algorithms because those algorithms
are hindered by the fact that random crossovers are infrequently beneficial.
A very distinct, but related technique was proposed by Sabes and Jordan for a
reinforcement learning task (Sabes and Jordan, 1995). In their framework, the
learner must generate actions so that a reinforcement function can be completely
explored. Simultaneously, the learner must exploit what it has learned so as to
optimize the long-term reward. Sabes and Jordan chose to construct a Boltzmann
distribution from the reinforcement function: p(x) = exp(R(x)/T) / Z_T, where R(x) is the
reinforcement function for action x, T is the temperature, and Z_T is a normalization
factor. They use this distribution to generate actions. At high temperatures this
distribution approaches the uniform distribution, and results in random exploration
of RO. At low temperatures only those actions which garner large reinforcement
are generated. By reducing T, the learner progresses from an initially randomized
search to a more directed search about the true optimal action. Interestingly, their
estimate for p( x) is to some extent a model of the optimization landscape which is
constructed during the learning process. To our knowledge, Sabes and Jordan have
neither attempted optimization over high dimensional spaces, nor attempted to fit
p( x) with a complex model.
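For concreteness, a Boltzmann distribution of this kind can be sampled as follows (a small sketch of the idea, not code from Sabes and Jordan; the toy reinforcement function and action set are our own):

    import random
    from math import exp

    def boltzmann_sample(actions, R, T):
        # p(x) = exp(R(x)/T) / Z_T over a finite action set.
        weights = [exp(R(a) / T) for a in actions]
        z = sum(weights)
        return random.choices(actions, weights=[w / z for w in weights])[0]

    R = lambda a: -abs(a - 7)                    # toy reinforcement, peaked at action 7
    actions = range(10)
    print(boltzmann_sample(actions, R, T=10.0))  # high T: nearly uniform exploration
    print(boltzmann_sample(actions, R, T=0.1))   # low T: almost always action 7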
3
MIMIC
Knowing nothing else about C(x) it might not be unreasonable to search for its
minimum by generating points from a uniform distribution over the inputs p( x).
Such a search allows none of the information generated by previous samples to effect
the generation of subsequent samples. Not surprisingly, much less work might be
necessary if samples were generated from a distribution, p^θ(x), that is uniformly
distributed over those x's where C(x) ≤ θ and has a probability of 0 elsewhere. For
example, if we had access to p^θ_M(x) for θ_M = min_x C(x), a single sample would be
sufficient to find an optimum.
This insight suggests a process of successive approximation: given a collection of
points for which C(x) ≤ θ_0, a density estimator for p^θ_0(x) is constructed. From this
density estimator additional samples are generated, a new threshold established,
θ_1 = θ_0 - ε, and a new density estimator created. The process is repeated until the
values of C(x) cease to improve.
The MIMIC algorithm begins by generating a random population of candidates
chosen uniformly from the input space. From this population the median fitness
is extracted and is denoted θ_0. The algorithm then proceeds (a sketch of this loop is given after the list):
1. Update the parameters of the density estimator of p^θ_i(x) from a sample.
2. Generate more samples from the distribution p^θ_i(x).
3. Set θ_{i+1} equal to the Nth percentile of the data. Retain only the points less than θ_{i+1}.
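A minimal sketch of this loop follows (our own illustrative Python; the generic DensityEstimator object with fit/sample methods is a placeholder standing in for the pairwise model of Section 4):

    import numpy as np

    def mimic(cost, n_bits, estimator, n_samples=200, percentile=50, n_iters=100):
        # Start from a uniform random population over {0,1}^n_bits.
        samples = np.random.randint(0, 2, size=(n_samples, n_bits))
        costs = np.apply_along_axis(cost, 1, samples)
        theta = np.median(costs)                      # theta_0: median fitness
        for _ in range(n_iters):
            keep = samples[costs <= theta]            # points with C(x) <= theta_i
            estimator.fit(keep)                       # 1. update the density estimator
            samples = estimator.sample(n_samples)     # 2. draw new samples
            costs = np.apply_along_axis(cost, 1, samples)
            theta = np.percentile(costs, percentile)  # 3. new, lower threshold
        return samples[np.argmin(costs)]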
The validity of this approach is dependent on two critical assumptions: p^θ(x) can
be successfully approximated with a finite amount of data; and D(p^{θ-ε}(x) || p^θ(x)) is
small enough so that samples from p^θ(x) are also likely to be samples from p^{θ-ε}(x)
(where D(p||q) is the Kullback-Leibler divergence between p and q). Bounds on
these conditions can be used to prove convergence in a finite number of successive
approximation steps.
The performance of this approach is dependent on the nature of the density approximator used. We have chosen to estimate the conditional distributions for every pair
of parameters in the representation, a total of O( n 2 ) numbers. In the next section
we will show how we use these conditionals distributions to construct a joint distribution which is closest in the KL sense to the true joint distribution. Such an
approximator is capable of representing clusters of highly related parameters. While
this might seem similar to the intuitive behavior of crossover, this representation is
strictly more powerful. More importantly, our clusters are learned from the data,
and are not pre-defined by the programmer.
4
Generating Events from Conditional Probabilities
The joint probability distribution over a set of random variables, X = {X_i}, is:
p(X) = p(X_1|X_2, ..., X_n) p(X_2|X_3, ..., X_n) ... p(X_{n-1}|X_n) p(X_n).   (1)
Given only pairwise conditional probabilities, p(X_i|X_j), and unconditional probabilities, p(X_i), we are faced with the task of generating samples that match as closely
as possible the true joint distribution, p(X). It is not possible to capture all possible
joint distributions of n variables using only the unconditional and pairwise conditional probabilities; however, we would like to describe the true joint distribution as
closely as possible. Below, we derive an algorithm for choosing such a description.
Given a permutation of the numbers between 1 and n, π = i_1 i_2 ... i_n, we define a
class of probability distributions, p_π(X):
p_π(X) = p(X_{i_1}|X_{i_2}) p(X_{i_2}|X_{i_3}) ... p(X_{i_{n-1}}|X_{i_n}) p(X_{i_n}).   (2)
The distribution p_π(X) uses π as an ordering for the pairwise conditional probabilities. Our goal is to choose the permutation π that maximizes the agreement between
p_π(X) and the true distribution p(X). The agreement between two distributions
can be measured by the Kullback-Leibler divergence:
D(p || p_π) = ∫ p [log p - log p_π] dX
= E_p[log p] - E_p[log p_π]
= -h(p) - E_p[log p(X_{i_1}|X_{i_2}) p(X_{i_2}|X_{i_3}) ... p(X_{i_{n-1}}|X_{i_n}) p(X_{i_n})]
= -h(p) + h(X_{i_1}|X_{i_2}) + h(X_{i_2}|X_{i_3}) + ... + h(X_{i_{n-1}}|X_{i_n}) + h(X_{i_n}).
This divergence is always non-negative, with equality only in the case where p_π(X)
and p(X) are identical distributions. The optimal π is defined as the one that
minimizes this divergence. For a distribution that can be completely described by
pairwise conditional probabilities, the optimal π will generate a distribution that
will be identical to the true distribution. Insofar as the true distribution cannot be
captured this way, the optimal p_π(X) will diverge from that distribution.
The first term in the divergence does not depend on π. Therefore, the cost function,
J_π(X), we wish to minimize is:
J_π(X) = h(X_{i_1}|X_{i_2}) + h(X_{i_2}|X_{i_3}) + ... + h(X_{i_{n-1}}|X_{i_n}) + h(X_{i_n}).   (3)
The optimal π is the one that produces the lowest pairwise entropy with respect
to the true distribution. By searching over all n! permutations, it is possible to
determine the optimal π. In the interests of computational efficiency, we employ a
straightforward greedy algorithm to pick a permutation:
1. i_n = arg min_j h(X_j).
2. i_k = arg min_j h(X_j | X_{i_{k+1}}), where j ≠ i_{k+1}, ..., i_n and k = n-1, n-2, ..., 2, 1,
where h(·) is the empirical entropy. Once a distribution is chosen, generating samples
is also straightforward:
1. Choose a value for X_{i_n} based on its empirical probability P(X_{i_n}).
2. For k = n-1, n-2, ..., 2, 1, choose element X_{i_k} based on the empirical
conditional probability P(X_{i_k} | X_{i_{k+1}}).
The first algorithm runs in time O(n 2 ) and the second in time O(n 2 ).
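A compact sketch of both procedures for binary variables is given below (our own illustrative implementation; the entropies are simple plug-in estimates from a sample matrix `data` of shape (num_samples, n)):

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def marginal_entropy(data, j):
        p1 = data[:, j].mean()
        return entropy(np.array([p1, 1 - p1]))

    def conditional_entropy(data, j, k):
        # h(X_j | X_k) = h(X_j, X_k) - h(X_k), from empirical counts.
        joint = np.array([[np.mean((data[:, j] == a) & (data[:, k] == b))
                           for b in (0, 1)] for a in (0, 1)])
        return entropy(joint.ravel()) - marginal_entropy(data, k)

    def greedy_permutation(data):
        n = data.shape[1]
        remaining = set(range(n))
        root = min(remaining, key=lambda j: marginal_entropy(data, j))   # i_n
        order, _ = [root], remaining.remove(root)
        while remaining:
            prev = order[-1]                                             # i_{k+1}
            nxt = min(remaining, key=lambda j: conditional_entropy(data, j, prev))
            order.append(nxt)
            remaining.remove(nxt)
        return order[::-1]                                               # i_1, ..., i_n

    def sample_chain(data, order, rng=np.random):
        # Sample X_{i_n} from its marginal, then walk the chain backwards.
        x = np.zeros(len(order), dtype=int)
        x[order[-1]] = rng.rand() < data[:, order[-1]].mean()
        for idx in range(len(order) - 2, -1, -1):
            i_k, i_next = order[idx], order[idx + 1]
            rows = data[data[:, i_next] == x[i_next]]
            p1 = rows[:, i_k].mean() if len(rows) else 0.5
            x[i_k] = rng.rand() < p1
        return x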
5
Experiments
To measure the performance of MIMIC, we performed three benchmark experiments
and compared our results with those obtained using several standard optimization
algorithms.
We will use four algorithms in our comparisons:
1. MIMIC - the algorithm above with 200 samples per iteration
2. PBIL - standard population based incremental learning
3. RHC - randomized hill climbing
4. GA - a standard genetic algorithm with single crossover and 10%
mutation rate
5.1
Four Peaks
The Four Peaks problem is taken from (Baluja and Caruana, 1995). Given an
N -dimensional input vector X, the four peaks evaluation function is defined as:
f(X, T) = max[tail(0, X), head(1, X)] + R(X, T)   (4)
where
tail(b, X) = number of trailing b's in X   (5)
head(b, X) = number of leading b's in X   (6)
R(X, T) = N if tail(0, X) > T and head(1, X) > T, and 0 otherwise.   (7)
There are two global maxima for this function. They are achieved either when there
are T + 1 leading 1's followed by all 0's or when there are T + 1 trailing 0's preceded
by all 1's. There are also two suboptimal local maxima that occur with a string
of all 1's or all 0's. For large values of T, this problem becomes increasingly more
difficult because the basin of attraction for the inferior local maxima becomes larger.
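The Four Peaks function is straightforward to code; the sketch below (our own, assuming X is a list of 0/1 values) mirrors equations (4)-(7):

    def head(b, x):
        # Number of leading b's in x.
        n = 0
        for v in x:
            if v != b:
                break
            n += 1
        return n

    def tail(b, x):
        # Number of trailing b's in x.
        return head(b, list(reversed(x)))

    def four_peaks(x, t):
        n = len(x)
        reward = n if (tail(0, x) > t and head(1, x) > t) else 0
        return max(tail(0, x), head(1, x)) + reward

    # A global optimum: t+1 leading 1's followed by 0's scores (n - t - 1) + n.
    n, t = 40, 4
    x = [1] * (t + 1) + [0] * (n - t - 1)
    print(four_peaks(x, t))   # 35 + 40 = 75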
Results for running the algorithms are shown in figure 1. In all trials, T was set
to be 10% of N, the total number of inputs . The MIMIC algorithm consistently
maximizes the function with approximately one tenth the number of evaluations
required by the second best algorithm.
[Figure 1 plot: number of cost-function evaluations (y-axis, roughly 200-1200) required by MIMIC, PBIL, RHC, and GA versus the number of inputs (x-axis, 40-80).]
Figure 1: Number of evaluations of the Four-Peak cost function for different algorithms plotted for a variety of problem sizes.
5.2
Six Peaks
The Six Peaks problem is a slight variation on Four Peaks where
R(X, T) = N if [tail(0, X) > T and head(1, X) > T] or [tail(1, X) > T and head(0, X) > T], and 0 otherwise.   (8)
This function has two additional global maxima where there are T + 1 leading O's
followed by all 1 's or when there are T + 1 trailing 1 's preceded by all O's. In this
case, it is not the values of the candidates that is important, but their structure:
the first T + 1 positions should take on the same value, the last T + 1 positions
should take on the same value, these two groups should take on different values,
and the middle positions should take on all the same value.
Results for this problem are shown in figure 2. As might be expected, PBIL performed worse than on the Four Peak problem because it tends to oscillate in the
middle of the space while contradictory signals pull it back and forth. The random
crossover operation of the G A occasionally was able to capture some of the underlying structure, resulting in an improved relative performance of the GA. As we
expected, the MIMIC algorithm was able to capture the underlying structure of the
problem, and combine information from all the maxima. Thus MIMIC consistently
maximizes the Six Peaks function with approximately one fiftieth the number of
evaluations required by the other algorithms.
5.3
Max K-Coloring
A graph is K-Colorable if it is possible to assign one of k colors to each of the
nodes of the graph such that no adjacent nodes have the same color. Determining
whether a graph is K-Colorable is known to be NP-Complete. Here, we define
Max K-Coloring to be the task of finding a coloring that minimizes the number of
adjacent pairs colored the same.
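A direct encoding of this cost (our own sketch; the graph is an assumed edge list) is:

    def k_coloring_cost(coloring, edges):
        # Number of adjacent pairs that received the same color.
        return sum(1 for (u, v) in edges if coloring[u] == coloring[v])

    # Hypothetical 4-node cycle graph with 2 colors:
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(k_coloring_cost([0, 1, 0, 1], edges))   # 0: a proper 2-coloring
    print(k_coloring_cost([0, 0, 1, 1], edges))   # 2 conflicting edges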
Results for this problem are shown in figure 2. We used a subset of graphs with
a single solution (up to permutations of color) so that the optimal solution is dependent only on the structure of the parameters. Because of this, PBIL performs
poorly. GA's perform better because any crossover point is representative of some of
the underlying structure of the graphs used. Finally, MIMIC performs best because
[Figure 2 plots: number of cost-function evaluations required by MIMIC, PBIL, RHC, and GA versus the number of inputs, for the Six Peaks problem (left) and the Max K-Coloring problem (right).]
Figure 2: Number of evaluations of the Six-Peak cost function (left) and the K-Color
cost function (right) for a variety of problem sizes.
it is able to capture all of the structural regularity within the inputs.
6
Conclusions
We have described MIMIC, a novel optimization algorithm that converges faster
and more reliably than several other existing algorithms. MIMIC accomplishes this
in two ways. First, it performs optimization by successively approximating the conditional distribution of the inputs given a bound on the cost function. Throughout
this process , the optimum of the cost function becomes gradually more likely. As a
result, MIMIC directly communicates information about the cost function from the
early stages to the later stages of the search. Second, MIMIC attempts to discover
common underlying structure about optima by computing second-order statistics
and sampling from a distribution consistent with those statistics.
Acknowledgments
In this research, Jeremy De Bonet is supported by the DOD Multidisciplinary Research Program of the University Research Initiative, Charles Isbell by a fellowship
granted by AT&T Labs-Research, and Paul Viola by Office of Naval Research Grant
No. N00014-96-1-0311. Greg Galperin helped in the preparation of this paper.
References
Baluja, S. and Caruana, R. (1995). Removing the genetics from the standard genetic
algorithm. Technical report, Carnegie Mellon Univerisity.
Baum, E. B., Boneh, D., and Garrett, C. (1995). Where genetic algorithms excel. In Proceedings of the Conference on Computational Learning Theory, New York. Association
for Computing Machinery.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. The Michigan University Press.
Kirkpatrick, S., Gelatt, C., and Vecchi, M . (1983). Optimization by Simulated Annealing.
Science, 220(4598):671-680.
Lang, K. (1995). Hill climbing beats genetic search on a boolean circuit synthesis problem
of koza's. In Twelfth International Conference on Machine Learning.
Sabes, P. N. and Jordan, M. 1. (1995). Reinforcement learning by probability matching . In
David S. Touretzky, M. M. and Perrone, M., editors, Advances in Neural Information
Processing, volume 8, Denver 1995. MIT Press, Cambridge.
360 | 1,329 | ARTEX: A Self-Organizing Architecture
for Classifying Image Regions
Stephen Grossberg and James R. Williamson
{steve, jrw}@cns.bu.edu
Center for Adaptive Systems and
Department of Cognitive and Neural Systems
Boston University
677 Beacon Street,
Boston, MA 02215
Abstract
A self-organizing architecture is developed for image region classification. The system consists of a preprocessor that utilizes multiscale filtering, competition, cooperation, and diffusion to compute a
vector of image boundary and surface properties, notably texture
and brightness properties. This vector inputs to a system that
incrementally learns noisy multidimensional mappings and their
probabilities. The architecture is applied to difficult real-world
image classification problems, including classification of synthetic aperture radar and natural texture images, and outperforms a
recent state-of-the-art system at classifying natural textures.
1
INTRODUCTION
Automatic processing of visual scenes often begins by detecting regions of an image
with common values of simple local features, such as texture, and mapping the pattern of feature activation into a predicted region label. We develop a self-organizing
neural architecture, called the ARTEX algorithm, for automatically extracting a
novel and effective array of such features and mapping them to output region labels. ARTEX is made up of biologically motivated networks, the Boundary Contour
System and Feature Contour System (BCS/FCS) networks for visual feature extraction (Cohen & Grossberg, 1984; Grossberg & Mingolla, 1985a, 1985b; Grossberg
& Todorovic, 1988; Grossberg, Mingolla, & Williamson, 1995), and the Gaussian
ARTMAP (GAM) network for classification (Williamson, 1996).
ARTEX is first evaluated on a difficult real-world task, classifying regions of synthetic aperture radar (SAR) images, where it reliably achieves high resolution (single
pixel) classification results, and creates accurate probability maps for its class predictions. ARTEX is then evaluated on classification of natural textures, where it
outperforms the texture classification system in Greenspan, Goodman, Chellappa,
& Anderson (1994) using comparable preprocessing and training conditions.
2
FEATURE EXTRACTION NETWORKS
Filled-in surface brightness. Regions of interest in an image can often be segmented based on first-order differences in pixel intensity. An improvement over raw
pixel intensities can be obtained by compensating for variable illumination of the
image to yield a local brightness feature. A further improvement over local brightness features can be obtained with a surface brightness feature, which is obtained by
smoothing local brightness values when they belong to the same region, while maintaining differences when they belong to different regions. Such a procedure tends
to maximize the separability of different regions in brightness space by minimizing
within-region variance while maximizing between-region variance.
In Grossberg et al. (1995) a multiple-scale BCS/FCS network was used to process
noisy SAR images for use by human operators by normalizing and segmenting the
SAR intensity distributions and using these transformed data to fill-in surface representations that smooth over noise while maintaining informative structures. The
single-scale BCS/FCS used here employs the middle-scale BCS/FCS used in that
study. The BCS/FCS equations and parameters are fully described in Grossberg
et al. (1995). The BCS/FCS is herein applied to SAR images that are spatially
consolidated to half the size (in each dimension) of the images used in that study,
and so is comparable to the large-scale BCS/FCS used there.
Multiple-scale oriented contrast. In addition to surface brightness, another
image property that is useful for region segmentation is texture. One popular approach for analyzing texture, for which there is a great deal of supporting biological
and computational evidence, decomposes an image, at each image location, into a
set of energy measures at different oriented spatial frequencies. This may be done
by applying a bank of orientation-selective bandpass filters followed by simple nonlinearities and spatial pooling, to extract multiple-scale oriented contrast features.
The early stages of the BCS, which define a Static Oriented Contrast (or SOC)
filtering network, carry out these operations, and variants of them have been used
in many texture segregation algorithms (Bergen, 1991; Greenspan et al., 1994).
Here, the SOC network produces K = 4 oriented contrast features at each of four spatial scales. The first stage of the SOC network is a shunting on-center off-surround
network that compensates for variable illumination, normalizes, and computes ratio
contrasts in the image. Given an input image, I, the output at pixel (i, j) and scale
g in the first stage of the SOC network is
a^g_ij = [I_ij - (G^g * I)_ij - D E] / [D + I_ij + (G^g * I)_ij],   (1)
where E = 0.5, and G^g is a Gaussian kernel defined by
G^g_ij(p, q) = (1/(2π σ_g²)) exp[-((i - p)² + (j - q)²)/(2σ_g²)],   (2)
with σ_g = 2^g, for the spatial scales g = 0, 1, 2, 3. The value of D is determined by
the range of pixel intensities in the input image. We use D=2000 for SAR images
and D = 255 for natural texture images. The next stage obtains a local measure of
orientational contrast by convolving the output of (1) with Gabor filters, H^g_k, which
are defined at four orientations, and then full-wave rectifying the result:
b^g_ijk = |(H^g_k * a^g)_ij|.   (3)
The horizontal Gabor filter (k = 0) is defined by:
H^g_ij0(p, q) = G^g_ij(p, q) · sin[0.75π (j - q)/σ_g].   (4)
Orientational contrast responses may exhibit high spatial variability. A smooth,
reliable measure of orientational contrast is obtained by spatially pooling the responses within the same orientation:
c^g_ijk = (G^g * b^g_k)_ij.   (5)
Equation (5) yields an orientationally variant, or OV, representation of oriented
contrast. A further optional stage yields an orientationally invariant, or OI, representation by shifting the oriented responses at each scale into a canonical ordering,
to yield a common representation for rotated versions of the same texture:
d^g_ijk = c^g_ijk', where k' = [k + arg max_l (c^g_ijl)] mod K.   (6)
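A rough sketch of the first SOC stage in code is given below (our own illustration; SciPy's gaussian_filter is used as a stand-in for the kernels G^g, and the image values are assumed to lie in 0-255):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def soc_contrast(image, D=255.0, E=0.5, sigmas=(1, 2, 4, 8)):
        # Shunting on-center off-surround stage of equation (1):
        # a^g = (I - G^g*I - D*E) / (D + I + G^g*I), with sigma_g = 2^g.
        img = image.astype(float)
        maps = []
        for sigma in sigmas:
            surround = gaussian_filter(img, sigma)
            maps.append((img - surround - D * E) / (D + img + surround))
        return maps   # one ratio-contrast map per spatial scale

    # Hypothetical usage on a natural-texture image:
    img = np.random.randint(0, 256, size=(64, 64))
    contrast_maps = soc_contrast(img, D=255.0)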
3
CLASSIFICATION NETWORK
GAM is a constructive, incremental-learning network which self-organizes internal
category nodes that learn a Gaussian mixture model of the M-dimensional input
space, as well as mappings to output class labels. Here, mappings are learned
from 17-dimensional input vectors (composed of a filled-in brightness feature and
16 oriented contrast features) to a class label representing a shadow, road, grass, or
tree region. The jth category's receptive field is parametrized by two M-dimensional
vectors: its mean, μ_j, and standard deviation, σ_j. A scalar, n_j, also represents the
node's cumulative credit. Category j is activated only if its match, G_j, satisfies
the match criterion, which is determined by a vigilance parameter, ρ. Match is a
measure, obtained from the category's unit-height Gaussian distribution, of how
close an input, X, is to the category's mean, relative to its standard deviation:
G_j = exp( -(1/2) Σ_{i=1}^{M} ((x_i - μ_ji)/σ_ji)² ).   (7)
The match criterion is a threshold: the category is activated only if G_j > ρ; otherwise, the category is reset. The input strength, g_j, is determined by
g_j = (n_j / Π_{i=1}^{M} σ_ji) G_j  if G_j > ρ;   g_j = 0  otherwise.   (8)
The category's activation, y_j, which represents P(j|x), is obtained by
y_j = g_j / (D + Σ_{l=1}^{N} g_l),   (9)
where N is the number of categories and D is a shunting decay term that maintains
where N is the number of categories and D is a shunting decay term that maintains
sensitivity to the input magnitude in the activation level (D = 0.01 here).
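For illustration, the match and activation computations of equations (7)-(9) can be written directly (our own NumPy sketch; the array shapes are assumptions, with mu and sigma holding one row per category):

    import numpy as np

    def gam_activations(x, mu, sigma, n, rho, D=0.01):
        # Equation (7): unit-height Gaussian match for every category.
        G = np.exp(-0.5 * np.sum(((x - mu) / sigma) ** 2, axis=1))
        # Equation (8): input strength, zeroed where the match criterion fails.
        g = np.where(G > rho, n / np.prod(sigma, axis=1) * G, 0.0)
        # Equation (9): normalized activations y_j, approximating P(j | x).
        y = g / (D + g.sum())
        return G, g, y

    # Hypothetical use: 3 categories over a 17-dimensional feature vector.
    M, N = 17, 3
    x = np.random.rand(M)
    mu, sigma, n = np.random.rand(N, M), np.full((N, M), 0.5), np.ones(N)
    print(gam_activations(x, mu, sigma, n, rho=0.0)[2])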
When category j is first chosen, it learns a permanent mapping to the output class,
k, associated with the current training sample. All categories that map to the same
class prediction belong to the same ensemble: j E E(k). Each time an input is
presented, the categories in each ensemble sum their activations to generate a net
probability estimate, Zk, of the class prediction k that they share:
z_k = Σ_{j∈E(k)} y_j.   (10)
The system prediction, K, is determined by the maximum probability estimate,
K = arg max_k (z_k),   (11)
which determines the chosen ensemble. Once the class prediction K is chosen, we
obtain the category's "chosen-ensemble" activation, y*_j, which represents P(j|x, K):
y*_j = y_j / Σ_{l∈E(K)} y_l  if j ∈ E(K);   y*_j = 0  otherwise.   (12)
If K is the correct prediction, then the network resonates and learns; otherwise,
match tracking is invoked: ρ is raised to the average match of the chosen ensemble,
ρ = exp( -(1/2) Σ_{j∈E(K)} y*_j Σ_{i=1}^{M} ((x_i - μ_ji)/σ_ji)² ).   (13)
In addition, all categories in the chosen ensemble are reset. Equations (8)-(11) are
then re-evaluated. Based on the remaining non-reset categories, a new prediction
K in (11), and its corresponding ensemble, are chosen. This automatic search cycle
continues until the correct prediction is made, or until all committed categories
are reset and an uncommitted category is chosen. Upon presentation of the next
training sample, ρ is reassigned its baseline value: ρ = ρ̄. Here, ρ̄ ≈ 0.
When category j learns, nj is updated to represent the amount of training data the
node has been assigned credit for:
n_j := n_j + y*_j.   (14)
The vectors μ_j and σ_j are then updated to learn the input statistics:
μ_ji := (1 - y*_j n_j^{-1}) μ_ji + y*_j n_j^{-1} x_i,   (15)
σ_ji² := (1 - y*_j n_j^{-1}) σ_ji² + y*_j n_j^{-1} (x_i - μ_ji)².   (16)
GAM is initialized with N = 0. When a category is first chosen, N is incremented,
and the new category, indexed by J = N, is initialized with n_J = 1, μ_J = x, σ_Ji = γ,
and with a permanent mapping to the correct output class. Initializing σ_Ji = γ
is necessary to make (7) and (8) well-defined. Varying γ has a marked effect on
learning: as γ is raised, learning becomes slower, but fewer categories are created.
The input vectors are normalized to have the same standard deviation in each
dimension so that γ has the same meaning in each dimension.
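The credit and statistics updates of equations (14)-(16) amount to an incremental Gaussian update per category; a sketch follows (our own, continuing the array conventions used above, and using equation (16) as reconstructed here):

    import numpy as np

    def gam_learn(j, x, y_star, mu, sigma, n):
        # Equation (14): accumulate credit for category j.
        n[j] += y_star
        w = y_star / n[j]
        # Equation (15): move the mean toward the input.
        mu[j] = (1.0 - w) * mu[j] + w * x
        # Equation (16): update the variance toward the squared deviation
        # from the (already updated) mean.
        var = sigma[j] ** 2
        sigma[j] = np.sqrt((1.0 - w) * var + w * (x - mu[j]) ** 2)
        return mu, sigma, n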
4
SIMULATION RESULTS
Classifying SAR image regions. Figure 1 illustrates the classification results
obtained on one SAR image after training on the other eight images in the data
set. The final classification result (bottom, right) closely resembles the hand-labeled
regions (middle, left) . The caption summarizes the average results obtained on all
nine images. ARTEX learns this problem very quickly, using a small number of
self-organized categories, as shown in Figure 2 (left). The best classification result
of 84.2% correct is obtained by filling-in the probability estimates from equation
(10) within the BCS boundaries, using an FCS diffusion equation as described in
Grossberg et al. (1995). These filled-in probability estimates predict the actual
classification rates with remarkable accuracy (Figure 2, right).
Classifying natural textures. ARTEX performance is now compared to that
of a texture analysis system described in Greenspan et al. (1994), which we refer to as the "hybrid system" because it is a hybrid architecture made up of a
Figure 1: Results are shown on a 180x180 pixel SAR image, which is one of nine
images in the data set. Top row: Center/surround, first stage output (left); BCS boundaries to FCS filling-in (middle); final BCS/FCS filled-in output (right). Note that
BCS accurately localizes region boundaries, and that FCS improves appearance by
smoothing intensities within regions while maintaining sharp differences between
regions. Middle row: Hand-labeled regions corresponding to shadow, road, grass,
trees (left); Gaussian classifier results based on center/surround feature (middle,
59.6% correct), and based on filled-in feature (right, 70.7%). Note that fillingin greatly improves classification by reducing brightness variability within regions.
However, the lack of textural information results in errors, such as the misclassification of the vertical road as a shadow. Bottom row: GAM results (\gamma = 4)
based on 16 SOC features in addition to the filled-in brightness feature: using the
OV representation (left, 81.9%), using the 01 representation (middle, 83.2%), and
using filled-in 01 prediction probabilities (right, 84.2%). With the OV representation (bottom, left), the thin vertical road is misclassified as shadows because there
are no thin vertical roads in the training set. With the 01 representation, however
(bottom, middle), the road is classified correctly because the training set includes
thin roads at other orientations. Finally, the classification results are improved by
filling-in the prediction probabilities from equation (10) within the BCS boundaries,
thereby taking advantage of spatial and structural context (bottom, right).
Figure 2: Left: classification rate is plotted as a function of the number of categories
after training on different sized subsets of the SAR training data: (left-to-right)
0.01 %, 0.1%, 1%, 10%, and 100% of the training set. Right: classification rate is
plotted as a function of filled-in probability estimates.
log-Gabor pyramid representation, followed by unsupervised k-means clustering in
the feature space, followed by batch learning of mappings from clusters to output
classes using a rule-based classifier. The hybrid system uses three pyramid levels
and four orientations at each level. Each level of the pyramid is produced via three
blurring/decimation steps, resulting in an 8x8 pixel resolution. For a fair comparison, sufficient blurring/decimation was added as a postprocessing step to ARTEX
features to yield the same net amount of blurring. Both ARTEX and the hybrid
system use an OV representation for these problems because the textures are not
rotated. The first task is classification of a library of ten separate structured and
unstructured textures after training on different example images. ARTEX obtains
better performance, achieving 96.3% correct after 40 training epochs (with \gamma = 1,
34 categories) versus 94.3% for the hybrid system. Even after only one training
epoch, ARTEX achieves better results (94.9%, 23 categories). The second task
(Figure 3) is classification of a five-texture mosaic, which requires discriminating
texture boundaries, after training on examples of the five textures, plus an additional texture (sand). ARTEX achieves 93.6% correct after 40 training epochs (33
categories), and produces results which appear to be better than those produced by
the hybrid system on a similar problem (see Greenspan et al., 1994, Figure 5).
In summary, the ARTEX system demonstrates the utility of combining BCS texture and FCS brightness measures for image preprocessing. These features may
be effectively classified by the GAM network, whose self-calibrating matching and
search operations enable it to carry out fast, incremental, distributed learning of
recognition categories and their probabilities. BCS boundaries may be further used
to constrain the diffusion of these probabilities according to FCS rules to improve
prediction probability.
Acknowledgements
Stephen Grossberg was supported by the Office of Naval Research (ONR N00014-95-1-0409 and ONR N00014-95-1-0657). James Williamson was supported by the
Advanced Research Projects Agency (ONR N00014-92-J-4015), the Air Force Office
of Scientific Research (AFOSR F49620-92-J-0225 and AFOSR F49620-92-J-0334),
the National Science Foundation (NSF IRI-90-00530 and NSF IRI-90-24877), and
Figure 3: Left: mosaic of five natural textures. Right: ARTEX classification (93.6%
correct) after training on examples of five textures and an additional texture (sand).
the Office of Naval Research (ONR N00014-91-J-4100 and ONR N00014-95-1-0409).
References
Bergen, J.R. (1991). Theories of visual texture perception. In D. M. Regan (Ed.), Spatial Vision. New York: Macmillan, pp. 114-134.
Cohen, M. & Grossberg, S. (1984). Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance. Perception & Psychophysics, 36, 428-456.
Greenspan, H., Goodman, R., Chellappa, R., & Anderson, C.H. (1994). Learning texture discrimination rules in a multiresolution system. IEEE Trans. PAMI, 16, 894-901.
Grossberg, S. & Mingolla, E. (1985a). Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading. Psychological Review, 92, 173-211.
Grossberg, S. & Mingolla, E. (1985b). Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Perception & Psychophysics, 38, 141-171.
Grossberg, S., Mingolla, E., & Williamson, J. (1995). Synthetic aperture radar processing by a multiple scale neural system for boundary and surface representation. Neural Networks, 8, 1005-1028.
Grossberg, S. & Todorovic, D. (1988). Neural dynamics of 1-D and 2-D brightness perception: A unified model of classical and recent phenomena. Perception & Psychophysics, 43, 241-277.
Grossberg, S., & Williamson, J.R. (1996). A self-organizing system for classifying complex images: Natural textures and synthetic aperture radar. Technical Report CAS/CNS TR-96-002, Boston, MA: Boston University.
Williamson, J.R. (1996). Gaussian ARTMAP: A neural network for fast incremental learning of noisy multidimensional maps. Neural Networks, 9, 881-897.
361 | 133 | 519
A BACK-PROPAGATION ALGORITHM
WITH OPTIMAL USE OF HIDDEN UNITS
Yves Chauvin
Thomson-CSF, Inc
(and Psychology Department, Stanford University)
630, Hansen Way (Suite 250)
Palo Alto, CA 94306
ABSTRACT
This paper presents a variation of the back-propagation algorithm that makes optimal use of a network's hidden units by decreasing an "energy" term written as a function of the squared
activations of these hidden units. The algorithm can automatically find optimal or nearly optimal architectures necessary to
solve known Boolean functions, facilitate the interpretation of
the activation of the remaining hidden units and automatically
estimate the complexity of architectures appropriate for phonetic
labeling problems. The general principle of the algorithm can
also be adapted to different tasks: for example, it can be used to
eliminate the [0, 0] local minimum of the [-1. +1] logistic activation function while preserving a much faster convergence and
forcing binary activations over the set of hidden units.
PRINCIPLE
This paper describes an algorithm which makes optimal use of the hidden units in
a network using the standard back-propagation algorithm (Rumelhart. Hinton &
Williams, 1986). Optimality is defined as the minimization of a function of the
"energy" spent by the hidden units throughtout the network, independently of
the chosen architecture, and where the energy is written as a function of the
squared activations of the hidden units.
The standard back-propagation algorithm is a gradient descent algorithm on the
following cost function:

C = \sum_{i \in P} \sum_{j \in O} (d_{ij} - o_{ij})^2    [1]

where d is the desired output of an output unit, o the actual output, and where
the sum is taken over the set of output units O for the set of training patterns P.
The following algorithm implements a gradient descent on the following cost function:

C = \mu_{er} \sum_{i \in P} \sum_{j \in O} (d_{ij} - o_{ij})^2 + \mu_{en} \sum_{i \in P} \sum_{j \in H} e(o_{ij}^2)    [2]

where e is a positive monotonic function and where the sum of the second term is
now taken over a set or subset of the hidden units H. The first term of this cost
function will be called the error term, the second, the energy term.
In principle, the theoretical minimum of this function is found when the desired
activations are equal to the actual activations for all output units and all presented
patterns and when the hidden units do not "spend any energy". In practical
cases, such a minimum cannot be reached and the hidden units have to "spend
some energy" to solve a given problem. The quantity of energy will be in pan
determined by the relative importance given to the error and energy terms during
gradient descent. In principle, if a hidden unit has a constant activation whatever
the pattern presented to the network, it contributes to the energy term only and
will be "suppressed" by the algorithm. The precise energy distribution among the
hidden units will depend on the actual energy function e.
ANALYSIS
ALGORITHM IMPLEMENTATION
We can write the total cost function that the algorithm tries to minimize as a
weighted sum of an error and energy term:

C = \mu_{er} E_{er} + \mu_{en} E_{en}    [3]
The first term is the error term used with the standard back-propagation algorithm in Rumelhart et al. If we have h hidden layers, we can write the total
energy term as a sum of all the energy terms corresponding to each hidden layer:

E_{en} = \sum_{i=1}^{h} \sum_{j \in H_i} e(o_j^2)    [4]
To decrease the energy of the uppermost hidden layer H_h, we can compute the
derivative of the energy function with respect to the weights. This derivative will
be null for any weight "above" the considered hidden layer. For any weight just
below the considered hidden layer, we have (using Rumelhart et al. notation):

\frac{\partial E_{en}}{\partial w_{ij}} = \frac{\partial E_{en}}{\partial net_i} \frac{\partial net_i}{\partial w_{ij}} = \delta_i^{en}\, o_j    [5]

\delta_i^{en} = \frac{\partial e(o_i^2)}{\partial net_i} = \frac{\partial e(o_i^2)}{\partial o_i^2} \frac{\partial o_i^2}{\partial o_i} \frac{\partial o_i}{\partial net_i} = 2\, e_i'\, o_i\, f_i'(net_i)    [6]
where the derivative of e is taken with respect to the "energy" of the unit i and
where f corresponds to the logistic function. For any hidden layer below the
considered layer h, the chain rule yields:

\delta_k^{en} = f_k'(net_k) \sum_{j} \delta_j^{en}\, w_{jk}    [7]
This is just standard back-propagation with a different back-propagated term. If
we minimize both the error at the output layer and the energy of the hidden layer
h, we can compute the complete weight change for any connection below layer h:

\Delta w_{kl} = -\mu_{er}\, \delta_k^{er}\, o_l - \mu_{en}\, \delta_k^{en}\, o_l = -o_l\, (\mu_{er}\, \delta_k^{er} + \mu_{en}\, \delta_k^{en}) = -o_l\, \delta_k^{ac}    [8]

where \delta_k^{ac} is now the delta accumulated for error and energy that we can write as
a function of the deltas of the upper layer:

\delta_k^{ac} = f_k'(net_k) \sum_{j} \delta_j^{ac}\, w_{jk} + \mu_{en}\, \delta_k^{en}    [9]
This means that instead of propagating the delta for both energy and error, we
can compute an accumulated delta for hidden layer h and propagate it back
throughout the network. If we minimize the energy of the layers h and h-1, the
new accumulated delta will equal the previously accumulated delta added to a
new delta energy on layer h-1. The procedure can be repeated throughout the
complete network. In short, the back-propagated error signal used to change the
weights of each layer is simply equal to the back-propagated signal used in the
previous layer augmented with the delta energy of the current hidden layer. (The
algorithm is local and easy to implement).
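The sketch below is a rough illustration, not the paper's code, of this accumulated-delta rule for a single input-hidden-output network with a standard [0, 1] logistic activation: the back-propagated error delta at the hidden layer is augmented by \mu_{en} times the energy delta of equation [6], using the derivative of e given later in equation [10].

```python
import numpy as np

def f(a):                      # logistic activation
    return 1.0 / (1.0 + np.exp(-a))

def de_dsq(o_sq, n=1):         # derivative of e w.r.t. the squared activation, eq. [10]
    return 1.0 / (1.0 + o_sq) ** n

def grads(x, d, W1, W2, mu_er=1.0, mu_en=0.1, n=1):
    h = f(W1 @ x)                                            # hidden layer
    o = f(W2 @ h)                                            # output layer
    delta_o = mu_er * 2.0 * (o - d) * o * (1 - o)            # error delta at the output
    delta_en = 2.0 * de_dsq(h ** 2, n) * h * h * (1 - h)     # energy delta, eq. [6]
    delta_h = (W2.T @ delta_o) * h * (1 - h) + mu_en * delta_en  # accumulated delta
    return np.outer(delta_o, h), np.outer(delta_h, x)        # dC/dW2, dC/dW1

# toy usage
rng = np.random.default_rng(0)
gW2, gW1 = grads(x=rng.normal(size=3), d=np.array([1.0]),
                 W1=rng.normal(size=(4, 3)), W2=rng.normal(size=(1, 4)))
```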
ENERGY FUNCTION
The algorithm is sensitive to the energy function e being minimized. The functions used in the simulations described below have the following derivative with
respect to the squared activations/energy (only this derivative is necessary to implement the algorithm, see Equation [6]):

\frac{\partial e(o^2)}{\partial (o^2)} = \frac{1}{(1 + o^2)^n}    [10]

where n is an integer that determines the precise shape of the energy function
(see Table 1) and modulates the behavior of the algorithm in the following way.
For n = 0, e is a linear function of the energy: "high and low energy" units are
equally penalized. For n = 1, e is a logarithmic function and "low energy" units
become more penalized than "high energy" units, in proportion to the linear
case. For n = 2, the energy penalty may reach an asymptote as the energy increases: "high energy" units are not penalized more than "middle energy" units.
In the simulations, as expected, it appears that higher values of n tend to suppress
"low energy" units. (For n > 2, the behavior of the algorithm was not significantly different from n = 2, for the tests described below.)
TABLE 1: Energy Functions.

    n :   0        1                2                  n > 2
    e :   o^2      log(1 + o^2)     o^2 / (1 + o^2)    (qualitatively like n = 2)
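A small numerical check, for illustration only, that the entries of Table 1 are consistent with equation [10]: each e is the antiderivative, in the squared activation s = o^2, of 1/(1 + s)^n.

```python
import numpy as np

def e(s, n):
    if n == 0:
        return s                   # e = o^2
    if n == 1:
        return np.log(1.0 + s)     # e = log(1 + o^2)
    return s / (1.0 + s)           # n = 2: e = o^2 / (1 + o^2)

s = np.linspace(0.0, 4.0, 401)
for n in (0, 1, 2):
    num = np.gradient(e(s, n), s)                       # numerical de/ds
    assert np.allclose(num[1:-1], (1.0 + s[1:-1]) ** (-n), atol=1e-2)
```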
BOOLEAN EXAMPLES
The algorithm was tested with a set of Boolean problems. In typical tasks, the
energy of the network significantly decreases during early learning. Later on, the
network finds a better minimum of the total cost function by decreasing the error
and by "spending" energy to solve the problem. Figure 1 shows energy and error
in function of the number of learning cycles during a typical task (XOR) for 4
different runs. For a broad range of the energy learning rate, the algorithm is
quite stable and finds the solution to the given problem. This nice behavior is
also quite independent of the variations of the onset of energy minimization.
EXCLUSIVE OR AND PARITY
The algorithm was tested with EXOR for various network architectures. Figure 2
shows an example of the activation of the hidden units after learning. The algorithm finds a minimal solution (2 hidden units, "minimum logic") to solve the
XOR problem when the energy is being minimized. This minimal solution is
actually found whatever the starting number of hidden units. If several layers are
used, the algorithm finds an optimal or nearly-optimal size for each layer.
A Back-Propagation Algorithm
Figure 1. Energy and error curves as a function of the number of pattern
presentations for different values of the "energy" rate (0, .1, .2, .4). Each
energy curve (It e" label) is associated with an error curve (" +" label).
During learning, the units "spend" some energy to solve the given task.
With parity 3, for a [-1, +1] activation range of the sigmoid function, the algorithm does not find the 2 hidden units optimal solution but has no problem finding a 3 hidden units solution, independently of the starting architecture.
SYMMETRY
The algorithm was tested with the symmetry problem, described in Rumelhart et
al. The minimal solution for this task uses 2 hidden units. The simplest form of
the algorithm does not actually find this minimal solution because some weights
from the hidden units to the output unit can actually grow enough to compensate
the low activations of the hidden units. However, a simple weight decay can
prevent these weights from growing too much and allows the network to find the
minimal solution. In this case, the total cost function being minimized simply
becomes:
Figure 2. Hidden unit activations of a 4 hidden unit network over the 4
EXOR patterns when (left) standard back-propagation and (right) energy
minimization are being used during learning. The network is reduced to
minimal size (2 hidden units) when the . energy is being minimized.
C = \mu_{er} \sum_{i \in P} \sum_{j \in O} (d_{ij} - o_{ij})^2 + \mu_{en} \sum_{i \in P} \sum_{j \in H} e(o_{ij}^2) + \mu_{w} \sum_{(i,j) \in W} w_{ij}^2    [11]
PHONETIC LABELING
The algorithm was tested with a phonetic labeling task. The input patterns consisted of spectrograms (single speaker, 10x3.2ms spaced time frames, centered,
16 frequencies) corresponding to 9 syllables [ba], [da], [ga], [bi], [di], [gi], and
[bu], [du], [gu]. The task of the network was to classify these spectrograms (7
tokens per syllable) into three categories corresponding to the three consonants
[b], [d], and [g]. Starting with 12 hidden units, the algorithm reduced the network to 3 hidden units. (A hidden unit is considered unused when its activation
over the entire range of patterns contributes very little to the activations of the
output units). With standard back-propagation, all of the 12 hidden units are
usually being used. The resulting network is consistent with the sizes of the hidden layers used by Elman and Zipser (1986) for similar tasks.
EXTENSION OF THE ALGORITHM
Equation [2] represents a constraint over the set of possible LMS solutions found
by the back-propagation algorithm. With such a constraint, the "zero-energy"
level of the hidden units can be (informally) considered as an attractor in the
solution space. However, by changing the sign of the energy gradient, such a
point now constitutes a repellor in this space. Having such repellors might be
useful when a set of activation values are to be avoided during learning. For
example, if the activation range of the sigmoid transfer function is [-1, +1], the
learning speed of the back-propagation algorithm can be greatly improved but the
[0, 0] unit activation point (zero-input, zero-output) often behaves as a local
minimum. By inverting the sign of the energy gradient during early learning, it is
possible to have the point [0, 0] act as a repellor, forcing the network to make
"maximal use" of its resources (hidden units). This principle was tested on the
parity-3 problem with a network of 7 hidden units. For a given set of coefficients, standard back-propagation can solve parity-3 in about 15 cycles but yields
about 65% of local minima at [0, 0]. By using the "repulsion" constraint, parity-3 can be solved in about 20 cycles with 0% of local minima.
Interestingly, it is also possible to design a "trajectory" of such constraints during
learning. For example, the [0, 0] activation point can be built as a repellor
during early learning in order to avoid the corresponding local minimum, then as
an attractor during middle learning to reduce the size of the hidden layer, and as
a repellor during late learning, to force the hidden units to have binary activations. This type of trajectory was tested on the parity-3 problem with 7 hidden
units. In this case, the algorithm always avoids the [0, 0] local minimum. Moreover, the network can be reduced to 3 or 4 hidden units taking binary values over
the set of input patterns. In contrast, standard back-propagation often gets stuck
in local minima and uses the initial 7 hidden units with analog activation values.
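A speculative sketch of such a trajectory of constraints: the sign (and size) of the energy coefficient is scheduled over training so that [0, 0] acts first as a repellor, then as an attractor, then again as a repellor. The breakpoints 0.2 and 0.7 of training are arbitrary assumptions, not values taken from the experiments above.

```python
def mu_en_schedule(t, t_total, strength=0.1):
    phase = t / float(t_total)
    if phase < 0.2:        # early learning: repel [0, 0] (inverted gradient sign)
        return -strength
    if phase < 0.7:        # middle learning: attract, pruning hidden units
        return +strength
    return -strength       # late learning: repel again, forcing binary activations
```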
CONCLUSION
The present algorithm simply imposes a constraint over the LMS solution space.
It can be argued that limiting such a solution space can in some cases increase the
generalizing properties of the network (curve-fitting analogy). Although a complete theory of generalization has yet to be formalized, the present algorithm
presents a step toward the automatic design of "minimal" networks by imposing
constraints on the activations of the hidden units. (Similar constraints on weights
can be imposed and have been tested with success by D. E. Rumelhart, Personal
Communication. Combinations of constraints on weights and activations are being tested). What is simply shown here is that this energy minimization principle
is easy to implement, is robust to a broad range of parameter values, can find
minimal or nearly optimal network sizes when tested with a variety of tasks and
can be used to "bend" trajectories of activations during learning.
Acknowledgments
This research was conducted at Thomson-CSF, Inc. in Palo Alto. I would like to
thank the Thomson neural net team for useful discussions. Dave Rumelhart and
the PDP team at Stanford University were also very helpful. I am especially
grateful to Yoshiro Miyata, from Bellcore, for letting me use his neural
net simulator (SunNet) and to Jeff Elman, from UCSD, for letting me use
the speech data that he collected.
References.
J. L. Elman & D. Zipser. Learning the hidden structure of speech. ICS Technical Repon 8701. Institute for Cognitive Science. University of California, San Diego (1987).
D. E. Rumelhan, O. E. Hinton & R. J. Williams. Learning internal representaions by error propagation. In D. E . Rumelhan & J. L. McClelland
(Eds.), Parallel Distributed Processing: Exploration in the Microstructure 0/ Cognition. Vol. 1. Cambridge, MA: MIT Press/ Bradford Books
(1986) .
362 | 1,330 | Orientation contrast sensitivity from
long-range interactions in visual cortex
Klaus R. Pawelzik, Udo Ernst, Fred Wolf, Theo Geisel
Institut fur Theoretische Physik and SFB 185 Nichtlineare Dynamik,
Universitat Frankfurt, D-60054 Frankfurt/M., and
MPI fur Stromungsforschung, D-37018 Gottingen, Germany
email: {klaus.udo.fred.geisel}@chaos.uni-frankfurt.de
Abstract
Recently Sillito and coworkers (Nature 378, pp. 492, 1995) demonstrated that stimulation beyond the classical receptive field (cRF)
can not only modulate, but radically change a neuron's response to
oriented stimuli. They revealed that patch-suppressed cells when
stimulated with contrasting orientations inside and outside their
cRF can strongly respond to stimuli oriented orthogonal to their
nominal preferred orientation. Here we analyze the emergence of
such complex response patterns in a simple model of primary visual cortex. We show that the observed sensitivity for orientation
contrast can be explained by a delicate interplay between local
isotropic interactions and patchy long-range connectivity between
distant iso-orientation domains. In particular we demonstrate that
the observed properties might arise without specific connections between sites with cross-oriented cRFs.
1
Introduction
Long range horizontal connections form a ubiquitous structural element of intracortical circuitry. In the primary visual cortex long range horizontal connections
extend over distances spanning several hypercolumns and preferentially connect
cells of similar orientation preference [1, 2, 3, 4] . Recent evidence suggests that
their physiological effect depends on the level of postsynaptic depolarization; acting excitatory on weakly activated and inhibitory on strongly activated cells [5, 6].
This differential influence possibly underlies perceptual phenomena such as 'pop out' and
'fill in' [9]. Previous modeling studies demonstrated that such differential interactions may arise from a single set of long range excitatory connections terminating
both on excitatory and inhibitory neurons in a given target column [7,8]. By and
large these results suggest that long range horizontal connections between columns
of like stimulus preference provide a central mechanism for the context dependent
regulation of activation in cortical networks.
Recent experiments by Sillito et al. suggest, however, that lateral connections
in primary visual cortex can also induce more radical changes in receptive field
organization [10]. Most importantly this study shows that patch- suppressed cells
can respond selectively to orientation contrast between center and surround of a stimulus even if they are centrally stimulated orthogonal to their preferred orientation.
Sillito et al. argued, that these response properties require specific connections
between orthogonally tuned columns for which, however, presently there is only
weak evidence.
Here we demonstrate that such nonclassical receptive field properties might instead
arise as an emergent property of the known intracortical circuitry. We investigate a
simple model for intracortical activity dynamics driven by weakly orientation tuned
afferent excitation. The cortical activity dynamics is based on a continuous firing
rate description and incorporates both a local center-surround type interaction and
long range connections between distant columns of like orientation preference. The
connections of distant orientation columns are assumed to act either excitatory or
inhibitory depending on the activation of their target neurons. It turns out that this
set of interactions not only leads to the emergence of patch-suppressed cells, but
also that a large fraction of these cells exhibits a selectivity for orientation contrast
very similar to the one observed by Sillito et al. .
2
Model
Our target is the analysis of basic rate modulations emerging from local and long
range feedback interactions in a simple model of visual cortex. It is therefore appropriate to consider a simple rate dynamics x = -c' x + F(x), where x = {Xi, i =
L.N} are the activation levels of N neurons. F(x) = g(Imex(x) + Ilat(x) + Iext ),
where g(I) = Co' (I - Ithres) if I> Ithres, and g(I) = 0 otherwise, denotes the firing
rate or gain function in dependence of the input I .
The neurons are arranged in a quadratic array representing a small part of the visual
cortex. Within this layer, neuron i has a position r i and a preferred orientation
<Pi E [0,180]. The input to neuron i has three contributions Ii = Iimex + Ifat + Ifxt.
Iinex = tmex . 2:f=l wiJex Xj is due to a mexican-hat shaped coupling structure
with weights wiJex, Ilat = tlat . WL(Xi) . 2:f=l wi~rxj denotes input from long-range
orientation-specific interactions with weights w~jt, and the third term models the
orientation dependent external input Iext = text . 16~t . (1 + TIi), where "li denotes the
noise added to the external input. wL(x) regulates the strength and sign ofthe long-
Figure 1: Structure and response properties of the model network. a) Coupling
structure from one neuron on a grid of N = 1600 elements projected on the orientation preference map which was used for stimulation (\phi_i, i = 1...N). Inhibitory
and excitatory couplings are marked with black and white squares, respectively,
the sizes of which represent the coupling strength. b) Activation pattern of the
network driven by a central stimulus of radius rc = 11 and horizontal orientation.
c) Self-consistent orientation map calculated from the activation patterns for all
stimulus orientations. Note that this map matches the input orientation preference
map shown in a) and b).
range lateral interaction in dependence of the postsynaptic activation, summarizing
the behavior of a local circuit in which the inhibitory population has a larger gain.
In particular, W_L(x) can be derived analytically from a simple cortical microcircuit
(Fig. 2). This circuit consists of an inhibitory and excitatory cell population connected reciprocally. Each population receives lateral input and is driven by the
external stimulus I. The effective interaction W_L depends on the lateral input L
and external input I. Assuming a piecewise linear gain function for each population, similar to those for the x_i's, the phase-space I-L is partitioned into several
regions. Only if both I and L are small is W_L positive, justifying the choice
W_L(x) = A_{sh} - \tanh(0.55 \cdot (x - A_a)/A_b) which we used for our simulations.
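The gating function can be sketched as follows; the parameter values A_sh = 0.0, A_a = 0.2, A_b = 0.05 are taken from the simulation footnote below, and the code itself is only an illustration of the formula above, not the authors' implementation.

```python
import numpy as np

def W_L(x, A_sh=0.0, A_a=0.2, A_b=0.05):
    """Positive (excitatory gating) for weakly activated targets,
    negative (inhibitory gating) for strongly activated ones."""
    return A_sh - np.tanh(0.55 * (x - A_a) / A_b)

print(W_L(np.linspace(0.0, 1.0, 5)))   # ~ +1 near x = 0, ~ -1 for strongly active units
```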
The weights w_{ij}^{mex} are given by

    (1)

and w_{ij}^{mex} = 0 otherwise. In this representation of the local interactions weights
and scales are independently controllable. In particular if we define

a_{ex} = \frac{2}{\pi r_{ex}^4}, \quad a_{in} = \frac{2\, c_{rel}}{\pi (r_{in} + r_{ex})(r_{in} - r_{ex})^3}, \quad b_{ex} = a_{ex}\, r_{ex}^2, \quad b_{in} = a_{in} (r_{in} - r_{ex})^2    (2)
where r_{ex} and r_{in} denote the range of the excitatory and inhibitory part of the mexican
hat, respectively. Here we used r_{ex} = 2.5 and r_{in} = 4.0. c_{rel} controls the balance of
inhibition and excitation. With constant activation level x_i = x_0 \,\forall i the inhibition
is c_{rel} times higher than the excitation and I^{mex} = \epsilon_{mex} \cdot (1 - c_{rel}) \cdot x_0.
Figure 2: Local cortical circuit (left), consisting of an inhibitory and an excitatory
population of neurons interconnected with weights Wie, Wei , and stimulated with
lateral input L and external input I . By substituting this circuit with one single
excitatory unit, we need a differential instead of a fixed lateral coupling strength
(WL' right), which is positive only for small I and L .
The weights w_{ij}^{lat} are nonzero only between distant columns of similar orientation
preference, and 0 otherwise. \sigma_{lat,\phi} and \sigma_{lat,r} provide the orientation selectivity and the range
of the long-range lateral interaction, respectively. The additional parameter c_{lat}
normalizes w_{ij}^{lat} such that \sum_{j=1}^{N} w_{ij}^{lat} \leq 1.

The spatial width and the orientation selectivity of the input fields are modeled by
a convolution with a Gaussian kernel before being projected onto the cortical layer:

I_{0,i}^{ext} = \frac{1}{\pi \sigma_{rec,r}^2} \sum_{j=1}^{N} \left[ \exp\left(-\frac{|r_i - r_j|^2}{2\,\sigma_{rec,r}^2}\right) \cdot \exp\left(-\frac{|\phi_i - \phi_j|^2}{2\,\sigma_{rec,\phi}^2}\right) \right].
In our simulations, the orientation preference of a cortical neuron i was given by
the orientation preference map displayed in Fig. 1a.
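To make the model definition above concrete, here is a schematic Euler step of the rate dynamics; the time step, the random couplings and the external input in the toy usage are assumptions, and constructing the actual mexican-hat and long-range weight matrices is not shown.

```python
import numpy as np

def gain(I, c0=1.0, I_thres=0.0):               # threshold-linear gain g(I)
    return c0 * np.maximum(I - I_thres, 0.0)

def W_L(x, A_sh=0.0, A_a=0.2, A_b=0.05):        # activation-dependent gating
    return A_sh - np.tanh(0.55 * (x - A_a) / A_b)

def step(x, w_mex, w_lat, I_ext, c=1.0,
         eps_mex=2.2, eps_lat=1.5, eps_ext=1.3, dt=0.1):
    I_mex = eps_mex * (w_mex @ x)
    I_lat = eps_lat * W_L(x) * (w_lat @ x)      # gated per target unit
    I = I_mex + I_lat + eps_ext * I_ext
    return x + dt * (-c * x + gain(I))

# toy usage with random couplings on N = 100 units
rng = np.random.default_rng(1)
N = 100
x = np.zeros(N)
for _ in range(50):
    x = step(x, rng.normal(0.0, 0.05, (N, N)),
             np.abs(rng.normal(0.0, 0.02, (N, N))),
             I_ext=rng.uniform(0.0, 1.0, N))
```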
3
Results
We analyzed stationary states of our model depending on stimulus conditions. The
external input, a center stimulus of radius r_c = 6 at orientation \phi_c and an annulus
^1 The following parameters have been used in our simulations leading to the results
shown in Figs. 1-4: \eta_i = 0.1 (external input noise), \epsilon_{mex} = 2.2, \epsilon_{lat} = 1.5, \epsilon_{ext} = 1.3,
A_{sh} = 0.0, A_a = 0.2, A_b = 0.05, t_e = 0.6, s_e = 0.5, c_{rel} = 2.0 (balance between inhibition
and excitation), c_{lat} normalizes w_{ij}^{lat} such that \sum_{j=1}^{N} w_{ij}^{lat} \leq 1, \sigma_{lat,\phi} = 20, \sigma_{lat,r} = 15,
\sigma_{rec,r} = 5, \sigma_{rec,\phi} = 40, r_{ex} = 2.5, and r_{in} = 5.0.
Figure 3: Changes in patterns of activity induced by the additional presentation of
a surround stimulus. Grey levels encode increase (darker grey) or decrease (lighter
grey) in activation x. a) center and surround parallel, b) and c) center and surround
orthogonal. While in b), the center is stimulated with the preferred orientation, in
c), the center is stimulated with the non-preferred orientation.
of inner radius r_c and outer radius r_s = 18 at orientation \phi_s, was projected onto
a grid of N = 40 x 40 elements (Fig. 1a). Simulations were performed for 20 orientations equally spaced in the interval [0, 180°]. When an oriented stimulus was
presented to the center we found blob-like activity patterns centered on the corresponding iso-orientation domains (Fig.1b). Simulations utilizing the full set of
orientations recovered the input-specificity and demonstrated the self-consistency
of the parameter set chosen (Fig.1c). While in this case there were still some deviations, stimulation of the whole field yielded perfect self-consistency (not shown) in
the sense of virtually identical afferent and response based orientation preferences.
For combinations of center and surround inputs we observed patch-suppressed regions. These regions exhibited substantial responses for cross-oriented stimuli which
often exceeded the response to an optimal center stimulus alone. Figs.3 and 4 summarize these results. Fig.3 shows responses to center-surround combinations compared to activity patterns resulting from center stimulation only. Obviously certain
regions within the model were strongly patch-suppressed (Fig.3, light patches for
same orientations of center and surround). Interestingly a large fraction of these
locations exhibited enhanced activation when center and surround stimulus were
orthogonal. Fig. 4 displays tuning curves of patch-suppressed cells for varying the
orientation of the surround stimulus. Clearly these cells exhibited an enhancement
of most responses and a substantial selectivity for orientation contrast. Parameter
variation indicated that qualitatively these results do not depend sensitively of the
set of parameters chosen.
4
Summary and Discussion
Our model implements only elementary assumptions about intracortical interactions. A local sombrero shaped feedback is well known to induce localized blobs of
activity with a stereotyped shape [12J. This effect lies at the basis of many models
of visual cortex, as e.g. for the explanation of contrast independence of orientation
Figure 4: Tuning curves for patch-suppressed cells preferring a horizontal stimulus
within their cRF. The bold line shows the orientation tuning curve of the response
to an isolated center stimulus. The dashed and dotted lines show the tuning curve
when stimulating with a horizontal (dashed) and a vertical (dotted) center stimulus
while rotating the surround stimulus. The curves have been averaged over 6 units.
tuning [13, 14]. Long range connections selectively connect columns of similar orientation preference which is consistent with current anatomical knowledge [3, 4]. The
differential effect of this set of connections onto the target population was modeled
by a continuous sign change of their effective action depending on the level of postsynaptic input or activation. Orientation maps were used to determine the input
specificity and we assumed a rather weak selectivity of the afferent connections and
a restricted contrast which implies that every stimulus provides some input also to
orthogonally tuned cells. This means that long-range excitatory connections, while
not effective when only the surround is stimulated, can very well be sufficient for
driving cells if the stimulus to the center is orthogonal to their preferred orientation
(Contrast sensitivity).
In our model we find a large fraction of cells that exhibit sensitivity for center-surround stimuli. It turns out that most of the patch-suppressed cells respond
to orientation contrasts, i.e. they are strongly selective for orientation discontinuities between center and surround. We also find contrast enhancement, i.e. larger
responses to the preferred orientation in the center when stimulated with an orthogonal surround than if stimulated only centrally (Fig.4). The latter constitutes
a genuinely emergent property, since no selective cross- oriented connections are
present.
This phenomenon can be understood as a disinhibitory effect. Since no cells having
long-range connections to the center unit are activated, the additional sub-threshold
input from outside the classical receptive field can evoke a larger response (Contrast
enhancement). Contrarily, if center and surround are stimulated with the same orientation, all the cells with similar orientation preference become activated such that
the long-range connections can strongly inhibit the center unit (Patch suppression).
In other words, while the lack of inhibitory influences from the surround should
recover the response with an amplitude similar or higher to the local stimulation,
the orthogonal surround effectively leads to a disinhibition for some of the cells.
Our results show a surprising agreement with previous findings on non-classical
receptive field properties which culminated in the paper by Sillito et al. [10]. Our
simple model clearly demonstrates that the known intracortical interactions might
lead to surprising effects on receptive fields. While this contribution concentrated
on analyzing the origin of selectivities for orientation discontinuities we expect that
the pursued level of abstraction has a large potential for analyzing a wide range
of non-classical receptive fields. Despite its simplicity we believe that our model
captures the main features of rate interactions. More detailed models based on
spiking neurons, however, will exhibit additional dynamical effects like correlations
and synchrony which will be at the focus of our future research.
Acknowledgement: We acknowledge inspiring discussions with S. Lowel and J.
Cowan. This work was supported by the Deutsche Forschungsgemeinschaft.
References
[1] D. Ts'o, C.D. Gilbert, and T.N. Wiesel, J. Neurosci 6, 1160-1170
(1986).
[2] C.D. Gilbert and T.N. Wiesel, J. Neurosci. 9, 2432-2442 (1989).
[3] S. Lowel and W. Singer, Science 255, 209 (1992).
[4] R. Malach, Y. Amir, M. Harel, and A. Grinvald, PNAS 90, 1046910473 (1993).
[5] J.A. Hirsch and C.D. Gilbert, J . Neurosci. 6, 1800-1809 (1991).
[6] M. Weliky, K. Kandler, D. Fitzpatrick, and L.C . Katz, Neuron 15,
541-552 (1995).
[7] M. Stemmler, M. Usher, and E. Niebur, Science 269, 1877-1880 (1995).
[8] L.J. Toth, D.C. Sommers, S.C. Rao, E.V. Todorov, D.-S. Kim, S.B.
Nelson, A.G. Siapas, and M. Sur, preprint 1995.
[9] U. Polat, D. Sagi, Vision Res. 7,993-999 (1993).
[10] A.M. Sillito, K.L. Grieve, H.E. Jones, J. Cudeiro, and J. Davis, Nature
378, 492-496 (1995) .
[11] J.J. Knierim and D.C. van Essen, J. Neurophys. 67,961-980 (1992).
[12] H.R. Wilson and J. Cowan, BioI. Cyb. 13,55-80 (1973).
[13] R. Ben-Yishai, R.L. Bar-Or, and H. Sompolinsky, Proc. Nat. Acad.
Sci. 92,3844-3848 (1995).
[14] D. Sommers, S.B. Nelson, and M. Sur, J. Neurosci. 15, 5448-5465
(1995).
363 | 1,331 | Recovering Perspective Pose with a Dual
Step EM Algorithm
Andrew D.J. Cross and Edwin R. Hancock,
Department of Computer Science,
University of York,
York, YO1 5DD, UK.
Abstract
This paper describes a new approach to extracting 3D perspective
structure from 2D point-sets. The novel feature is to unify the
tasks of estimating transformation geometry and identifying pointcorrespondence matches. Unification is realised by constructing a
mixture model over the bi-partite graph representing the correspondence match and by effecting optimisation using the EM algorithm.
According to our EM framework the probabilities of structural correspondence gate contributions to the expected likelihood function
used to estimate maximum likelihood perspective pose parameters.
This provides a means of rejecting structural outliers.
1
Introduction
The estimation of transformational geometry is key to many problems of computer
vision and robotics [10] . Broadly speaking the aim is to recover a matrix representation of the transformation between image and world co-ordinate systems. In order
to estimate the matrix requires a set of correspondence matches between features
in the two co-ordinate systems [11] . Posed in this way there is a basic chickenand-egg problem. Before good correspondences can be estimated, there need to be
reasonable bounds on the transformational geometry. Yet this geometry is, after
all, the ultimate goal of computation. This problem is usually overcome by invoking
constraints to bootstrap the estimation of feasible correspondence matches [5, 8].
One of the most popular ideas is to use the epipolar constraint to prune the space of
potential correspondences [5]. One of the drawbacks of this pruning strategy is that
residual outliers may lead to ill-conditioned or singular parameter matrices [11].
The aim in this paper is to pose the two problems of estimating transformation
geometry and locating correspondence matches using an architecture that is reminiscent of the hierarchical mixture of experts algorithm [6]. Specifically, we use
a bi-partite graph to represent the current configuration of correspondence match.
This graphical structure provides an architecture that can be used to gate contributions to the likelihood function for the geometric parameters using structural
constraints. Correspondence matches and transformation parameters are estimated
by applying the EM algorithm to the gated likelihood function. In this way we
arrive at dual maximisation steps. Maximum likelihood parameters are found by
minimising the structurally gated squared residuals between features in the two
images being matched. Correspondence matches are updated so as to maximise the
a posteriori probability of the observed structural configuration on the bi-partite
association graph.
We provide a practical illustration in the domain of computer vision which is aimed
at matching images of floppy discs under severe perspective foreshortening . However, it is important to stress that the idea of using a graphical model to provide
structural constraints on parameter estimation is a task of generic importance. Although the EM algorithm has been used to extract affine and Euclidean parameters
from point-sets [4] or line-sets [9], there has been no attempt to impose structural
constraints of the correspondence matches. Viewed from the perspective of graphical template matching [1, 7] our EM algorithm allows an explicit deformational
model to be imposed on a set of feature points. Since the method delivers statistical estimates for both the transformation parameters and their associated covariance
matrix it offers significant advantages in terms of its adaptive capabilities.
2
Perspective Geometry
Our basic aim is to recover the perspective transformation parameters which bring
a set of model or fiducial points into correspondence with their counterparts in a
set of image data. Each point in the image data is represented by an augmented
vector of co-ordinates w_i = (x_i, y_i, 1)^T where i is the point index. The available set
of image points is denoted by w = \{w_i, \forall i \in D\} where D is the point index-set.
The fiducial points constituting the model are similarly represented by the set of
augmented co-ordinate vectors z = \{z_j, \forall j \in M\}. Here M is the index-set for the
model feature-points and the z_j represent the corresponding image co-ordinates.
Perspective geometry is distinguished from the simpler Euclidean (translation, rotation and scaling) and affine (the addition of shear) cases by the presence of significant foreshortening. We represent the perspective transformation by the parameter
matrix
\Phi^{(n)} =
\begin{pmatrix}
\phi_{1,1}^{(n)} & \phi_{1,2}^{(n)} & \phi_{1,3}^{(n)} \\
\phi_{2,1}^{(n)} & \phi_{2,2}^{(n)} & \phi_{2,3}^{(n)} \\
\phi_{3,1}^{(n)} & \phi_{3,2}^{(n)} & 1
\end{pmatrix}    (1)

Using homogeneous co-ordinates, the transformation between model and data is
z_j^{(n)} = (z_j^T \cdot \Psi^{(n)})^{-1} \Phi^{(n)} z_j, where \Psi^{(n)} = (\phi_{3,1}^{(n)}, \phi_{3,2}^{(n)}, 1)^T is a column-vector formed
from the elements in the bottom row of the transformation matrix.
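A small illustration of the homogeneous mapping just stated: each augmented model point is multiplied by \Phi and divided by z_j^T \Psi, its transformed homogeneous co-ordinate. The numerical parameter values are arbitrary assumptions.

```python
import numpy as np

def perspective_map(Z, Phi):
    """Z: (n, 3) augmented model points (x, y, 1); Phi: 3x3 with Phi[2, 2] = 1."""
    Psi = Phi[2, :]                       # bottom row (phi_31, phi_32, 1)
    scale = Z @ Psi                       # z_j^T Psi for every point
    return (Z @ Phi.T) / scale[:, None]   # (Phi z_j) / (z_j^T Psi)

Phi = np.array([[1.0, 0.05, 2.0],
                [0.02, 0.9, -1.0],
                [1e-3, 2e-3, 1.0]])       # illustrative parameters only
Z = np.array([[0.0, 0.0, 1.0], [10.0, 5.0, 1.0]])
print(perspective_map(Z, Phi))
```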
3
Relational Constraints
One of our goals in this paper is to exploit structural constraints to improve the
recovery of perspective parameters from sets of feature points. We abstract the
process as bi-partite graph matching. Because of its well documented robustness to
noise and change of viewpoint, we adopt the Delaunay triangulation as our basic
representation of image structure [3]. We establish Delaunay triangulations on the
data and the model, by seeding Voronoi tessellations from the feature-points.
The process of Delaunay triangulation generates relational graphs from the two sets of point-features. More formally, the point-sets are the nodes of a data graph $G_D = \{\mathcal{D}, E_D\}$ and a model graph $G_M = \{\mathcal{M}, E_M\}$. Here $E_D \subseteq \mathcal{D} \times \mathcal{D}$ and $E_M \subseteq \mathcal{M} \times \mathcal{M}$ are the edge-sets of the data and model graphs. Key to our matching process is the idea of using the edge-structure of Delaunay graphs to constrain the correspondence matches between the two point-sets. This correspondence matching is denoted by the function $f : \mathcal{D} \rightarrow \mathcal{M}$ from the nodes of the data-graph to those of the model graph. According to this notation the statement $f^{(n)}(i) = j$ indicates that there is a match between the node $i \in \mathcal{D}$ of the data-graph and the node $j \in \mathcal{M}$ of the model graph at iteration $n$ of the algorithm. We use the binary indicator
$$
s^{(n)}_{i,j} =
\begin{cases}
1 & \text{if } f^{(n)}(i) = j \\
0 & \text{otherwise}
\end{cases}
\qquad (2)
$$
to represent the configuration of correspondence matches.
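As a concrete illustration of how the Delaunay edge-sets might be built from raw feature points, here is a short sketch using SciPy's Delaunay triangulation. It is not the authors' code; the point arrays are placeholders, and extracting the edge set from triangles is one standard way to obtain $E_D$ and $E_M$.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Return the undirected edge-set of the Delaunay triangulation of 2-D points."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:          # each simplex is a triangle (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = sorted((simplex[a], simplex[b]))
                edges.add((i, j))
    return edges

# Placeholder feature points for the data and model graphs.
data_points = np.random.rand(20, 2)
model_points = np.random.rand(18, 2)
E_D = delaunay_edges(data_points)
E_M = delaunay_edges(model_points)
```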
We exploit the structure of the Delaunay graphs to compute the consistency of
match using the Bayesian framework for relational graph-matching recently reported
by Wilson and Hancock [12]. Suffice to say that consistency of a configuration of
matches residing on the neighbourhood Ri = i U {k ; (i, k) E ED} of the node
i in the data-graph and its counterpart Sj = j U {I ; (j,l) E Em} for the node
j in the model-graph is gauged by Hamming distance. The Hamming distance
H(i,j) counts the number of matches on the data-graph neighbourhood Ri that
are inconsistently matched onto the model-graph neighbourhood Sj. According to
Wilson and Hancock [12] the structural probability for the correspondence match
$f(i) = j$ at iteration $n$ of the algorithm is given by
$$
\zeta^{(n)}_{i,j} = \frac{\exp\left[-\beta H(i,j)\right]}{\sum_{j' \in \mathcal{M}} \exp\left[-\beta H(i,j')\right]}
\qquad (3)
$$
In the above expression, the Hamming distance is given by $H(i,j) = \sum_{(k,l) \in R_i \otimes S_j} \left(1 - s^{(n)}_{k,l}\right)$, where the symbol $\otimes$ denotes the composition of the data-graph relation $R_i$ and the model-graph relation $S_j$. The exponential constant $\beta = \ln\frac{1-P_e}{P_e}$ is related to the uniform probability of structural matching errors $P_e$. This probability is set to reflect the overlap of the two point-sets. In the work reported here we set $P_e = \frac{2\left|\,|\mathcal{M}| - |\mathcal{D}|\,\right|}{|\mathcal{M}| + |\mathcal{D}|}$.
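The following sketch shows one way to evaluate the structural matching probabilities of equation (3) from a current assignment. It is an illustrative reconstruction, not the authors' implementation; the neighbourhood sets and the assignment mapping are invented for the example.

```python
import numpy as np

def hamming(R_i, S_j, assign):
    """Count data-graph neighbours of i whose current matches fall outside S_j."""
    return sum(1 for k in R_i if assign[k] not in S_j)

def structural_probs(i, R, S, assign, beta):
    """Return zeta_{i,j} of equation (3) for every candidate model node j."""
    h = np.array([hamming(R[i], S[j], assign) for j in range(len(S))], dtype=float)
    w = np.exp(-beta * h)
    return w / w.sum()

# Toy neighbourhoods (node index -> set of graph neighbours, including the node itself).
R = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}          # data graph
S = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}          # model graph
assign = {0: 0, 1: 1, 2: 2}                       # current correspondence f(i) = j
print(structural_probs(1, R, S, assign, beta=np.log(0.9 / 0.1)))
```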
4 The EM Algorithm
Our aim is to extract perspective pose parameters and correspondence matches from the two point-sets using the EM algorithm. According to the original work
of Dempster, Laird and Rubin [2] the expected likelihood function is computed
by weighting the current log-probability density by the a posteriori measurement
probabilities computed from the preceding maximum likelihood parameters. Jordan
and Jacobs [6] augment the process with a graphical model which effectively gates
contributions to the expected log-likelihood function. Here we provide a variant of
this idea in which the bi-partite graph representing the correspondences matches
gate the log-likelihood function for the perspective pose parameters.
4.1 Mixture Model
Our basic aim is to jointly maximize the data-likelihood $p(\mathbf{w}|\mathbf{z}, f, \Phi)$ over the space of correspondence matches $f$ and the matrix of perspective parameters $\Phi$. To commence our development, we assume observational independence and factorise the conditional measurement density over the set of data-items
$$
p(\mathbf{w}|\mathbf{z}, f, \Phi) = \prod_{i \in \mathcal{D}} p(\mathbf{w}_i|\mathbf{z}, f, \Phi)
\qquad (4)
$$
In order to apply the apparatus of the EM algorithm to maximising $p(\mathbf{w}|\mathbf{z}, f, \Phi)$ with respect to $f$ and $\Phi$, we must establish a mixture model over the space of correspondence matches. Accordingly, we apply Bayes theorem to expand over the space of match indicator variables. In other words,
$$
p(\mathbf{w}_i|\mathbf{z}, f, \Phi) = \sum_{s_{i,j},\, j \in \mathcal{M}} P(\mathbf{w}_i, s_{i,j}|\mathbf{z}, f, \Phi)
\qquad (5)
$$
In order to develop a tractable likelihood function, we apply the chain rule of conditional probability. In addition, we use the indicator variables to control the switching of the conditional measurement densities via exponentiation. In other words we assume $p(\mathbf{w}_i|s_{i,j}, \mathbf{z}_j, \Phi) = p(\mathbf{w}_i|\mathbf{z}_j, \Phi)^{s_{i,j}}$.
With this simplification, the mixture model for the correspondence matching process leads to the following expression for the expected likelihood function
$$
Q(f^{(n+1)}, \Phi^{(n+1)}|f^{(n)}, \Phi^{(n)}) = \sum_{i \in \mathcal{D}} \sum_{j \in \mathcal{M}} P(s_{i,j}|\mathbf{w}, \mathbf{z}, f^{(n)}, \Phi^{(n)})\, s^{(n)}_{i,j} \ln p(\mathbf{w}_i|\mathbf{z}_j, \Phi^{(n+1)})
\qquad (6)
$$
To further simplify matters we make a mean-field approximation and replace $s^{(n)}_{i,j}$ by its average value, i.e. we make use of the fact that $E(s^{(n)}_{i,j}) = \zeta^{(n)}_{i,j}$. In this way the structural matching probabilities gate contributions to the expected likelihood function. This mean-field approximation alleviates problems associated with local optima which are likely to occur if the likelihood function is discretised by gating with $s_{i,j}$.
4.2 Expectation
Using the Bayes rule, we can re-write the a posteriori measurement probabilities in terms of the components of the conditional measurement densities appearing in the mixture model in equation (5)
$$
P(s_{i,j}|\mathbf{w}, \mathbf{z}, f^{(n)}, \Phi^{(n+1)}) = \frac{\zeta^{(n)}_{i,j}\, p(\mathbf{w}_i|\mathbf{z}_j, \Phi^{(n)})}{\sum_{j' \in \mathcal{M}} \zeta^{(n)}_{i,j'}\, p(\mathbf{w}_i|\mathbf{z}_{j'}, \Phi^{(n)})}
\qquad (7)
$$
In order to proceed with the development of a point registration process we require a model for the conditional measurement densities, i.e. $p(\mathbf{w}_i|\mathbf{z}_j, \Phi^{(n)})$. Here we assume that the required model can be specified in terms of a multivariate Gaussian distribution. The random variables appearing in these distributions are the error residuals for the position predictions of the $j$th model line delivered by the current estimated transformation parameters. Accordingly we write
$$
p(\mathbf{w}_i|\mathbf{z}_j, \Phi^{(n)}) = \frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|\Sigma|}} \exp\left[-\frac{1}{2}\left(\mathbf{w}_i - \mathbf{z}^{(n)}_j\right)^T \Sigma^{-1} \left(\mathbf{w}_i - \mathbf{z}^{(n)}_j\right)\right]
\qquad (8)
$$
In the above expression $\Sigma$ is the variance-covariance matrix for the vector of error-residuals $\boldsymbol{\epsilon}_{i,j}(\Phi^{(n)}) = \mathbf{w}_i - \mathbf{z}^{(n)}_j$ between the components of the predicted measurement vectors $\mathbf{z}^{(n)}_j$ and their counterparts in the data, i.e. $\mathbf{w}_i$. Formally, the matrix is related to the expectation of the outer-product of the error-residuals, i.e. $\Sigma = E[\boldsymbol{\epsilon}_{i,j}(\Phi^{(n)})\, \boldsymbol{\epsilon}_{i,j}(\Phi^{(n)})^T]$.
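A compact sketch of the expectation step — combining the structural probabilities $\zeta$ with the Gaussian densities of equation (8) to obtain the a posteriori match probabilities of equation (7) — might look as follows. This is an illustrative reconstruction rather than the authors' code; the arrays `zeta`, `w`, `z_pred`, and `Sigma` are assumed inputs with toy values.

```python
import numpy as np

def gaussian_density(residual, Sigma_inv, norm_const):
    """Multivariate Gaussian density evaluated at an error residual."""
    return norm_const * np.exp(-0.5 * residual @ Sigma_inv @ residual)

def posterior_match_probs(w, z_pred, zeta, Sigma):
    """P(s_ij | ...) of equation (7): rows index data points, columns index model points."""
    Sigma_inv = np.linalg.inv(Sigma)
    norm_const = 1.0 / np.sqrt(((2 * np.pi) ** Sigma.shape[0]) * np.linalg.det(Sigma))
    D, M = len(w), len(z_pred)
    P = np.zeros((D, M))
    for i in range(D):
        for j in range(M):
            P[i, j] = zeta[i, j] * gaussian_density(w[i] - z_pred[j], Sigma_inv, norm_const)
        P[i] /= P[i].sum()                      # normalise over candidate model points
    return P

# Toy data: two data points, two predicted model points, flat structural priors.
w = np.array([[0.0, 0.0], [1.0, 1.0]])
z_pred = np.array([[0.1, 0.0], [0.9, 1.1]])
zeta = np.full((2, 2), 0.5)
print(posterior_match_probs(w, z_pred, zeta, np.eye(2) * 0.01))
```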
4.3 Maximisation
The maximisation step of our matching algorithm is based on two coupled update
processes. The first of these aims to locate maximum a posteriori probability correspondence matches. The second class of update operation is concerned with locating
maximum likelihood transformation parameters. We effect the coupling by allowing
information flow between the two processes. Correspondences located by maximum
a posteriori graph-matching are used to constrain the recovery of maximum likelihood transformation parameters. A posteriori measurement probabilities computed
from the updated transformation parameters are used to refine the correspondence
matches.
In terms of the indicator variables, the configuration of maximum a posteriori probability correspondence matches is updated as follows
$$
f^{(n+1)}(i) = \arg\max_{j \in \mathcal{M}} P(\mathbf{z}_j|\mathbf{w}_i, \Phi^{(n)})\,
\frac{\exp\left[-\beta \sum_{(k,l) \in R_i \otimes S_j} \left(1 - s^{(n)}_{k,l}\right)\right]}
{\sum_{j' \in \mathcal{M}} \exp\left[-\beta \sum_{(k,l) \in R_i \otimes S_{j'}} \left(1 - s^{(n)}_{k,l}\right)\right]}
\qquad (9)
$$
The maximum likelihood transformation parameters satisfy the condition
$$
\Phi^{(n+1)} = \arg\min_{\Phi} \sum_{i \in \mathcal{D}} \sum_{j \in \mathcal{M}}
P(\mathbf{z}_j|\mathbf{w}_i, \Phi^{(n)})\, \zeta^{(n)}_{i,j}
\left(\mathbf{w}_i - \mathbf{z}^{(n)}_j\right)^T \Sigma^{-1} \left(\mathbf{w}_i - \mathbf{z}^{(n)}_j\right)
\qquad (10)
$$
In the case of perspective geometry where we have used homogeneous co-ordinates
the saddle-point equations are not readily amenable in a closed-form linear fashion. Instead, we solve the non-linear maximisation problem using the LevenbergMarquardt technique. This non-linear optimisation technique offers a compromise
between the steepest gradient and inverse Hessian methods. The former is used
when close to the optimum while the latter is used far from it.
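To illustrate the maximisation step, the sketch below minimises a weighted residual criterion in the spirit of equation (10) over the eight free perspective parameters using SciPy's nonlinear least-squares routine (the paper itself uses Levenberg-Marquardt). The weighting values, parameter packing, and covariance whitening are choices made for this illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def weighted_residuals(params, model_pts, data_pts, weights, Sigma_inv_sqrt):
    """Stack whitened, weighted residuals w_i - z_j over all matched (i, j) pairs."""
    phi = np.append(params, 1.0).reshape(3, 3)          # 8 free parameters, phi_33 = 1
    res = []
    for (i, j), a in weights.items():                   # a stands in for P(z_j|w_i) * zeta_ij
        z = np.append(model_pts[j], 1.0)
        mapped = phi @ z
        mapped = mapped[:2] / mapped[2]
        res.append(np.sqrt(a) * (Sigma_inv_sqrt @ (data_pts[i] - mapped)))
    return np.concatenate(res)

# Hypothetical inputs: four matched pairs with unit weights and identity covariance.
model_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
data_pts = np.array([[0.02, 0.01], [1.05, 0.03], [0.98, 1.04], [-0.03, 0.97]])
weights = {(0, 0): 1.0, (1, 1): 1.0, (2, 2): 1.0, (3, 3): 1.0}
Sigma_inv_sqrt = np.eye(2)
phi0 = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)   # start from the identity map
fit = least_squares(weighted_residuals, phi0,
                    args=(model_pts, data_pts, weights, Sigma_inv_sqrt))
```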
5 Experiments
The real-world evaluation of our matching method is concerned with recognising planar objects in different 3D poses. The object used in this study is a 3.5 inch
floppy disk which is placed on a desktop. The scene is viewed with a low-quality SGI
IndyCam. The feature points used to triangulate the object are corners. Since the
imaging process is not accurately modelled by a perspective transformation under
pin-hole optics, the example provides a challenging test of our matching process.
Our experiments are illustrated in Figure 1. The first two columns show the views
under match. In the first example (the upper row of Figure 1) we are concerned with matching when there is a significant difference in perspective foreshortening. In
the example shown in the lower row of Figure 1, there is a rotation of the object
in addition to the foreshortening. The images in the third column are the initial
matching configurations. Here the perspective parameter matrix has been selected
at random. The fourth column in Figure 1 shows the final matching configuration
after the EM algorithm has converged. In both cases the final registration is accurate. The algorithm appears to be capable of recovering good matches even when
the initial pose estimate is poor.
Figure 1: Images Under Match, Initial and Final Configurations.
We now turn to measuring the sensitivity of our method. In order to illustrate
the benefits offered by the structural gating process, we compare its performance
with a conventional least-squares parameter estimation process. Figure 2 shows a
comparison of the two algorithms for a problem involving a point-set of 20 nodes.
Here we show the RMS error as a function of the number of points which have
correct correspondence matches. The break-even point occurs when 8 nodes are
initially matched correctly and there are 12 errors. Once the number of initially
correct correspondences exceeds 8 then the EM method consistently outperforms
the least-squares estimation.
6 Conclusions
Our main contributions in this paper are twofold. The theoretical contribution has been to develop a mixture model that allows a graphical structure to constrain the estimation of maximum likelihood model parameters. The second contribution is a
practical one, and involves the application of the mixture model to the estimation
of perspective pose parameters. There are a number of ways in which the ideas
developed in this paper can be extended. For instance, the framework is readily
extensible to the recognition of more complex non-planar objects.
Figure 2: Structural Sensitivity.
References
[1] Y. Amit and A. Kong, "Graphical Templates for Model Registration", IEEE PAMI, 18, pp. 225-236, 1996.
[2] A. P. Dempster, N. M. Laird and D. B. Rubin, "Maximum-likelihood from incomplete data via the EM algorithm", J. Royal Statistical Soc. Ser. B (methodological), 39, pp. 1-38, 1977.
[3] O. D. Faugeras, E. Le Bras-Mehlman and J.-D. Boissonnat, "Representing Stereo Data with the Delaunay Triangulation", Artificial Intelligence, 44, pp. 41-87, 1990.
[4] S. Gold, A. Rangarajan and E. Mjolsness, "Learning with pre-knowledge: Clustering with point and graph-matching distance measures", Neural Computation, 8, pp. 787-804, 1996.
[5] R. I. Hartley, "Projective Reconstruction and Invariants from Multiple Images", IEEE PAMI, 16, pp. 1036-1041, 1994.
[6] M. I. Jordan and R. A. Jacobs, "Hierarchical Mixtures of Experts and the EM Algorithm", Neural Computation, 6, pp. 181-214, 1994.
[7] M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Wurtz and W. Konen, "Distortion-invariant object-recognition in a dynamic link architecture", IEEE Transactions on Computers, 42, pp. 300-311, 1993.
[8] D. P. McReynolds and D. G. Lowe, "Rigidity Checking of 3D Point Correspondences under Perspective Projection", IEEE PAMI, 18, pp. 1174-1185, 1996.
[9] S. Moss and E. R. Hancock, "Registering Incomplete Radar Images with the EM Algorithm", Image and Vision Computing, 15, pp. 637-648, 1997.
[10] D. Oberkampf, D. F. DeMenthon and L. S. Davis, "Iterative Pose Estimation using Coplanar Feature Points", Computer Vision and Image Understanding, 63, pp. 495-511, 1996.
[11] P. Torr, A. Zisserman and S. J. Maybank, "Robust Detection of Degenerate Configurations for the Fundamental Matrix", Proceedings of the Fifth International Conference on Computer Vision, pp. 1037-1042, 1995.
[12] R. C. Wilson and E. R. Hancock, "Structural Matching by Discrete Relaxation", IEEE PAMI, 19, pp. 634-648, 1997.
364 | 1,332 | Mapping a manifold of perceptual observations
Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology, Cambridge, MA 02139
jbt@psyche.mit.edu
Abstract
Nonlinear dimensionality reduction is formulated here as the problem of trying to
find a Euclidean feature-space embedding of a set of observations that preserves
as closely as possible their intrinsic metric structure - the distances between points
on the observation manifold as measured along geodesic paths. Our isometric
feature mapping procedure, or isomap, is able to reliably recover low-dimensional
nonlinear structure in realistic perceptual data sets, such as a manifold of face
images, where conventional global mapping methods find only local minima.
The recovered map provides a canonical set of globally meaningful features,
which allows perceptual transformations such as interpolation, extrapolation, and
analogy - highly nonlinear transformations in the original observation space - to
be computed with simple linear operations in feature space.
1 Introduction
In psychological or computational research on perceptual categorization, it is generally taken
for granted that the perceiver has a priori access to a representation of stimuli in terms of
some perceptually meaningful features that can support the relevant classification. However,
these features will be related to the raw sensory input (e.g. values of retinal activity or image
pixels) only through a very complex transformation, which must somehow be acquired
through a combination of evolution, development, and learning. Fig. 1 illustrates the feature-discovery problem with an example from visual perception. The set of views of a face from
all possible viewpoints is an extremely high-dimensional data set when represented as image
arrays in a computer or on a retina; for example, 32 x 32 pixel grey-scale images can be
thought of as points in a 1,024-dimensional observation space. The perceptually meaningful
structure of these images, however, is of much lower dimensionality; all of the images in
Fig. 1 lie on a two-dimensional manifold parameterized by viewing angle. A perceptual
system that discovers this manifold structure has learned a model of the appearance of
this face that will support a wide range of recognition, classification, and imagery tasks
(some demonstrated in Fig. 1), despite the absence of any prior physical knowledge about
three-dimensional object geometry, surface texture, or illumination conditions.
Learning a manifold of perceptual observations is difficult because these observations
Figure 1: Isomap recovers a global topographic map of face images varying in two viewing
angle parameters, azimuth and elevation. Image interpolation (A), extrapolation (B), and
analogy (C) can then be carried out by linear operations in this feature space.
usually exhibit significant nonlinear structure. Fig. 2 provides a simplified version of this problem. A flat two-dimensional manifold has been nonlinearly embedded in a three-dimensional observation space,¹ and must be "unfolded" by the learner. For linearly embedded manifolds, principal component analysis (PCA) is guaranteed to discover the dimensionality of the manifold and produce a compact representation in the form of an orthonormal basis. However, PCA is completely insensitive to the higher-order, nonlinear structure that characterizes the points in Fig. 2A or the images in Fig. 1.
Nonlinear dimensionality reduction - the search for intrinsically low-dimensional structures embedded nonlinearly in high-dimensional observations - has long been a goal of
computational learning research. The most familiar nonlinear techniques. such as the
self-organizing map (SOM; Kohonen, 1988), the generative topographic mapping (GTM;
Bishon, Svensen, & Williams, 1998), or autoencoder neural networks (DeMers & Cottrell,
1993), try to generalize PCA by discovering a single global low-dimensional nonlinear
model of the observations. In contrast, local methods (Bregler & Omohundro. 1995; Hinton, Revow, & Dayan, 1995) seek a set of low-dimensional models, usually linear and
hence valid only for a limited range of data. When appropriate, a single global model is
¹Given by $x_1 = z_1\cos(z_1)$, $x_2 = z_1\sin(z_1)$, $x_3 = z_2$, for $z_1 \in [3\pi/2, 9\pi/2]$, $z_2 \in [0, 15]$.
Figure 2: A nonlinearly embedded manifold may create severe local minima for "top-down"
mapping algorithms. (A) Raw data. (B) Best SOM fit. (C) Best GTM fit.
more revealing and useful than a set of local models. However, local linear methods are in
general far more computationally efficient and reliable than global methods.
For example, despite the visually obvious structure in Fig. 2A, this manifold was not successfully modeled by either of two popular global mapping algorithms, SOM (Fig. 2B)
and GTM (Fig. 2C), under a wide range of parameter settings. Both of these algorithms
try to fit a grid of predefined (usually two-dimensional) topology to the data, using greedy
optimization techniques that first fit the large-scale (linear) structure of the data, before
making small-scale (nonlinear) refinements. The coarse structure of such "folded" data sets
as Fig. 2A hides their nonlinear structure from greedy optimizers, virtually ensuring that
top-down mapping algorithms will become trapped in highly suboptimal solutions.
Rather than trying to force a predefined map onto the data manifold, this paper shows how a
perceptual system may map a set of observations in a "bottom-up" fashion, by first learning
the topological structure of the manifold (as in Fig. 3A) and only then learning a metric map
of the data (as in Fig. 3C) that respects this topology. The next section describes the goals and
steps of the mapping procedure, and subsequent sections demonstrate applications to two
challenging learning tasks: recovering a five-dimensional manifold embedded nonlinearly
in 50 dimensions, and recovering the manifold of face images depicted in Fig. I.
2 Isometric feature mapping
We assume our data lie on an unknown manifold $M$ embedded in a high-dimensional observation space $X$. Let $x^{(i)}$ denote the coordinates of the $i$th observation. We seek a mapping $f : X \rightarrow Y$ from the observation space $X$ to a low-dimensional Euclidean feature space $Y$ that preserves as well as possible the intrinsic metric structure of the observations, i.e. the distances between observations as measured along geodesic (locally shortest) paths of $M$. The isometric feature mapping, or isomap, procedure presented below generates an implicit description of the mapping $f$, in terms of the corresponding feature points $y^{(i)} = f(x^{(i)})$ for sufficiently many observations $x^{(i)}$. Explicit parametric descriptions of $f$ or $f^{-1}$ can be found with standard techniques of function approximation (Poggio & Girosi, 1990) that interpolate smoothly between the known corresponding pairs $\{x^{(i)}, y^{(i)}\}$.
A Euclidean map of the data's intrinsic geometry has several important properties. First,
intrinsically similar observations should map to nearby points in feature space, supporting efficient similarity-based classification and informative visualization. Moreover, the
geodesic paths of the manifold, which are highly nonlinear in the original observation space,
should map onto straight lines in feature space. Then perceptually natural transfonnations
along these paths, such as the interpolation, extrapolation and analogy demonstrated in
Figs. IA-C, may be computed by trivial linear operations in feature space.
Figure 3: The results of the three-step isomap procedure. (A) Discrete representation of manifold in Fig. 2A. (B) Correlation between measured graph distances and true manifold distances. (C) Correspondence of recovered two-dimensional feature points $\{y_1, y_2\}$ (circles) with original generating vectors $\{z_1, z_2\}$ (line ends).
The isomap procedure consists of three main steps, each of which might be carried out by
more or less sophisticated techniques. The crux of isomap is finding an efficient way to compute the true geodesic distance between observations, given only their Euclidean distances in the high-dimensional observation space. Isomap assumes that distance between points in observation space is an accurate measure of manifold distance only locally and
must be integrated over paths on the manifold to obtain global distances. As preparation for
computing manifold distances, we first construct a discrete representation of the manifold
in the form of a topology-preserving network (Fig. 3A). Given this network representation,
we then compute the shortest-path distance between any two points in the network using
dynamic programming. This polynomial-time computation provides a good approximation
to the actual manifold distances (Fig. 3B) without having to search over all possible paths in
the network (let alone the infinitely many paths on the unknown manifold!). Finally, from
these manifold distances, we construct a global geometry-preserving map of the observations in a low-dimensional Euclidean space, using multidimensional scaling (Fig. 3C). The
implementation of this procedure is detailed below.
Step 1: Discrete representation of manifold (Fig. 3A). From the input data of $n$ observations $\{x^{(1)}, \ldots, x^{(n)}\}$, we randomly select a subset of $r$ points to serve as the nodes $\{g^{(1)}, \ldots, g^{(r)}\}$ of the topology-preserving network. We then construct a graph $G$ over these nodes by connecting $g^{(i)}$ and $g^{(j)}$ if and only if there exists at least one $x^{(k)}$ whose two closest nodes (in observation space) are $g^{(i)}$ and $g^{(j)}$ (Martinetz & Schulten, 1994). The resulting graph for the data in Fig. 2A is shown in Fig. 3A (with $n = 10^4$, $r = 10^3$). This graph clearly respects the topology of the manifold far better than the best fits with SOM (Fig. 2B) or GTM (Fig. 2C). In the limit of infinite data, the graph thus produced converges to the Delaunay triangulation of the nodes, restricted to the data manifold (Martinetz & Schulten, 1994). In practice, $n = 10^4$ data points have proven sufficient for all examples we have tried. This number may be reduced significantly if we know the dimensionality $d$ of the manifold, but here we assume no a priori information about dimensionality. The choice of $r$, the number of nodes in $G$, is the only free parameter in isomap. If $r$ is too small, the shortest-path distances between nodes in $G$ will give a poor approximation to their true manifold distance. If $r$ is too big (relative to $n$), $G$ will be missing many appropriate links (because each data point $x^{(i)}$ contributes at most one link). In practice, choosing a satisfactory $r$ is not difficult - all three examples presented in this paper use $r = n/10$, the first value tried. I am currently exploring criteria for selecting the optimal value $r$ based on statistical arguments and dimensionality considerations.
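A small sketch of this graph-construction rule (connect the two nodes that are jointly closest to some data point) is given below. It is an illustrative reading of the rule as described here, with randomly generated placeholder data, not the author's code.

```python
import numpy as np

def build_topology_graph(X, nodes):
    """Connect node pair (a, b) whenever they are the two nearest nodes to some data point."""
    edges = set()
    for x in X:
        d = np.linalg.norm(nodes - x, axis=1)
        a, b = np.argsort(d)[:2]                 # two closest nodes in observation space
        edges.add((min(a, b), max(a, b)))
    return edges

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                               # placeholder observations
nodes = X[rng.choice(len(X), size=100, replace=False)]       # r randomly selected nodes
G_edges = build_topology_graph(X, nodes)
```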
Step 2: Manifold distance measure (Fig. 3B). We first assign a weight to each link $w_{ij}$ in the graph $G$, equal to $d^X_{ij} = \|x^{(i)} - x^{(j)}\|$, the Euclidean distance between nodes $i$ and $j$ in the observation space $X$. The length of a path in $G$ is defined to be the sum of link weights along that path. We then compute the geodesic distance $d^G_{ij}$ (i.e. shortest path length) between all pairs of nodes $i$ and $j$ in $G$, using Floyd's $O(r^3)$ algorithm (Foster, 1995).
Figure 4: Given a 5-dimensional manifold embedded nonlinearly in a 50-dimensional space, isomap identifies the intrinsic dimensionality (A), while PCA and MDS alone do not (B).
Initialize $d^G_{ij} = d^X_{ij}$ if nodes $i$ and $j$ are connected and $\infty$ otherwise. Then for each node $k$, set each $d^G_{ij} = \min(d^G_{ij},\, d^G_{ik} + d^G_{kj})$. Fig. 3B plots the distances $d^G_{ij}$ computed between nodes $i$ and $j$ in the graph of Fig. 3A versus their actual manifold distances $d^M_{ij}$. Note that the correlation is almost perfect ($R > .99$), but $d^G_{ij}$ tends to overestimate $d^M_{ij}$ by a constant factor due to the discretization introduced by the graph. As the density of observations increases, so does the possible graph resolution. Thus, in the limit of infinite data, the graph-based approximation to manifold distance may be made arbitrarily accurate.
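For illustration, Step 2 can be prototyped in a few lines: the graph edges feed a weight matrix and Floyd's algorithm turns it into all-pairs shortest-path (graph-geodesic) distances. This is a sketch under the assumptions above, with a toy chain of nodes, not an excerpt from the paper.

```python
import numpy as np

def floyd_geodesics(nodes, edges):
    """All-pairs shortest-path (graph-geodesic) distances on the weighted topology graph."""
    r = len(nodes)
    d = np.full((r, r), np.inf)
    np.fill_diagonal(d, 0.0)
    for i, j in edges:
        w = np.linalg.norm(nodes[i] - nodes[j])   # Euclidean link weight
        d[i, j] = d[j, i] = w
    for k in range(r):                            # Floyd's O(r^3) dynamic-programming update
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

# Placeholder nodes on a line, chained by links, so geodesics accumulate along the chain.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
edges = {(0, 1), (1, 2), (2, 3)}
print(floyd_geodesics(nodes, edges))
```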
Step 3: Isometric Euclidean embedding (Fig. 3C). We use ordinal multidimensional scaling (MDS; Cox & Cox, 1994; code provided by Brian Ripley), also called "nonmetric" MDS, to find a $k$-dimensional Euclidean embedding that preserves as closely as possible the graph distances $d^G_{ij}$. In contrast to classical "metric" MDS, which explicitly tries to preserve distances, ordinal MDS tries to preserve only the rank ordering of distances. MDS finds a configuration of $k$-dimensional feature vectors $\{y^{(1)}, \ldots, y^{(r)}\}$, corresponding to the high-dimensional observations $\{x^{(1)}, \ldots, x^{(r)}\}$, that minimizes the stress function,
$$
S = \min_{\hat{d}^G} \sqrt{\frac{\sum_{i<j}\left(d^Y_{ij} - \hat{d}^G_{ij}\right)^2}{\sum_{i<j}\left(d^Y_{ij}\right)^2}}
\qquad (1)
$$
Here $d^Y_{ij} = \|y^{(i)} - y^{(j)}\|$, the Euclidean distance between feature vectors $i$ and $j$, and the $\hat{d}^G_{ij}$ are some monotonic transformation of the graph distances $d^G_{ij}$. We use ordinal MDS because it is less sensitive to noisy estimates of manifold distance. Moreover, when the number of points scaled is large enough (as it is in all our examples), ordinal constraints alone are sufficient to reconstruct a precise metric map. Fig. 3C shows the projections of 100 random points on the manifold in Fig. 2A onto a two-dimensional feature space computed by MDS from the graph distances output by step 2 above. These points are in close correspondence (after rescaling) with the original two-dimensional vectors used to generate the manifold (see note 1), indicating that isomap has successfully unfolded the manifold onto a 2-dimensional Euclidean plane.
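The paper uses ordinal (nonmetric) MDS; as a simpler stand-in for illustration, classical metric MDS embeds the graph distances by eigendecomposition of the doubly-centred squared-distance matrix. The sketch below is that simplification, not the procedure actually used in the experiments.

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed a symmetric distance matrix d into k dimensions by classical (metric) MDS."""
    r = d.shape[0]
    J = np.eye(r) - np.ones((r, r)) / r            # centring matrix
    B = -0.5 * J @ (d ** 2) @ J                    # doubly-centred squared distances
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:k]           # top-k eigenpairs
    return eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))

# Toy distance matrix: four points on a line.
d = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
Y = classical_mds(d, k=1)
```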
3 Example 1: Five-dimensional manifold
This section demonstrates isomap's ability to discover and model a noisy five-dimensional
manifold embedded within a 50-dimensional space. As the dimension of the manifold
increases beyond two, SOM, GTM, and other constrained clustering approaches become
impractical due to the exponential proliferation of cluster centers. Isomap, however, is
quite practical for manifolds of moderate dimensionality, because the estimates of manifold
distance for a fixed graph size degrade gracefully as dimensionality increases. Moreover,
isomap is able to automatically discover the intrinsic dimensionality of the data, while
conventional methods must be initialized with a fixed dimensionality.
We consider a 5-dimensional manifold parameterized by $\{z_1, \ldots, z_5\} \in [0,4]^5$. The first 10 of 50 observation dimensions were determined by nonlinear functions of these parameters.²
²$x_1 = \cos(\pi z_1)$, $x_2 = \sin(\pi z_1)$, $x_3 = \cos(\frac{\pi}{2} z_1)$, $x_4 = \sin(\frac{\pi}{2} z_1)$, $x_5 = \cos(\frac{\pi}{4} z_1)$, $x_6 = \sin(\frac{\pi}{4} z_1)$, $x_7 = z_2\cos^2(\frac{\pi}{2} z_1) + z_3\sin^2(\frac{\pi}{2} z_1)$, $x_8 = z_2\sin^2(\frac{\pi}{2} z_1) + z_3\cos^2(\frac{\pi}{2} z_1)$, $x_9 = z_4\cos^2(\frac{\pi}{2} z_1) + z_5\sin^2(\frac{\pi}{2} z_1)$, $x_{10} = z_4\sin^2(\frac{\pi}{2} z_1) + z_5\cos^2(\frac{\pi}{2} z_1)$.
Low-amplitude Gaussian noise (4-5% of variance) was added to each of these dimensions, and the remaining 40 dimensions were set to pure noise of similar variance. The isomap procedure applied to this data ($n = 10^4$, $r = 10^3$) correctly recognized its intrinsic five-dimensionality, as indicated by the sharp decrease of stress (see Eq. 1) for embedding dimensions up to 5 and only gradual decrease thereafter (Fig. 4A). In contrast, both PCA and raw MDS (using distances in observation space rather than manifold distances) identify the 10-dimensional linear subspace containing the data, but show no sensitivity to the underlying five-dimensional manifold (Fig. 4B).
4 Example 2: Two-dimensional manifold of face images
This section illustrates the performance of isomap on the two-dimensional manifold of face images shown in Fig. 1. To generate this map, 32 x 32-pixel images of a face were first rendered in MATLAB in many different poses (azimuth $\in [-90°, 90°]$, elevation $\in [-10°, 10°]$), using a 3-D range image of an actual head and a combination of Lambertian and specular reflectance models. To save computation, the data ($n = 10^4$ images) were first reduced to 60 principal components and then submitted to isomap ($r = 10^3$). The
plot of stress S vs. dimension indicated a dimensionality of two (even more clearly than
Fig. 4A). Fig. 1 shows the two-dimensional feature space that results from applying MDS to
the computed graph distances, with 25 face images placed at their corresponding points in
feature space. Note the clear topographic representation of similar views at nearby feature
points. The principal axes of the feature space can be identified as the underlying viewing
angle parameters used to generate the data. The correlations of the two isomap dimensions
with the two pose angles are R = .99 and R = .95 respectively. No other global mapping
procedure tried (PCA, MDS, SOM, GTM) produced interpretable results for these data.
The human visual system's implicit knowledge of an object's appearance is not limited to
a representation of view similarity, and neither is isomap's. As mentioned in Section 2, an
isometric feature map also supports analysis and manipulation of data, as a consequence of
mapping geodesics of the observation manifold to straight lines in feature space. Having
found a number of corresponding pairs $\{x^{(i)}, y^{(i)}\}$ of images $x^{(i)}$ and feature vectors $y^{(i)}$, it is easy to learn an explicit inverse mapping $f^{-1} : Y \rightarrow X$ from low-dimensional feature space to high-dimensional observation space, using generic smooth interpolation techniques such as generalized radial basis function (GRBF) networks (Poggio & Girosi, 1990). All
images in Fig. 1 have been synthesized from such a mapping. 3
Figs. 1A-C show how learning this inverse mapping allows interpolation, extrapolation, and analogy to be carried out using only linear operations. We can interpolate between two images $x^{(1)}$ and $x^{(2)}$ by synthesizing a sequence of images along their connecting line $(y^{(2)} - y^{(1)})$ in feature space (Fig. 1A). We can extrapolate the transformation from one image to another and far beyond, by following the line to the edge of the manifold (Fig. 1B). We can map the transformation between two images $x^{(1)}$ and $x^{(2)}$ onto an analogous transformation of another image $x^{(3)}$, by adding the transformation vector $(y^{(2)} - y^{(1)})$ to $y^{(3)}$ and synthesizing a new image at the resulting feature coordinates (Fig. 1C).
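These three operations reduce to one-line vector arithmetic once an inverse map from feature space back to images is available. The sketch below assumes such a map `f_inv` (e.g. a fitted RBF interpolator) and three feature vectors; all names and values are placeholders for illustration.

```python
import numpy as np

def interpolate(f_inv, y1, y2, steps=5):
    """Images along the straight feature-space line from y1 to y2."""
    return [f_inv(y1 + t * (y2 - y1)) for t in np.linspace(0.0, 1.0, steps)]

def extrapolate(f_inv, y1, y2, factor=2.0):
    """Continue past y2 along the same line."""
    return f_inv(y1 + factor * (y2 - y1))

def analogy(f_inv, y1, y2, y3):
    """Apply the y1 -> y2 transformation to y3."""
    return f_inv(y3 + (y2 - y1))

# Placeholder inverse map and feature points (2-D feature space, 4-component "images").
f_inv = lambda y: np.concatenate([y, y ** 2])
y1, y2, y3 = np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([-0.5, 1.0])
frames = interpolate(f_inv, y1, y2)
```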
A number of authors (Bregler & Omohundro, 1995; Saul & Jordan, 1997; Beymer &
Poggio, 1995) have previously shown how learning from examples allows sophisticated
3The map from feature vectors to images was learned by fitting a GRBF net to 1000 corresponding
points in both spaces. Each point corresponds to a node in the graph G used to measure manifold
distance, so the feature-space distances required to fit the GRBF net are given (approximately) by the
graph distances $d^G_{ij}$ computed in step 2 of isomap. A subset $C$ of $m = 300$ points were randomly chosen as RBF centers, and the standard deviation of the RBFs was set equal to $\max_{i,j \in C} d^G_{ij}/\sqrt{2m}$ (as prescribed by Haykin, 1994).
image manipulations to be carried out efficiently. However, these approaches do not support
as broad a range of transformations as isomap does, because of their use of only locally
valid models and/or the need to compute special-purpose image features such as optical
flow. See Tenenbaum (1997) for further discussion, as well as examples of isomap applied
to more complex manifolds of visual observations.
5 Conclusions
The essence of the isomap approach to nonlinear dimensionality reduction lies in the
novel problem formulation: to seek a low-dimensional Euclidean embedding of a set of
observations that captures their intrinsic similarities, as measured along geodesic paths of the
observation manifold. Here I have presented an efficient algorithm for solving this problem
and shown that it can discover meaningful feature-space models of manifolds for which
conventional "top-down" approaches fail. As a direct consequence of mapping geodesics
to straight lines in feature space, isomap learns a representation of perceptual observations
in which it is easy to perform interpolation and other complex transformations. A negative
consequence of this strong problem formulation is that isomap will not be applicable to
every data manifold. However, as with the classic technique of PCA, we can state clearly
the general class of data for which isomap is appropriate - manifolds with no "holes" and
no intrinsic curvature - with a guarantee that isomap will succeed on data sets from this
class, given enough samples from the manifold. Future work will focus on generalizing
this domain of applicability to allow for manifolds with more complex topologies and
significant curvature, as would be necessary to model certain perceptual manifolds such as
the complete view space of an object.
Acknowledgements
Thanks to M. Bernstein, W. Freeman, S. Gilbert, W. Richards, and Y. Weiss for helpful discussions.
The author is a Howard Hughes Medical Institute Predoctoral Fellow.
References
Beymer, D. & Poggio, T. (1995). Representations for visual learning. Science, 272, 1905.
Bishop, C., Svensen, M., & Williams, C. (1998). GTM: The generative topographic mapping. Neural Computation, 10(1).
Bregler, C. & Omohundro, S. (1995). Nonlinear image interpolation using manifold learning. NIPS 7. MIT Press.
Cox, T. & Cox, M. (1994). Multidimensional scaling. Chapman & Hall.
DeMers, D. & Cottrell, G. (1993). Nonlinear dimensionality reduction. NIPS 5. Morgan Kaufmann.
Foster, I. (1995). Designing and building parallel programs. Addison-Wesley.
Haykin, S. (1994). Neural Networks: A Comprehensive Foundation. Macmillan.
Hinton, G., Revow, M., & Dayan, P. (1995). Recognizing handwritten digits using mixtures of linear models. NIPS 7. MIT Press.
Kohonen, T. (1988). Self-Organization and Associative Memory. Berlin: Springer.
Martinetz, T. & Schulten, K. (1994). Topology representing networks. Neural Networks, 7, 507.
Poggio, T. & Girosi, F. (1990). Networks for approximation and learning. Proc. IEEE, 78, 1481.
Saul, L. & Jordan, M. (1997). A variational principle for model-based morphing. NIPS 9. MIT Press.
Tenenbaum, J. (1997). Unsupervised learning of appearance manifolds. Manuscript submitted.
curvature:2 hide:1 triangulation:1 moderate:1 manipulation:2 certain:1 arbitrarily:1 joshua:1 preserving:3 minimum:2 morgan:1 recognized:1 shortest:4 ii:4 smooth:1 long:1 ensuring:1 metric:6 martinetz:3 virtually:1 flow:1 jordan:2 jec:1 bernstein:1 iii:3 enough:2 easy:2 fit:6 zi:7 specular:1 topology:7 suboptimal:1 identified:1 pca:7 granted:1 wo:1 matlab:1 generally:1 useful:1 detailed:1 clear:1 tenenbaum:6 locally:3 reduced:2 generate:3 canonical:1 zj:1 trapped:1 correctly:1 discrete:3 thereafter:1 neither:1 graph:17 sum:1 angle:4 parameterized:2 inverse:2 almost:1 scaling:3 guaranteed:1 fll:1 correspondence:2 topological:1 fzi:1 activity:1 constraint:1 x2:2 flat:1 nearby:2 generates:1 x7:1 argument:1 extremely:1 min:2 prescribed:1 optical:1 rendered:1 department:1 combination:2 poor:1 describes:1 psyche:1 wi:1 making:1 restricted:1 taken:1 computationally:1 visualization:1 previously:1 fail:1 know:1 ordinal:4 addison:1 end:1 operation:4 appropriate:3 generic:1 save:1 original:4 top:3 assumes:1 clustering:1 remaining:1 xc:2 reflectance:1 threedimensional:1 classical:1 added:1 parametric:1 rt:1 md:4 exhibit:1 subspace:1 distance:36 link:4 berlin:1 gracefully:1 degrade:1 manifold:66 trivial:1 length:2 code:1 modeled:1 difficult:2 negative:1 synthesizing:2 implementation:1 reliably:1 zt:1 unknown:2 perform:1 predoctoral:1 observation:38 howard:1 supporting:1 hinton:2 precise:1 head:1 gc:2 sharp:1 introduced:1 successfuly:1 nonlinearly:5 pair:3 required:1 learned:2 nip:4 able:2 beyond:2 usually:3 perception:1 below:2 kauffman:1 program:1 reliable:1 max:1 memory:1 ia:1 natural:1 force:1 representing:1 technology:1 identifies:1 carried:4 x8:1 autoencoder:1 prior:1 morphing:1 acknowledgement:1 relative:1 embedded:8 analogy:4 proven:1 versus:1 foundation:1 sufficient:2 foster:2 viewpoint:1 principle:1 lo:1 placed:1 free:1 allow:1 institute:2 wide:2 saul:2 face:9 dimension:10 valid:2 sensory:1 author:2 made:1 refinement:1 simplified:1 far:3 compact:1 global:9 xi:2 ripley:1 search:2 learn:1 contributes:1 complex:4 som:6 domain:1 main:1 linearly:1 big:1 noise:2 fig:39 je:1 fashion:1 schulten:3 explicit:2 exponential:1 lie:3 perceptual:11 ib:1 learns:1 down:3 bishop:1 x:1 intrinsic:8 exists:1 adding:1 texture:1 perceptually:3 illustrates:2 illumination:1 hole:1 smoothly:1 depicted:1 generalizing:1 appearance:3 infinitely:1 beymer:2 visual:4 macmillan:1 monotonic:1 springer:1 corresponds:1 ma:1 succeed:1 goal:2 formulated:1 rbf:1 revow:2 absence:1 folded:1 infinite:2 determined:1 principal:3 called:1 la:2 meaningful:4 indicating:1 select:1 support:4 preparation:1 d1:2 extrapolate:1 |
365 | 1,333 | The Efficiency and The Robustness of
Natural Gradient Descent Learning Rule
Howard Hua Yang
Department of Computer Science
Oregon Graduate Institute
PO Box 91000, Portland, OR 97291, USA
hyang@cse.ogi.edu
Shun-ichi Amari
Lab. for Information Synthesis
RlKEN Brain Science Institute
Wako-shi, Saitama 351-01, JAPAN
amari@zoo.brain.riken.go.jp
Abstract
The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer and multi-layer
perceptrons. We have discovered a new scheme to represent the
Fisher information matrix of a stochastic multi-layer perceptron.
Based on this scheme, we have designed an algorithm to compute
the natural gradient. When the input dimension n is much larger
than the number of hidden neurons, the complexity of this algorithm is of order O(n). It is confirmed by simulations that the
natural gradient descent learning rule is not only efficient but also
robust.
1 INTRODUCTION
The inverse of the Fisher information matrix is required to find the Cramér-Rao lower bound to analyze the performance of an unbiased estimator. It is also needed
in the natural gradient learning framework (Amari, 1997) to design statistically
efficient algorithms for estimating parameters in general and for training neural
networks in particular. In this paper, we assume a stochastic model for multilayer perceptrons. Considering a Riemannian parameter space in which the Fisher
information matrix is a metric tensor, we apply the natural gradient learning rule to
train single-layer and multi-layer perceptrons. The main difficulty encountered is to
compute the inverse of the Fisher information matrix of large dimensions when the
input dimension is high. By exploring the structure of the Fisher information matrix
and its inverse, we design a fast algorithm with lower complexity to implement the
natural gradient learning algorithm.
2 A STOCHASTIC MULTI-LAYER PERCEPTRON
Assume the following model of a stochastic multi-layer perceptron:
$$
z = \sum_{i=1}^{m} a_i\,\varphi(\mathbf{w}_i^T \mathbf{x} + b_i) + \xi
\qquad (1)
$$
where $(\cdot)^T$ denotes the transpose, $\xi \sim N(0, \sigma^2)$ is a Gaussian random variable, and $\varphi(x)$ is a differentiable output function for hidden neurons. Assume the multi-layer network has an $n$-dimensional input, $m$ hidden neurons, a one dimensional output, and $m \le n$. Denote $\mathbf{a} = (a_1, \ldots, a_m)^T$ the weight vector of the output neuron, $\mathbf{w}_i = (w_{1i}, \ldots, w_{ni})^T$ the weight vector of the $i$-th hidden neuron, and $\mathbf{b} = (b_1, \ldots, b_m)^T$ the vector of thresholds for the hidden neurons. Let $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_m]$ be a matrix formed by column weight vectors $\mathbf{w}_i$, then (1) can be rewritten as $z = \mathbf{a}^T\varphi(\mathbf{W}^T\mathbf{x} + \mathbf{b}) + \xi$. Here, the scalar function $\varphi$ operates on each component of the vector $\mathbf{W}^T\mathbf{x} + \mathbf{b}$.
The joint probability density function (pdf) of the input and the output is
$$p(\mathbf{x}, z; \mathbf{W}, \mathbf{a}, \mathbf{b}) = p(z|\mathbf{x}; \mathbf{W}, \mathbf{a}, \mathbf{b})\, p(\mathbf{x}).$$
Define a loss function:
$$L(\mathbf{x}, z; \theta) = -\log p(\mathbf{x}, z; \theta) = l(z|\mathbf{x}; \theta) - \log p(\mathbf{x})$$
where $\theta = (\mathbf{w}_1^T, \ldots, \mathbf{w}_m^T, \mathbf{a}^T, \mathbf{b}^T)^T$ includes all the parameters to be estimated and
$$l(z|\mathbf{x}; \theta) = -\log p(z|\mathbf{x}; \theta) = \frac{1}{2\sigma^2}\left(z - \mathbf{a}^T\varphi(\mathbf{W}^T\mathbf{x} + \mathbf{b})\right)^2.$$
Since $\frac{\partial L}{\partial \theta} = \frac{\partial l}{\partial \theta}$, the Fisher information matrix is defined by
$$G(\theta) = E\left[\frac{\partial L}{\partial \theta}\left(\frac{\partial L}{\partial \theta}\right)^T\right] = E\left[\frac{\partial l}{\partial \theta}\left(\frac{\partial l}{\partial \theta}\right)^T\right] \qquad (2)$$
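Equation (2) suggests a direct way to approximate the Fisher information numerically: average the outer product of the per-example score $\partial l/\partial\theta$ over samples drawn from the model. The sketch below does this with finite-difference gradients for a generic negative log-likelihood; it illustrates the definition only, not the efficient scheme for $G^{-1}(\theta)$ developed in this paper.

```python
import numpy as np

def score(neg_log_lik, theta, x, z, eps=1e-5):
    """Finite-difference gradient of l(z|x; theta) with respect to theta."""
    g = np.zeros_like(theta)
    for k in range(len(theta)):
        step = np.zeros_like(theta)
        step[k] = eps
        g[k] = (neg_log_lik(theta + step, x, z) - neg_log_lik(theta - step, x, z)) / (2 * eps)
    return g

def fisher_information(neg_log_lik, theta, samples):
    """Monte Carlo estimate of G(theta) = E[(dl/dtheta)(dl/dtheta)^T]."""
    G = np.zeros((len(theta), len(theta)))
    for x, z in samples:
        g = score(neg_log_lik, theta, x, z)
        G += np.outer(g, g)
    return G / len(samples)

# Minimal demo: single-layer model z = tanh(w.x) + noise, with sigma = 1.
def nll(theta, x, z):
    return 0.5 * (z - np.tanh(theta @ x)) ** 2

rng = np.random.default_rng(0)
theta = np.array([0.5, -0.3])
xs = rng.normal(size=(200, 2))
samples = [(x, np.tanh(theta @ x) + rng.normal()) for x in xs]
print(fisher_information(nll, theta, samples))
```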
The inverse of $G(\theta)$ is often used in the Cramér-Rao inequality:
$$E[\|\hat\theta - \theta^*\|^2 \mid \theta^*] \ge \mathrm{Tr}(G^{-1}(\theta^*))$$
where $\hat\theta$ is an unbiased estimator of a true parameter $\theta^*$. For the on-line estimator $\hat\theta_t$ based on the independent examples $\{(\mathbf{x}_s, z_s),\ s = 1, \ldots, t\}$ drawn from the probability law $p(\mathbf{x}, z; \theta^*)$, the Cramér-Rao inequality for the on-line estimator is
$$E[\|\hat\theta_t - \theta^*\|^2 \mid \theta^*] \ge \frac{1}{t}\,\mathrm{Tr}(G^{-1}(\theta^*)) \qquad (3)$$
3 NATURAL GRADIENT LEARNING
Consider a parameter space $\Theta = \{\theta\}$ in which the divergence between two points $\theta_1$ and $\theta_2$ is given by the Kullback-Leibler divergence
$$D(\theta_1, \theta_2) = KL\left[p(\mathbf{x}, z; \theta_1)\,\|\,p(\mathbf{x}, z; \theta_2)\right].$$
When the two points are infinitesimally close, we have the quadratic form
$$D(\theta, \theta + d\theta) = \frac{1}{2}\,d\theta^T G(\theta)\, d\theta. \qquad (4)$$
This is regarded as the square of the length of $d\theta$. Since $G(\theta)$ depends on $\theta$, the parameter space is regarded as a Riemannian space in which the local distance is defined by (4). Here, the Fisher information matrix $G(\theta)$ plays the role of the Riemannian metric tensor.
It is shown by Amari (1997) that the steepest descent direction of a loss function $C(\theta)$ in the Riemannian space $\Theta$ is
$$-\tilde\nabla C(\theta) = -G^{-1}(\theta)\nabla C(\theta).$$
The natural gradient descent method is to decrease the loss function by updating the parameter vector along this direction. By multiplying by $G^{-1}(\theta)$, the covariant gradient $\nabla C(\theta)$ is converted into its contravariant form $G^{-1}(\theta)\nabla C(\theta)$, which is consistent with the contravariant differential form $dC(\theta)$.
Instead of using $l(z|\mathbf{x}; \theta)$ we use the following loss function:
$$l_1(z|\mathbf{x}; \theta) = \frac{1}{2}\left(z - \mathbf{a}^T\varphi(\mathbf{W}^T\mathbf{x} + \mathbf{b})\right)^2.$$
We have proved in [5] that $G(\theta) = \frac{1}{\sigma^2}A(\theta)$ where $A(\theta)$ does not depend on the unknown $\sigma$. So $G^{-1}(\theta)\frac{\partial l}{\partial\theta} = A^{-1}(\theta)\frac{\partial l_1}{\partial\theta}$. The on-line learning algorithms based on the gradient $\frac{\partial l_1}{\partial\theta}$ and the natural gradient $A^{-1}(\theta)\frac{\partial l_1}{\partial\theta}$ are, respectively,
$$\theta_{t+1} = \theta_t - \mu_t\,\frac{\partial l_1}{\partial\theta}(z_t|\mathbf{x}_t; \theta_t), \qquad (5)$$
$$\theta_{t+1} = \theta_t - \mu_t'\,A^{-1}(\theta_t)\,\frac{\partial l_1}{\partial\theta}(z_t|\mathbf{x}_t; \theta_t) \qquad (6)$$
where $\mu_t$ and $\mu_t'$ are learning rates.
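As a generic illustration of updates (5) and (6): the ordinary step follows the raw gradient, while the natural step first preconditions it with the inverse metric. The sketch below leaves the metric as an argument; with the identity matrix it reduces to (5). All names and values are placeholders.

```python
import numpy as np

def ordinary_step(theta, grad, mu):
    """Update (5): plain stochastic gradient descent."""
    return theta - mu * grad

def natural_step(theta, grad, metric, mu):
    """Update (6): precondition the gradient with the inverse of the metric A(theta)."""
    return theta - mu * np.linalg.solve(metric, grad)

# With metric = identity, the two updates coincide.
theta = np.array([0.2, -0.1])
grad = np.array([0.05, 0.02])
print(ordinary_step(theta, grad, 0.1), natural_step(theta, grad, np.eye(2), 0.1))
```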
When the negative log-likelihood function is chosen as the loss function, the natural
gradient descent algorithm (6) gives a Fisher efficient on-line estimator (Amari,
1997), i.e., the asymptotic variance of $\theta_t$ driven by (6) satisfies
$$E[(\theta_t - \theta^*)(\theta_t - \theta^*)^T \mid \theta^*] \simeq \frac{1}{t}\,G^{-1}(\theta^*) \qquad (7)$$
which gives the mean square error
$$E[\|\theta_t - \theta^*\|^2 \mid \theta^*] \simeq \frac{1}{t}\,\mathrm{Tr}(G^{-1}(\theta^*)). \qquad (8)$$
The main difficulty in implementing the natural gradient descent algorithm (6) is
to compute the natural gradient on-line. To overcome this difficulty, we studied the
structure of the matrix $A(\theta)$ in [5] and proposed an efficient scheme to represent
this matrix. Here, we briefly describe this scheme.
Let $A(\theta) = [A_{ij}]_{(m+2)\times(m+2)}$ be a partition of $A(\theta)$ corresponding to the partition of $\theta = (\mathbf{w}_1^T, \ldots, \mathbf{w}_m^T, \mathbf{a}^T, \mathbf{b}^T)^T$. Denote $\mathbf{u}_i = \mathbf{w}_i/\|\mathbf{w}_i\|$, $i = 1, \ldots, m$, $U_1 = [\mathbf{u}_1, \ldots, \mathbf{u}_m]$ and $[\mathbf{v}_1, \ldots, \mathbf{v}_m] = U_1(U_1^T U_1)^{-1}$. It has been proved in [5] that the blocks in $A(\theta)$ are divided into three classes: $C_1 = \{A_{ij},\ i, j = 1, \ldots, m\}$, $C_2 = \{A_{i,m+1}, A_{m+1,i}, A_{i,m+2}, A_{m+2,i},\ i = 1, \ldots, m\}$ and $C_3 = \{A_{m+i,m+j},\ i, j = 1, 2\}$. Each block in $C_1$ is a linear combination of matrices $\mathbf{u}_k\mathbf{v}_l^T$, $k, l = 1, \ldots, m$, and $\Pi_0 = I - \sum_{k=1}^{m}\mathbf{u}_k\mathbf{v}_k^T$. Each block in $C_2$ is a matrix whose columns are linear combinations of $\{\mathbf{v}_k,\ k = 1, \ldots, m\}$. The coefficients in these combinations are integrals with respect to the multivariate Gaussian distribution $N(0, R_1)$ where
$R_1 = U_1^T U_1$ is $m \times m$. Each block in $C_3$ is an $m \times m$ matrix whose entries are also integrals with respect to $N(0, R_1)$. Detailed expressions for these integrals are given in [5]. When $\varphi(x) = \mathrm{erf}(\frac{x}{\sqrt{2}})$, using the techniques in (Saad and Solla, 1995), we can find the analytic expressions for most of these integrals.
The dimension of $A(\theta)$ is $(nm + 2m) \times (nm + 2m)$. When the input dimension $n$ is much larger than the number of hidden neurons, by using the above scheme, the space for storing this large matrix is reduced from $O(n^2)$ to $O(n)$. We also gave a fast algorithm in [5] to compute $A^{-1}(\theta)$ and the natural gradient with the time complexity $O(n^2)$ and $O(n)$ respectively. The trick is to make use of the structure of the matrix $A^{-1}(\theta)$.
4 SIMULATION
In this section, we give some simulation results to demonstrate that the natural
gradient descent algorithm is efficient and robust .
4.1 Single-layer perceptron
Assume 7-dimensional inputs $\mathbf{x}_t \sim N(0, I)$ and $\varphi(u) = \frac{1-e^{-u}}{1+e^{-u}}$. For the single-layer perceptron, $z = \varphi(\mathbf{w}^T\mathbf{x})$, the on-line gradient descent (GD) and the natural GD algorithms are respectively
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \mu_0(t)\left(z_t - \varphi(\mathbf{w}_t^T\mathbf{x}_t)\right)\varphi'(\mathbf{w}_t^T\mathbf{x}_t)\,\mathbf{x}_t \quad\text{and} \qquad (9)$$
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \mu_1(t)\,A^{-1}(\mathbf{w}_t)\left(z_t - \varphi(\mathbf{w}_t^T\mathbf{x}_t)\right)\varphi'(\mathbf{w}_t^T\mathbf{x}_t)\,\mathbf{x}_t \qquad (10)$$
where $A(\mathbf{w})$ has the form
$$A(\mathbf{w}) = d_1(\mathbf{w})\left(I - \frac{\mathbf{w}\mathbf{w}^T}{\|\mathbf{w}\|^2}\right) + d_2(\mathbf{w})\,\frac{\mathbf{w}\mathbf{w}^T}{\|\mathbf{w}\|^2}, \qquad (11)$$
the scalar functions $d_1(\mathbf{w})$ and $d_2(\mathbf{w})$ are given by the Gaussian integrals (12) and (13) derived in [5],
and $\mu_0(t)$ and $\mu_1(t)$ are two learning rate schedules defined by $\mu_i(t) = \mu(\eta_i, c_i, \tau_i; t)$, $i = 0, 1$. Here,
$$\mu(\eta, c, \tau; t) = \eta\left(1 + \frac{c\,t}{\eta\,\tau}\right)\Big/\left(1 + \frac{c\,t}{\eta\,\tau} + \frac{t^2}{\tau}\right) \qquad (14)$$
is the search-then-converge schedule proposed by (Darken and Moody, 1992). Note that $t < \tau$ is a "search phase" and $t > \tau$ is a "converge phase". When $\tau_i = 1$, the learning rate function $\mu_i(t)$ has no search phase but a weaker converge phase when $\eta_i$ is small. When $t$ is large, $\mu_i(t)$ decreases as $\frac{1}{t}$.
Randomly choose a 7-dimensional vector as $\mathbf{w}^*$ for the teacher network:
$$\mathbf{w}^* = [-1.1043, 0.4302, 1.1978, 1.5317, -2.2946, -0.7866, 0.4428]^T.$$
Choose $\eta_0 = 1.25$, $\eta_1 = 0.05$, $c_0 = 8.75$, $c_1 = 1$, and $\tau_0 = \tau_1 = 1$. These parameters are selected by trial and error to optimize the performance of the GD and the natural GD methods at the noise level $\sigma = 0.2$. The training examples $\{(\mathbf{x}_t, z_t)\}$ are generated by $z_t = \varphi(\mathbf{w}^{*T}\mathbf{x}_t) + \xi_t$ where $\xi_t \sim N(0, \sigma^2)$ and $\sigma^2$ is unknown to the algorithms.
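A compact simulation of this setup (teacher vector, noisy examples, the schedule of equation (14), and the plain GD update (9)) is sketched below. The natural GD update (10) would additionally multiply the gradient by $A^{-1}(\mathbf{w}_t)$, whose entries depend on the integrals $d_1$, $d_2$ not reproduced here, so this sketch covers only the ordinary GD baseline; the initial estimate is an arbitrary choice.

```python
import numpy as np

def phi(u):
    return (1 - np.exp(-u)) / (1 + np.exp(-u))

def dphi(u):
    e = np.exp(-u)
    return 2 * e / (1 + e) ** 2

def mu(eta, c, tau, t):
    """Search-then-converge learning rate schedule of equation (14)."""
    a = c * t / (eta * tau)
    return eta * (1 + a) / (1 + a + t ** 2 / tau)

rng = np.random.default_rng(0)
w_star = np.array([-1.1043, 0.4302, 1.1978, 1.5317, -2.2946, -0.7866, 0.4428])
sigma = 0.2
w = rng.normal(size=7) * 0.1                       # initial estimate
for t in range(1, 501):
    x = rng.normal(size=7)
    z = phi(w_star @ x) + sigma * rng.normal()
    u = w @ x
    w = w + mu(1.25, 8.75, 1.0, t) * (z - phi(u)) * dphi(u) * x   # GD update (9)
print(np.linalg.norm(w - w_star))
```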
Let $\mathbf{w}_t$ and $\tilde{\mathbf{w}}_t$ be the weight vectors driven by the equations (9) and (10) respectively. $\|\mathbf{w}_t - \mathbf{w}^*\|$ and $\|\tilde{\mathbf{w}}_t - \mathbf{w}^*\|$ are error functions for the GD and the natural GD. Denote $w^* = \|\mathbf{w}^*\|$. From the equation (11), we obtain the Cramér-Rao Lower Bound (CRLB) for the deviation at the true weight vector $\mathbf{w}^*$:
$$\mathrm{CRLB}(t) = \frac{\sigma}{\sqrt{t}}\sqrt{\frac{n-1}{d_1(\mathbf{w}^*)} + \frac{1}{d_2(\mathbf{w}^*)}} \qquad (15)$$
Figure 1: Performance of the GD and the natural GD at different noise levels $\sigma = 0.2, 0.4, 1$ (error versus iteration, compared with the CRLB).
It is shown in Figure 1 that the natural GD algorithm reaches the CRLB at different noise levels while the GD algorithm reaches the CRLB only at the noise level $\sigma = 0.2$. The robustness of the natural gradient descent against the additive noise in
Figure 2: Performance of the GD and the natural GD when $\eta_0 = 1.25, 1.75, 2.25, 2.75$, $\eta_1 = 0.05, 0.2, 0.4425, 0.443$, and $c_0 = 8.75$ and $c_1 = 1$ are fixed.
the training examples is clearly shown by Figure 1. When the teacher signal is non-stationary, our simulations show that the natural GD algorithm also reaches the CRLB.
Figure 2 shows that the natural GD algorithm is more robust than the GD algorithm against the change of the learning rate schedule. The performance of the GD algorithm deteriorates when the constant $\eta_0$ in the learning rate schedule $\mu_0(t)$ is different from the optimal one. On the contrary, the natural GD algorithm performs almost the same for all $\eta_1$ within an interval $[0.05, 0.4425]$. Figure 2 also shows that the natural GD algorithm breaks down when $\eta_1$ is larger than the critical number 0.443. This means that the weak converge phase in the learning rate schedule is necessary.
4.2 Multi-layer perceptron
Let us consider the simple multi-layer perceptron with 2-dimensional input and 2 hidden neurons. The problem is to train the committee machine $y = \varphi(\mathbf{w}_1^T\mathbf{x}) + \varphi(\mathbf{w}_2^T\mathbf{x})$ based on the examples $\{(\mathbf{x}_t, z_t),\ t = 1, \ldots, T\}$ generated by the stochastic committee machine $z_t = \varphi(\mathbf{w}_1^{*T}\mathbf{x}_t) + \varphi(\mathbf{w}_2^{*T}\mathbf{x}_t) + \xi_t$. Assume $\|\mathbf{w}_i\| = 1$. We can reparameterize the weight vector to decrease the dimension of the parameter space from 4 to 2:
$$\mathbf{w}_i = \begin{bmatrix}\cos(\alpha_i)\\ \sin(\alpha_i)\end{bmatrix}, \quad \mathbf{w}_i^* = \begin{bmatrix}\cos(\alpha_i^*)\\ \sin(\alpha_i^*)\end{bmatrix}, \quad i = 1, 2.$$
Figure 3: The GD vs. the natural GD
The parameter space is {8 = (01, (2)}. Assume that the true parameters are oi = 0
and 02 = 3;. Due to the symmetry, both 8r = (0, 3;) and 8; =
,0) are true
parameters. Let 8 t and 8~ be computed by the GD algorithm and the natural GD
e;
The Efficiency and the Robustness of Natural Gradient Descent Learning Rule
391
algorithm respectively. The errors are measured by
ct = min{119 t
-
9rll, 119t - 8~1I},
and c~ = min{lI~ - 9rll, 118~ - 9;11} .
In this simulation, using 8 0 = (0.1,0.2) as an initial estimate, we first start the
GD algorithm and run it for 80 iterations. Then, we use the estimate obtained
from the GD algorithm at the 80-th iteration as an initial estimate for the natural
GD algorithm and run the latter algorithm for 420 iterations. The noise level is
(1 = 0.05. N independent runs are conducted to obtain the errors ctU) and c~U),
i = 1,???, N. Define root mean square errors
N
and
cl
-
'" t -
~ I:(cHi)2.
j=1
Based on N = 10 independent runs, the errors Et and c't are computed and compared with the CRLB in Figure 3. The search-then-converge learning schedule (14)
is used in the GD algorithm while the learning rate for the natural GD algorithm
is simply the annealing rate t.
5
CONCLUSIONS
The natural gradient descent learning rule is statistically efficient. It can be used
to train any adaptive system. But the complexity of this learning rule depends on
the architecture of the learning machine. The main difficulty in implementing this
learning rule is to compute the inverse of the Fisher information matrix of large
dimensions. For a multi-layer perceptron, we have shown an efficient scheme to
represent the Fisher information matrix based on which the space for storing this
large matrix is reduced from O(n 2 ) to O(n). We have also shown an algorithm
to compute the natural gradient. Taking advantage of the structure of the inverse
of the Fisher information matrix, we found that the complexity of computing the
natural gradient is O(n) when the input dimension n is much larger than the number
of hidden neurons.
The simulation results have confirmed the fast convergence and statistical efficiency
of the natural gradient descent learning rule. They have also verified that this
learning rule is robust against the changes of the noise levels in the training examples
and the parameters in the learning rate schedules.
References
[1] S. Amari. Natural gradient works efficiently in learning. Accepted by Neural
Computation, 1997.
[2] S. Amari. Neural learning in structured parameter spaces - natural Riemannian
gradient. In Advances in Neural Information Processing Systems, 9, ed. M. C.
Mozer, M. 1. Jordan and T. Petsche, The MIT Press: Cambridge, MA., pages
127-133, 1997.
[3] C. Darken and J. Moody. Towards faster stochastic gradient search. In Advances in Neural Information Processing Systems, 4, eds. Moody, Hanson, and
Lippmann, Morgan Kaufmann, San Mateo, pages 1009-1016, 1992.
[4] D. Saad and S. A. Solla. On-line learning in soft committee machines. Physical
Review E, 52:4225-4243, 1995.
[5] H. H. Yang and S. Amari. Natural gradient descent for training multi-layer
perceptrons. Submitted to IEEE Tr. on Neural Networks, 1997.
| 1333 |@word trial:1 briefly:1 unbiased:2 true:4 tensor:2 direction:2 leibler:1 d2:1 fa:1 simulation:6 stochastic:6 vc:1 i2:1 ogi:1 sin:2 gradient:30 implementing:2 tr:3 shun:1 distance:1 initial:2 pdf:1 demonstrate:1 tjt:1 wako:1 ati:1 o2:3 exploring:1 performs:1 l1:2 length:1 cramer:4 iiw:1 additive:1 partition:2 physical:1 analytic:1 negative:1 designed:1 jp:1 design:2 v:1 stationary:1 zt:6 selected:1 unknown:2 ctu:1 neuron:9 cambridge:1 darken:2 ai:2 steepest:1 howard:1 descent:16 lr:1 mit:1 clearly:1 cse:1 gaussian:2 dc:1 discovered:1 rtt:1 uiu:1 lb:1 along:1 c2:2 differential:1 multivariate:1 required:1 kl:1 c3:2 vk:1 portland:1 driven:2 likelihood:1 hanson:1 inequality:2 zlx:6 am:2 wf:1 vt:1 multi:10 brain:2 chi:1 morgan:1 bt:1 hidden:8 converge:5 considering:1 signal:1 estimating:1 critical:1 natural:42 difficulty:4 faster:1 ail:1 z:1 divided:1 scheme:6 contravariant:2 ti:4 multilayer:1 txt:1 metric:2 iteration:4 represent:3 review:1 randomly:1 c1:2 divergence:2 asymptotic:1 btv:1 local:1 interval:1 annealing:1 law:1 phase:5 loss:5 ext:1 ot:2 saad:1 wli:1 tdt:1 rae:1 studied:1 mateo:1 usa:1 contrary:1 jordan:1 consistent:1 co:4 yang:5 storing:2 graduate:1 statistically:2 bi:1 gave:1 transpose:1 integral:4 architecture:1 block:4 implement:1 necessary:1 aij:1 weaker:1 perceptron:8 institute:2 taking:1 expression:2 overcome:1 dimension:8 column:2 soft:1 rao:3 adaptive:1 san:1 tp:1 close:1 bm:1 logp:3 saa:1 iiwt:1 deviation:1 entry:1 optimize:1 lippmann:1 saitama:1 kullback:1 shi:1 rll:2 go:5 conducted:1 reduced:2 teacher:2 ilwt:1 gd:27 rule:11 estimator:5 density:1 regarded:2 estimated:1 deteriorates:1 search:5 robust:4 synthesis:1 symmetry:1 moody:3 ichi:1 play:1 threshold:1 nm:2 cl:2 iip:1 choose:2 drawn:1 trick:1 verified:1 main:3 updating:1 wni:1 noise:7 n2:1 li:2 japan:1 converted:1 role:1 run:4 inverse:7 includes:1 coefficient:1 tl:2 oregon:1 almost:1 depends:2 solla:2 decrease:3 vi:2 break:1 root:1 lab:1 mozer:1 analyze:1 complexity:5 ui:1 start:1 layer:14 hi:2 bound:2 fl:3 ct:3 quadratic:1 down:1 encountered:1 depend:1 xt:7 formed:1 oi:6 square:3 variance:1 kaufmann:1 efficiently:1 efficiency:5 x:1 po:1 joint:1 mh:1 weak:1 u1:1 reparameterize:1 tx:1 min:2 ci:2 riken:1 train:4 zoo:1 multiplying:1 fast:3 describe:1 confirmed:2 infinitesimally:1 submitted:1 department:1 structured:1 combination:3 reach:3 ed:2 simply:1 whose:2 larger:4 against:3 wi:7 ztlxt:2 amari:11 erf:1 scalar:1 riemannian:5 hua:1 covariant:1 proved:2 tfl:1 satisfies:1 advantage:1 differentiable:1 equation:2 ma:1 committee:3 schedule:7 needed:1 towards:1 fisher:12 change:2 rewritten:1 operates:1 apply:1 wt:11 hyang:1 box:1 petsche:1 accepted:1 robustness:5 convergence:1 ii6:2 rp:8 perceptrons:4 denotes:1 latter:1 measured:1 wtx:1 oil:2 |
366 | 1,334 | Features as Sufficient Statistics
D. Geiger ?
Department of Computer Science
Courant Institute
and Center for Neural Science
New York University
A. Rudra t
Department of Computer Science
Courant Institute
New York University
archi~cs.nyu.edu
geiger~cs.nyu.edu
L. Maloney t
Departments of Psychology and Neural Science
New York University
Itm~cns.nyu.edu
Abstract
An image is often represented by a set of detected features. We get
an enormous compression by representing images in this way. Furthermore, we get a representation which is little affected by small
amounts of noise in the image. However, features are typically
chosen in an ad hoc manner. \Ve show how a good set of features can be obtained using sufficient statistics. The idea of sparse
data representation naturally arises. We treat the I-dimensional
and 2-dimensional signal reconstruction problem to make our ideas
concrete.
1
Introduction
Consider an image, I, that is the result of a stochastic image-formation process. The
process depends on the precise state, f, of an environment. The image, accordingly,
contains information about the environmental state f, possibly corrupted by noise.
We wish to choose feature vectors ?leI) derived from the image that summarize this
information concerning the environment. We are not otherwise interested in the
contents of the image and wish to discard any information concerning the image
that does not depend on the environmental state f .
?Supported by NSF grant 5274883 and AFOSR grants F 49620-96-1-0159 and F 4962096-1-0028
tpartially supported by AFOSR grants F 49620-96-1-0159 and F 49620-96-1-0028
tSupported by NIH grant EY08266
Features as Sufficient Statistics
795
We develop criteria for choosing sets of features (based on information theory and
statistical estimation theory) that extract from the image precisely the information
concerning the environmental state.
2
Image Formation, Sufficient Statistics and Features
As above, the image I is the realization of a random process with distribution
PEn1JironmentU). We are interested in estimating the parameters j of the environmental model given the image (compare [4]). We assume in the sequel that j, the
environmental parameters, are themselves a random vector with known prior distribution. Let ?J(I) denote a feature vector derived from the the image I. Initially,
we assume that ?J(I) is a deterministic function of I.
For any choice of random variables, X, Y, define[2] the mutual in/ormation of X
and Y to be M(Xj Y) = :Ex,Y P(X, Y)log pf*~;:;t). The information about
the environmental parameters contained in the image is then M(f;I), while the
information about the environmental parameters contained in the feature vector
?J(I) is then M(fj ?J(I)). As a consequence of the data processing inequality [2] ,
M(f; ?J(I)) ~ M(f; I).
A vector ?J(I), 'of features is defined to be sufficient if the inequality above is an
equality. We will use the terms feature and statistic interchangeably. The definition
of a sufficient feature vector above is then just the usual definition of a set of jointly
sufficient statistics[2].
To summarize, a feature vector ?J(I) captures all the information about the environmental state parameters / precisely when it is sufficent. 1
Graded Sufficiency: A feature vector either is or is not sufficient. For every
possible feature vector ?J(I), we define a measure of its failure to be sufficent:
Suff(?J(I)) = M(fjI) - MUj ?J(I)). This sufficency measure is always non-negative
and it is zero precisely when ?J is sufficient. We wish to find feature vectors ?J(I)
where Suff(?J(I)) is close to O. We define ?J(I) to be t-sufficient if Suff(?J(I)) ~ t. In
what follows, we will ordinarily say sufficient, when we mean t-sufficient.
The above formulation of feature vectors as jointly sufficient statistics, maximizing the mutual information, M(j, ?J(I)), can be expressed as the Kullback-Leibler
distance between the conditional distributions, PUll) and P(fI?J(I)):
E 1 [D(PUII) II P(fI?J(I)))] = M(fj I) - MUj ?J(I)) ,
(1)
where the symbol E1 denotes the expectation with respect to I, D denotes the
Kullback-Leibler (K-L) distance, defined by DUlIg) = :Ex j(x) logU(x)jg(x)) 2.
Thus, we seek feature vectors ?J(I) such that the conditional distributions, PUll)
and PUI?J(I)) in the K-L sense, averaged across the set of images. However, this
optimization for each image could lead to over-fitting.
3
Sparse Data and Sufficient Statistics
The notion of sufficient statistics may be described by how much data can be removed without increasing the K-L distance between PUI?J(I)) and PUll). Let us
1 An information-theoretic framework has been adopted in neural networks by others;
e.g., [5] [9][6] [1][8]. However, the connection between features and sufficiency is new.
2We won't prove the result here. The proof is simple and uses the Markov chain
property to say that P(f, l, ?(I)) P(l, ?J(l) )P(fll, ?(l)) P(l)P(fII).
=
=
D. Geiger; A. Rudra and L. T. Maloney
796
formulate the approach more precisely, and apply two methods to solve it.
3.1
Gaussian Noise Model and Sparse Data
We are required to construct P(fII) and P(fI?(I)). Note that according to Bayes'
rule P(fl?(!)) = P(?(I)I!) P(f) j P(?(I)). We will assume that the form of
the model P(f) is known. In order to obtain P(?(I)I!) we write P(?(!)l!) =
EJ P(?(I)II)P(II!)?
Computing P(fl ?(!)): Let us first assume that the generative process of the image
I, given the model I, is Gaussian LLd. ,Le., P(II/) = (ljV21To"i) TIi e-(f,-Ji)2/2tT~
where i = 0,1, ... , N - 1 are the image pixel index for an image of size N. Further, P(Iil/i) is a function of (Ii - Ii) and Ii varies from -00 to +00, so that the
normalization constant does not depend on Ii. Then, P(fII) can be obtained by
normalizing P(f)P(II!).
P(fII) = (ljZ)(II e-(fi-J;)2/2tT~)p(f),
i
where Z is the normalization constant.
Let us introduce a binary decision variable Si = 0,1, which at every image pixel i
decides if that image pixel contains "important" information or not regarding the
model I. Our statistic ? is actually a (multivariate) random variable generated
from I according to
Ps(?II) =
II
?
i
This distribution gives ?i = Ii with probability 1 (Dirac delta function) when Si =
(data is kept) and gives ?i uniformly distributed otherwise (Si = 1, data is removed).
We then have
Ps(?I/)
I
=
P(?,II!)dI=
II
i
=
1
v
r
I
I
P(II!) Ps(?II) dI
e -~(f;-Ji)2
20',
21TU
II
The conditional distribution of ? on I satisfies the properties that we mentioned in
connection with the posterior distribution of I on I. Thus,
Ps(fl?)
=
(ljZs) P(f)
(II e-~(f;-J;)2(1-Si))
(2)
i
where Zs is a normalization constant.
It is also plausible to extend this model to non-Gaussian ones, by simply modifying
the quadratic term (fi - Ii)2 and keeping the sparse data coefficient (1 - Si).
3.2
Two Methods
We can now formulate the problem of finding a feature-set, or finding a sufficient
statistics, in terms of the variables Si that can remove data. More precisely, we can
find S by minimizing
797
Features as Sufficient Statistics
E(s,I) = D(P(fII) II Ps(fl?(l)))
+ A 2)1 - Si) .
(3)
It is clear that the K-L distance is minimized when Si = 0 everywhere and all the
data is kept. The second term is added on to drive the solution towards a minimal
sufficient statistic, where the parameter A has to be estimated. Note that, for A
very large, all the data is removed (Si = 1), while for A = 0 all the data is kept.
We can further write (3) as
=
E(s,I)
2:P(fII) log(P(fIl)/Ps(fI?(I)))
+ A2:(1- Si)
I
2: P(fII)log( (Zs/Z)
I
where E p
IT we let
[ .]
Si
II e
-2!rUi-li)2(1-(I-Si?)
i
Zs - Ep ['"
Si (h - Ii) 2)]
log-Z
~ 2u[
,
+ A 2:(1 - Si)
i
" - Si) .
+ A'~(1
,
denotes the expectation taken with respect to the distribution P.
be a continuous variable the minimum E(s, I) will occur when
aE
2
2
0= aS i = (Ep.[(h - Ii) ] - Ep[(fi - Ii) ]) - A.
(4)
We note that the Hessian matrix
Hs[i,j] =
a~:!j = Ep.[(h -
Ii)2(/j - I j )2] - Ep.[(h - Ii)2]Ep.[(!i - Ij ?] , (5)
is a correlation matrix, i.e., it is positive semi-definite. Consequently, E(s) is convex.
Continuation Method on A:
In order to solve for the optimal vector S we consider the continuation method on
the parameter A. We know that S = 0, for A = O. Then, taking derivatives of (4)
with respect to A, we obtain
1 " ]
aSj
aA = "'HLJ s [~,J.
i
It was necessary the Hessian to be invertible, i.e., the continuation method works
because E is convex. The computations are expected to be mostly spent on estimating the Hessian matrix, i.e., on computing the averages Ep. [(h - Ii)2(iJ - Ij )2],
E p? [(h - Ii)2], and Ep. [(fj - I j )2]. Sometimes these averages can be exactly computed, for example for one dimensional graph lattices. Otherwise these averages
could be estimated via Gibbs sampling.
The above method can be very slow, since these computations for Hs have to be
repeated at each increment in A. We then investigate an alternative direct method.
A Direct Method:
Our approach seeks to find a "large set" of Si = 1 and to maintain a distribution
Ps(fI?(I)) close to P(fII), i.e., to remove as many data points as possible. For this
D. Geiger, A. Rudra and L. T. Maloney
798
rj
o
20
10
.
10
(a)
Figure 1: (a). Complete results for step edge showing the image, the effective
variance and the computed s-value (using the continuation method). (b) Complete
results for step edge with added noise.
goal, we can investigate the marginal distribution
f
P(fiII)
dfo .. , dh-l dfHl ... dfN-l P(fII)
~ e -2!;(f;-I;)2
f II
j#i
-
PI; (h) Pell(!;,),
d/j P(f)
(II e --0(/;-1;)2)
j#i
(after rearranging the normalization constants)
where Pel I (h) is an effective marginal distribution that depends on all the other
values of I besides the one at pixel i.
=
How to decide if Si 0 or Si = 1 directly from this marginal distribution P(lilI)?
The entropy of the first term HI; (fi) = J dfiPI; (h) logPI; (h) indicates how much
!;, is conditioned by the data. The larger the entropy the less the data constrain
Ii, thus, there is less need to keep this data. The second term entropy Hell (Ii) =
J dhPell(h) lo9Pel/(fi) works the opposite direction. The more h is constrained
by the neighbors, the lesser the entropy and the lesser the need to keep that data
point. Thus, the decision to keep the data, Si = 0, is driven by minimizing the
"data" entropy H I (!;,) and maximizing the neighbor entropy Hell (!;,). The relevant
quantity is Hell(h)-H I; (!;,). When this is large, the pixel is kept. Later, we will see
a case where the second term is constant, and so the effective entropy is maximized.
For Gaussian models, the entropy is the logarithm of the variance and the appropriate ratio of variances may be considered.
4
Example: Surface Reconstruction
To make this approach concrete we apply to the problem of surface reconstruction.
First we consider the 1 dimensional case to conclude that edges are the important
features. Then, we apply to the two dimensional case to conclude that junctions
followed by edges are the important features.
Features as Sufficient Statistics
4.1
799
ID Case: Edge Features
Various simplifications and manipulations can be applied for the case that the model
f is described by a first order Markov model, i.e., P(f) = Di Pi(h, h-I).Then the
posterior distribution is
P(fII) = ~
II e-[~(li-/i)2+"i(li-/i-l)21,
i
where J-ti are smoothing coefficients that may vary from pixel to pixel according to
how much intensity change occurs ar pixel i, e.g., J-ti = J-tl+ P(Ii:/i-d 2 with J-t and
p to be estimated. We have assumed that the standard deviation of the noise is
homogeneous, to simplify the calculations and analysis of the direct method. Let
us now consider both methods, the continuation one and the direct one to estimate
the features.
Continuation Method: Here we apply ~ = 2:i H;I[i, j] by computing Hs[i,j],
given by (5), straight forwardly. We use the Baum-Welch method [2] for Markov
chains to exactly compute Ep? [(h-li)2(h-Ij?], Ep? [(h-li?], and Ep.[(f;-Ij)2].
The final result ofthis algorithm, applied to a step-edge data (and with noise added)
is shown in Figure 1. Not surprisingly, the edge data, both pixels, as well as the
data boundaries, were the most important data, Le., the features.
Direct Method: We derive the same result, that edges and boundaries are the
most important data via an analysis of this model. We use the result that
P(filI)
=
/ dlo ... dli - I dli+1 ... dIN - I PUII) =
Z~e-~(Ii-/.)2 e->.[i(li- r [i)2 ,
where >.t" is obtained recursively, in log2 N steps (for simplicity, we are assuming
N to be an exact power of 2), as follows
>.~K
~
=
>.K
11K
(>.!< +
i+KrHK
~
>.f + J-tf + J-tfrK
+
>'i
>.K 11K
i-Kri K )
+ J-tf + J-ti-K
(6)
The effective variance is given by varel/(h) = 1/(2)'t'') while the data variance is
given by var/(h) = (72. Since var/(h) does not depend on any pixel i, maximizing
the ratio var ell / var / (as the direct method suggested) as equivalent to maximizing
either the effective variance, or the total variance (see figure(I).
Thus, the lower is >.t" the lower is Si. We note that >.f increases with K, and J-tf
decreases with K. Consequently >.K increases less and less as K increases. In a
perturbative sense A; most contribute to At" and is defined by the two neighbors
values J-ti and J-ti+I, Le., by the edge information. The larger are the intensity edges
the smaller are J-ti and therefore, the smaller will >.r be. Moreover, >.t" is mostly
defined by>.; (in a perturbative sense, this is where most of the contribution comes).
Thus, we can argue that the pixels i with intensity edges will have smaller values
for At" and therefore are likely to have the data kept as a feature (Si = 0).
4.2
2D Case: Junctions, Corners, and Edge Features
Let us investigate the two dimensional version of the ID problem for surface reconstruction. Let us assume the posterior
PUll) = .!.e-[~(lii-/ij)2+":i(li;-/i-l.j)2+"~j(lij-/;.j-l)21,
Z
D. Geiger, A. Rudra and L. T. Maloney
800
where J.L~jh are the smoothing coefficients along vertical and horizontal direction,
that vary inversely according to the 'V I along these direction. We can then approximately compute (e.g., see [3))
P (fij I)
I
~
-I ?? )2 _>.N(J? ? _r N )2
Z1 e _-L(J??
~ .,
., e i j "
ij
where, analogously to the ID case, we have
>..~
~
h
were
K _ \K
Xi ,j - Aij
+
>..K
h,K
>..K
h,K
>..K
11,K
i ,j-KJ.Lij
i,j+KJ.Li,j+K
i-K,jJ.Lij
K K K
Xi ,j-K
Xi,j+K
Xi-K,j
+
+
+ J.Lijh,K + J.Lij11,K + J.Lih,K
+ J.LHK,j'
11,K
d h,2K _
,HK
an J.Lij
-
+
>..
11,K
HK,jJ.LHK,j
K
XHK,j
" ,K
Jl.ij
(7)
h,K
Jl.i.i?K
x!' .
'"
The larger is the effective variance at one site (i,j), the smaller is >..N, the more
likely that image portion to be a feature. The larger the intensity gradient along
h, v, at (i, j), the smaller J.L~1J. The smaller is J.L~11 the smaller will be contribution
to >..2. In a perturbative sense ([3)) >..2 makes the largest contribution to >..N. Thus,
at one site, the more intensity edges it has the larger will be the effective variance.
Thus, T-junctions will produce very large effective variances, followed by corners,
followed by edges. These will be, in order of importance, the features selected to
reconstruct 2D surfaces.
5
Conclusion
We have proposed an approach to specify when a feature set has sufficient information in them, so that we can represent the image using it. Thus, one can, in
principle, tell what kind of feature is likely to be important in a given model. Two
methods of computation have been proposed and a concrete analysis for a simple
surface reconstruction was carried out.
References
[1] A. Berger and S. Della Pietra and V. Della Pietra "A Maximum Entropy Approach
to Natural Language Processing" Computational Linguistics, Vo1.22 (1), pp 39-71,
1996.
[2] T. Cover and J. Thomas. Elements of Information Theory. Wiley Interscience, New
York, 1991.
[3] D. Geiger and J. E. Kogler. Scaling Images and Image Feature via the Renormalization Group. In Proc. IEEE Con/. on Computer Vision & Pattern Recognition, New
York, NY, 1993.
[4] G. Hinton and Z. Ghahramani. Generative Models for Discovering Sparse Distributed
Representations To Appear Phil. funs . of the Royal Society B, 1997.
[5] R. Linsker. Self-Organization in a Perceptual Network. Computer, March 1988,
105-117.
[6] J. Principe, U. of Florida at Gainesville Personal Communication
[7] T. Sejnowski. Computational Models and the Development of Topographic Projections 7rends Neurosci, 10, 304-305.
[8] S.C. Zhu, Y .N. Wu, D. Mumford. Minimax entropy principle and its application to
texture modeling Neural Computation 1996 B.
[9] P. Viola and W .M. Wells III. "Alignment by Maximization of Mutual Information".
In Proceedings of the International Conference on Computer Vision. Boston. 1995.
| 1334 |@word h:3 version:1 compression:1 seek:2 gainesville:1 recursively:1 contains:2 si:21 perturbative:3 remove:2 generative:2 selected:1 discovering:1 accordingly:1 contribute:1 along:3 direct:6 prove:1 fitting:1 interscience:1 introduce:1 manner:1 expected:1 themselves:1 xhk:1 little:1 pf:1 increasing:1 estimating:2 moreover:1 what:2 pel:1 kind:1 z:3 finding:2 every:2 ti:6 fun:1 exactly:2 grant:4 dfn:1 appear:1 positive:1 treat:1 consequence:1 id:3 approximately:1 pui:2 averaged:1 definite:1 projection:1 get:2 hlj:1 close:2 equivalent:1 deterministic:1 logpi:1 center:1 maximizing:4 baum:1 phil:1 convex:2 formulate:2 welch:1 simplicity:1 suff:3 rule:1 pull:4 sufficent:2 notion:1 increment:1 logu:1 exact:1 homogeneous:1 us:1 element:1 recognition:1 ep:11 capture:1 forwardly:1 ormation:1 decrease:1 removed:3 mentioned:1 environment:2 personal:1 depend:3 represented:1 various:1 effective:8 sejnowski:1 detected:1 tell:1 formation:2 choosing:1 larger:5 plausible:1 solve:2 say:2 otherwise:3 reconstruct:1 statistic:14 topographic:1 jointly:2 final:1 hoc:1 reconstruction:5 tu:1 relevant:1 realization:1 dirac:1 p:7 produce:1 spent:1 derive:1 develop:1 dlo:1 ij:8 c:2 come:1 direction:3 fij:1 modifying:1 stochastic:1 hell:3 fil:1 considered:1 iil:1 vary:2 a2:1 estimation:1 proc:1 largest:1 tf:3 always:1 gaussian:4 ej:1 derived:2 indicates:1 hk:2 sense:4 typically:1 initially:1 interested:2 pixel:11 development:1 constrained:1 smoothing:2 ell:1 mutual:3 marginal:3 construct:1 sampling:1 linsker:1 minimized:1 others:1 simplify:1 ve:1 pietra:2 cns:1 maintain:1 organization:1 investigate:3 alignment:1 lih:1 chain:2 rudra:4 edge:14 itm:1 necessary:1 logarithm:1 minimal:1 modeling:1 ar:1 cover:1 lattice:1 maximization:1 deviation:1 varies:1 corrupted:1 international:1 sequel:1 invertible:1 analogously:1 concrete:3 choose:1 possibly:1 corner:2 lii:1 derivative:1 li:8 tii:1 coefficient:3 ad:1 depends:2 later:1 portion:1 bayes:1 contribution:3 variance:10 maximized:1 drive:1 straight:1 maloney:4 definition:2 failure:1 pp:1 naturally:1 proof:1 di:3 con:1 actually:1 courant:2 specify:1 sufficiency:2 formulation:1 furthermore:1 just:1 correlation:1 horizontal:1 lei:1 equality:1 din:1 leibler:2 interchangeably:1 self:1 won:1 criterion:1 theoretic:1 tt:2 complete:2 fj:3 image:27 fi:10 nih:1 ji:2 jl:2 extend:1 pell:1 gibbs:1 kri:1 language:1 jg:1 surface:5 fii:10 multivariate:1 posterior:3 driven:1 discard:1 manipulation:1 inequality:2 binary:1 minimum:1 signal:1 ii:36 semi:1 rj:1 calculation:1 concerning:3 e1:1 ae:1 vision:2 expectation:2 normalization:4 sometimes:1 represent:1 iii:1 xj:1 psychology:1 opposite:1 idea:2 regarding:1 lesser:2 york:5 hessian:3 jj:2 clear:1 amount:1 continuation:6 nsf:1 delta:1 estimated:3 write:2 affected:1 group:1 enormous:1 kept:5 graph:1 everywhere:1 decide:1 wu:1 geiger:6 decision:2 scaling:1 fl:4 hi:1 followed:3 simplification:1 fll:1 quadratic:1 occur:1 precisely:5 constrain:1 archi:1 department:3 according:4 march:1 across:1 smaller:7 lld:1 taken:1 know:1 fji:1 adopted:1 junction:3 apply:4 appropriate:1 alternative:1 florida:1 thomas:1 denotes:3 linguistics:1 log2:1 ghahramani:1 graded:1 society:1 added:3 quantity:1 occurs:1 mumford:1 usual:1 gradient:1 distance:4 argue:1 assuming:1 besides:1 index:1 berger:1 ratio:2 minimizing:2 mostly:2 negative:1 ordinarily:1 vertical:1 markov:3 viola:1 hinton:1 communication:1 precise:1 intensity:5 required:1 connection:2 dli:2 z1:1 suggested:1 pattern:1 summarize:2 royal:1 power:1 natural:1 zhu:1 representing:1 minimax:1 inversely:1 
carried:1 extract:1 lij:4 kj:2 prior:1 afosr:2 var:4 sufficient:20 principle:2 pi:2 supported:2 surprisingly:1 keeping:1 aij:1 jh:1 institute:2 neighbor:3 taking:1 sparse:5 distributed:2 boundary:2 asj:1 kullback:2 keep:3 decides:1 conclude:2 assumed:1 xi:4 continuous:1 rearranging:1 neurosci:1 noise:6 repeated:1 site:2 tl:1 renormalization:1 slow:1 wiley:1 ny:1 wish:3 perceptual:1 showing:1 symbol:1 nyu:3 normalizing:1 ofthis:1 importance:1 texture:1 conditioned:1 rui:1 boston:1 entropy:10 simply:1 likely:3 expressed:1 contained:2 aa:1 environmental:8 satisfies:1 dh:1 conditional:3 goal:1 consequently:2 towards:1 dfo:1 content:1 change:1 uniformly:1 vo1:1 total:1 principe:1 arises:1 della:2 ex:2 |
367 | 1,335 | Training Methods for Adaptive Boosting
of Neural Networks
Holger Schwenk
Dept.IRO
Universite de Montreal
2920 Chemin de la Tour,
Montreal, Qc, Canada, H3C 317
schwenk@iro.umontreal.ca
Yoshua Bengio
Dept.IRO
Universite de Montreal
and AT&T Laboratories, NJ
bengioy@iro.umontreal.ca
Abstract
"Boosting" is a general method for improving the performance of any
learning algorithm that consistently generates classifiers which need to
perform only slightly better than random guessing. A recently proposed
and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems
using rather simple learning algorithms [4], and decision trees [1, 2, 6].
In this paper we use AdaBoost to improve the performances of neural
networks. We compare training methods based on sampling the training
set and weighting the cost function. Our system achieves about 1.4%
error on a data base of online handwritten digits from more than 200
writers. Adaptive boosting of a multi-layer network achieved 1.5% error
on the UCI Letters and 8.1 % error on the UCI satellite data set.
1 Introduction
AdaBoost [4, 5] (for Adaptive Boosting) constructs a composite classifier by sequentially
training classifiers, while putting more and more emphasis on certain patterns. AdaBoost
has been applied to rather weak learning algorithms (with low capacity) [4] and to decision trees [1 , 2, 6], and not yet, until now, to the best of our knowledge, to artificial neural
networks. These experiments displayed rather intriguing generalization properties, such as
continued decrease in generalization error after training error reaches zero. Previous workers also disagree on the reasons for the impressive generalization performance displayed
by AdaBoost on a large array of tasks. One issue raised by Breiman [1] and the authors of
AdaBoost [4] is whether some of this effect is due to a reduction in variance similar to the
one obtained from the Bagging algorithm.
In this paper we explore the application of AdaBoost to Diabolo (auto-associative) networks and multi-layer neural networks (MLPs). In doing so, we also compare three dif-
H. Schwenk and Y. Bengio
648
ferent versions of AdaBoost: (R) training each classifier with a fixed training set obtained
by resampling with replacement from the original training set (as in [1]), (E) training by
resampling after each epoch a new training set from the original training set, and (W) training by directly weighting the cost fundion (here the squared error) of the neural network.
Note that the second version (E) is a better approximation of the weighted cost function
than the first one (R), in particular when many epochs are performed. If the variance reduction induced by averaging the hypotheses from very different models explains a good
part of the generalization performance of AdaBoost, then the weighted training version
(W) should perform worse then the resampling versions, and the fixed sample version (R)
should perform better then the continuously resampled version (E).
2 AdaBoost
AdaBoost combines the hypotheses generated by a set of classifiers trained one after the
other. The tth classifier is trained with more emphasis on certain patterns, using a cost function weighted by a probability distribution D t over the training data (Dt(i) is positive and
Li Dt(i) = 1). Some learning algorithms don't permit training with respect to a weighted
cost function. In this case sampling with replacement (using the probability distribution
D t ) can be used to approximate a weighted cost function. Examples with high probability
would then occur more often than those with low probability, while some examples may
not occur in the sample at all although their probability is not zero. This is particularly true
in the simple resampling version (labeled "R" earlier), and unlikely when a new training
set is resampled after each epoch ("E" version). Neural networks can be trained directly
with respect to a distribution over the learning data by weighting the cost function (this is
the "W" version): the squared error on the i-th pattern is weighted by the probability D t (i).
The result of training the tth classifier is a hypothesis h t : X -+ Y where Y = {I, ... , k} is
the space of labels, and X is the space of input features. After the tth round the weighted
error ?t of the resulting classifier is calculated and the distribution Dt+l is computed from
D t , by increasing the probability of incorrectly labeled examples. The global decision f is
obtained by weighted voting. Figure I (left) summarizes the basic AdaBoost algorithm. It
converges (learns the training set) if each classifier yields a weighted error that is less than
50%, i.e., better than chance in the 2-c1ass case. There is also a multi-class version, called
pseudoloss-AdaBoost, that can be used when the classifier computes confidence scores for
each class. Due to lack of space, we give only the algorithm (see figure 1, right) and we
refer the reader to the references for more details [4, 5].
AdaBoost has very interesting theoretical properties, in particular it can be shown that the
error of the composite classifier on the training data decreases exponentially fast to zero [5]
as the number of combined classifiers is increased. More importantly, however, bounds
on the generalization error of such a system have been formulated [7]. These are based
on a notion of margin of classification, defined as the difference between the score of the
correct class and the strongest score of a wrong class. In the case in which there are just
two possible labels {-I, +1}, this is yf(x), where f is the composite classifier and y the
correct label. Obviously, the classification is correct if the margin is positive. We now
state the theorem bounding the generalization error of Adaboost [7] (and any classifier
obtained by a convex combination of a set of classifiers). Let H be a set of hypotheses
(from which the ht hare chosen), with VC-dimenstion d. Let f be any convex combination
of hypotheses from H. Let S be a sample of N examples chosen independently at random
according to a distribution D. Then with probability at least 1 - 8 over the random choice
of the training set S from D, the following bound is satisfied for all () > 0:
PD[yf(x) ~ 0] ~ Ps[yf(x) ~ ()]
+0
1
( jN
dlog 2 (N/d)
(}2
+ log(1/8)
)
(1)
Note that this bound is independent of the number of combined hypotheses and how they
Training Methods for Adaptive Boosting ofNeural Networks
649
sequence of N examples (Xl, YI), ... , (X N , YN )
with labels Yi E Y = {I, ... , k}
Init: Dl(i) = l/N for all i
loit: letB = {(i,y): i E{l, ... ,N},y i= yd
Dl (i. y) = l/IBI for all (i, y) E B
Input:
Repeat:
Repeat:
1. Train neural network with respect
I. Train neural network with respect
to distribution D t and obtain
to distribution D t and obtain
hypothesis ht : X ~ Y
hypothesis h t : X x Y ~ [0,1]
2. calculate the weighted error of h t : 2. calculate the pseudo-loss of h t :
_ "
D (.)
abort loop
?t = ~ LDt(i, y)(l-ht(xi, Yd+ht(Xi' y))
?t ~
t '/,
if ?t > ~
i:ht(x,)#y,
(i,y)EB
=
3. set (3t
?t/(1 - ?t)
4. update distribution D t
D
(i) - Dt(i) a O,
t+l
-
Zt
3. set (3t = ?t/(1 - ?t)
4. update distribution D t
D
(i ) - Dt(i,y) a~((1+ht(x"y,)-ht(x"y?
/Jt
with c5i = (ht(Xi) = Yi)
and Zt a normalization constant
Output: final hypothesis:
f(x) = arg max
yEY
L
t:ht(x)=y
t+l
,Y -
Zt
/Jt
where Zt is a normalization constant
Output: final hypothesis:
1
log Ii"
/Jt
f(x) = arg max
yEY
L
t
(log ; ) ht(x, y)
/Jt
Figure I: AdaBoost algorithm (left), multi-class extension using confidence scores (right)
are chosen from H. The distribution of the margins however plays an important role. It can
be shown that the AdaBoost algorithm is especially well suited to the task of maximizing
the number of training examples with large margin [7].
3 The Diabolo Classifier
Normally, neural networks used for classification are trained to map an input vector to an
output vector that encodes directly the classes, usually by the so called "I-out-of-N encoding". An alternative approach with interesting properties is to use auto-associative neural
networks, also called autoencoders or Diabolo networks, to learn a model of each class.
In the simplest case, each autoencoder network is trained only with examples of the corresponding class, i.e., it learns to reconstruct all examples of one class at its output. The
distance between the input vector and the reconstructed output vector expresses the likelihood that a particular example is part of the corresponding class. Therefore classification
is done by choosing the best fitting model. Figure 2 summarizes the basic architecture.
It shows also typical classification behavior for an online character recognition task. The
input and output vectors are (x, y)-coordinate sequences of a character. The visual representation in the figure is obtained by connecting these points. In this example the" I" is
correctly classified since the network for this class has the smallest reconstruction error.
The Diabolo classifier uses a distributed representation of the models which is much more
compact than the enumeration of references often used by distance-based classifiers like
nearest-neighbor or RBF networks. Furthermore, one has to calculate only one distance
measure for each class to recognize. This allows to incorporate knowledge by a domain
specific distance measure at a very low computational cost. In previous work [8], we have
shown that the well-known tangent-distance [11] can be used in the objective function of the
autoencoders. This Diabolo classifier has achieved state-of-the-art results in handwritten
OCR [8,9]. Recently, we have also extended the idea of a transformation invariant distance
H. Schwenk and Y. Bengio
650
1
1
~
6
1
0.13
~
1
character
to classify
input
sequence
score I
0.08
output
sequences
distance
measures
score 7
0.23
decision
module
Figure 2: Architecture of a Diabolo classifier
measure to online character recognition [10]. One autoencoder alone, however, can not
learn efficiently the model of a character if it is written in many different stroke orders and
directions. The architecture can be extended by using several autoencoders per class, each
one specializing on a particular writing style (subclass). For the class "0", for instance,
we would have one Diabolo network that learns a model for zeros written clockwise and
another one for zeros written counterclockwise. The assignment of the training examples to
the different subclass models should ideally be done in an unsupervised way. However, this
can be quite difficult since the number of writing styles is not known in advance and usually
the number of examples in each subclass varies a lot. Our training data base contains for
instance 100 zeros written counterclockwise, but only 3 written clockwise (there are also
some more examples written in other strange styles). Classical clustering algorithms would
probably tend to ignore subclasses with very few examples since they aren't responsible
for much of the error, but this may result in poor generalization behavior. Therefore, in
previous work we have manually assigned the subclass labels [10]. Of course, this is not a
generally satisfactory approach, and certainly infeasible when the training set is large. In
the following, we will show that the emphasizing algorithm of AdaBoost can be used to
train multiple Diabolo classifiers per class, performing a soft assignment of examples of
the training set to each network.
4
Results with Diabolo and MLP Classifiers
Experiments have been performed on three data sets: a data base of online handwritten
digits, the UeI Letters database of offline machine-printed alphabetical characters and the
UCI satellite database that is generated from Landsat Multi-spectral Scanner image data.
All data sets have a pre-defined training and test set. The Diabolo classifier was only
applied to the online data set (since it takes advantage of the structure of the input features).
The online data set was collected at Paris 6 University [10]. It is writer-independent (different writers in training and test sets) and there are 203 writers, 1200 training examples
and 830 test examples. Each writer gave only one example per class. Therefore, there are
many different writing styles, with very different frequencies. We only applied a simple
pr~processing: the characters were resamfled to 11 points, centered and size normalized
to a (x,y)-coordinate sequence in [-1, 1]2 . Since the Diabolo classifier with tangent distance [10] is invariant to small transformations we don't need to extract further features.
Table 1 summarizes the results on the test set of different approaches before using AdaBoost. The Diabolo classifier with hand-selected sub-classes in the training set performs
best since it is invariant to transformations and since it can deal with the different writing
styles. The experiments suggest that fully connected neural networks are not well suited
for this task: small nets do poorly on both training and test sets, while large nets overfit.
| 1335 |@word effect:1 especially:1 normalized:1 version:10 true:1 classical:1 direction:1 assigned:1 objective:1 correct:3 laboratory:1 satisfactory:1 vc:1 centered:1 deal:1 round:1 guessing:1 distance:8 explains:1 capacity:1 reduction:2 c1ass:1 ofneural:1 contains:1 score:6 generalization:7 collected:1 diabolo:12 iro:4 performs:1 extension:1 reason:1 scanner:1 image:1 yet:1 intriguing:1 written:6 great:1 recently:2 umontreal:2 difficult:1 achieves:1 smallest:1 exponentially:1 c5i:1 update:2 resampling:4 alone:1 zt:4 selected:1 label:5 perform:3 refer:1 disagree:1 benchmark:1 weighted:10 displayed:2 incorrectly:1 extended:2 boosting:6 rather:3 impressive:1 breiman:1 base:3 canada:1 yey:2 paris:1 combine:1 fitting:1 consistently:1 likelihood:1 certain:2 success:1 behavior:2 yi:3 multi:5 landsat:1 usually:2 pattern:3 unlikely:1 calculated:1 enumeration:1 increasing:1 clockwise:2 ii:1 max:2 multiple:1 issue:1 classification:5 arg:2 raised:1 art:1 dept:2 construct:1 transformation:3 improve:1 nj:1 sampling:2 manually:1 pseudo:1 specializing:1 holger:1 voting:1 subclass:5 unsupervised:1 basic:2 auto:2 autoencoder:2 classifier:25 wrong:1 yoshua:1 normalization:2 normally:1 few:1 achieved:2 yn:1 epoch:3 tangent:2 positive:2 before:1 recognize:1 loss:1 fully:1 replacement:2 encoding:1 interesting:2 probably:1 yd:2 mlp:1 induced:1 ibi:1 emphasis:2 eb:1 tend:1 counterclockwise:2 certainly:1 dif:1 bengio:3 course:1 responsible:1 gave:1 repeat:2 architecture:3 infeasible:1 alphabetical:1 worker:1 offline:1 idea:1 digit:2 neighbor:1 tree:2 whether:1 distributed:1 composite:3 printed:1 theoretical:1 confidence:2 pre:1 increased:1 earlier:1 classify:1 suggest:1 instance:2 soft:1 ferent:1 computes:1 author:1 adaptive:4 assignment:2 cost:8 writing:4 generally:1 reconstructed:1 approximate:1 tour:1 map:1 compact:1 ignore:1 maximizing:1 global:1 sequentially:1 independently:1 convex:2 tth:3 qc:1 simplest:1 varies:1 xi:3 don:2 combined:2 continued:1 array:1 importantly:1 correctly:1 per:3 table:1 promising:1 learn:2 notion:1 coordinate:2 ca:2 connecting:1 continuously:1 express:1 extract:1 play:1 putting:1 squared:2 init:1 satisfied:1 improving:1 us:1 domain:1 hypothesis:10 worse:1 recognition:2 particularly:1 ht:10 bounding:1 style:5 li:1 labeled:2 database:2 role:1 module:1 de:3 letter:2 calculate:3 reader:1 connected:1 strange:1 sub:1 decrease:2 performed:2 decision:4 lot:1 summarizes:3 xl:1 doing:1 pd:1 bengioy:1 bound:3 layer:2 resampled:2 ideally:1 weighting:3 learns:3 theorem:1 emphasizing:1 trained:5 mlps:1 specific:1 occur:2 jt:4 variance:2 writer:5 efficiently:1 yield:1 encodes:1 dl:2 generates:1 weak:1 handwritten:3 schwenk:4 performing:1 train:3 fast:1 margin:4 classified:1 stroke:1 artificial:1 according:1 strongest:1 reach:1 combination:2 choosing:1 poor:1 suited:2 slightly:1 quite:1 character:7 aren:1 uei:1 explore:1 visual:1 hare:1 reconstruct:1 frequency:1 universite:2 dlog:1 invariant:3 pr:1 h3c:1 final:2 online:6 obviously:1 ldt:1 associative:2 sequence:5 advantage:1 net:2 knowledge:2 chance:1 reconstruction:1 formulated:1 rbf:1 uci:3 loop:1 dt:5 typical:1 adaboost:19 poorly:1 permit:1 averaging:1 ocr:1 done:2 spectral:1 called:3 furthermore:1 just:1 la:1 fundion:1 alternative:1 until:1 autoencoders:3 hand:1 p:1 satellite:2 overfit:1 jn:1 original:2 bagging:1 converges:1 lack:1 clustering:1 abort:1 montreal:3 yf:3 incorporate:1 nearest:1 |
368 | 1,336 | Minimax and Hamiltonian Dynamics of
Excitatory-Inhibitory Networks
H. S. Seung, T. J. Richardson
Bell Labs, Lucent Technologies
Murray Hill, NJ 07974
{seungltjr}~bell-labs.com
J. C. Lagarias
AT&T Labs-Research
180 Park Ave. D-130
Florham Park, NJ 07932
J. J. Hopfield
Dept. of Molecular Biology
Princeton University
Princeton, N J 08544
jcl~research.att.com
jhopfield~vatson.princeton.edu
Abstract
A Lyapunov function for excitatory-inhibitory networks is constructed.
The construction assumes symmetric interactions within excitatory and
inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions
for the global asymptotic stability of fixed points. If these conditions
are violated, limit cycles may be stable. The relations of the Lyapunov
function to optimization theory and classical mechanics are revealed by
minimax and dissipative Hamiltonian forms of the network dynamics.
The dynamics of a neural network with symmetric interactions provably converges to
fixed points under very general assumptions[l, 2]. This mathematical result helped
to establish the paradigm of neural computation with fixed point attractors[3]. But
in reality, interactions between neurons in the brain are asymmetric. Furthermore,
the dynamical behaviors seen in the brain are not confined to fixed point attractors,
but also include oscillations and complex nonperiodic behavior. These other types
of dynamics can be realized by asymmetric networks, and may be useful for neural
computation. For these reasons, it is important to understand the global behavior
of asymmetric neural networks.
The interaction between an excitatory neuron and an inhibitory neuron is clearly
asymmetric. Here we consider a class of networks that incorporates this fundamental asymmetry of the brain's microcircuitry. Networks of this class have distinct
populations of excitatory and inhibitory neurons, with antisymmetric interactions
H. S. Seung, T. 1. Richardson, J. C. Lagarias and 1. 1. Hopfield
330
between populations and symmetric interactions within each population. Such networks display a rich repertoire of dynamical behaviors including fixed points, limit
cycles[4, 5] and traveling waves[6].
After defining the class of excitatory-inhibitory networks, we introduce a Lyapunov
function that establishes sufficient conditions for the global asymptotic stability
of fixed points. The generality of these conditions contrasts with the restricted
nature of previous convergence results, which applied only to linear networks[5]' or
to nonlinear networks with infinitely fast inhibition[7].
The use of the Lyapunov function is illustrated with a competitive or winner-take-all
network, which consists of an excitatory population of neurons with recurrent inhibition from a single neuron[8]. For this network, the sufficient conditions for global
stability of fixed points also happen to be necessary conditions. In other words,
we have proved global stability over the largest possible parameter regime in which
it holds, demonstrating the power of the Lyapunov function. There exists another
parameter regime in which numerical simulations display limit cycle oscillations[7].
Similar convergence proofs for other excitatory-inhibitory networks may be obtained
by tedious but straightforward calculations. All the necessary tools are given in the
first half of the paper. But the rest of the paper explains what makes the Lyapunov
function especially interesting, beyond the convergence results it yields: its role in
a conceptual framework that relates excitatory-inhibitory networks to optimization
theory and classical mechanics.
The connection between neural networks and optimization[3] was established by
proofs that symmetric networks could find minima of objective functions[l, 2]. Later
it was discovered that excitatory-inhibitory networks could perform the minimax
computation of finding saddle points[9, 10, 11], though no general proof of this was
given at the time. Our Lyapunov function finally supplies such a proof, and one of
its components is the objective function of the network's minimax computation.
Our Lyapunov function can also be obtained by writing the dynamics of excitatoryinhibitory networks in Hamiltonian form, with extra velocity-dependent terms. If
these extra terms are dissipative, then the energy of the system is nonincreasing,
and is a Lyapunov function. If the extra terms are not purely dissipative, limit
cycles are possible. Previous Hamiltonian formalisms for neural networks made
the more restrictive assumption of purely antisymmetric interactions, and did not
include the effect of dissipation[12].
This paper establishes sufficient conditions for global asymptotic stability of fixed
points. The problem of finding sufficient conditions for oscillatory and chaotic
behavior remains open. The perspectives of minimax and Hamiltonian dynamics
may help in this task.
1
EXCITATORY-INHIBITORY NETWORKS
The dynamics of an excitatory-inhibitory network is defined by
TxX+X
TyY+y
=
f(u+Ax-By) ,
(1)
g(v+BTx-Cy).
(2)
The state variables are contained in two vectors x E Rm and y E Rn, which represent
the activities of the excitatory and inhibitory neurons, respectively.
f is used in both scalar and vector contexts. The scalar function
R is monotonic nondecreasing. The vector function f : R m ~ Rm is
The symbol
f :R
~
Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks
331
defined by applying the scalar function 1 to each component of a vector argument,
i.e., l(x) = (J(xt) , ... ,1(xm)). The symbol 9 is used similarly.
The symmetry of interaction within each population is imposed by the constraints
A = AT and C = CT. The antisymmetry of interaction between populations is
manifest in the occurrence of - B and BT in the equations. The terms "excitatory"
and "inhibitory" are appropriate with the additional constraint that the entries of
matrices A, B, and C are nonnegative. Though this assumption makes sense in
a neurobiological context the mathematics does not depends on it. The constant
vectors u and v represent tonic input from external sources, or alternatively bias
intrinsic to the neurons.
The time constants T z and Ty set the speed of excitatory and inhibitory synapses,
respectively. In the limit of infinitely fast inhibition, Ty = 0, the convergence
theorems for symmetric networks are applicable[l, 2], though some effort is required
in applying them to the case C =/; 0. If the dynamics converges for Ty = 0, then
there exists some neighborhood of zero in which it still converges[7]. Our Lyapunov
function goes further, as it is valid for more general T y ?
The potential for oscillatory behavior in excitatory-inhibitory networks like (1) has
long been known[4, 7]. The origin of oscillations can be understood from a simple
two neuron model. Suppose that neuron 1 excites neuron 2, and receives inhibition
back from neuron 2. Then the effect is that neuron 1 suppresses its own activity
with an effective delay that depends on the time constant of inhibition. If this delay
is long enough, oscillations result. However, these oscillations will die down to a
fixed point, as the inhibition tends to dampen activity in the circuit. Only if neuron
1 also excites itself can the oscillations become sustained.
Therefore, whether oscillations are damped or sustained depends on the choice of
parameters. In this paper we establish sufficient conditions for the global stability of
fixed points in (1). The violation of these sufficient conditions indicates parameter
regimes in which there may be other types of asymptotic behavior, such as limit
cycles.
2
LYAPUNOV FUNCTION
We will assume that 1 and 9 are smooth and that their inverses 1-1 and g-1 exist.
If the function 1 is bounded above and/or below, then its inverse 1-1 is defined on
the appropriate subinterval of R. Note that the set of (x, y) lying in the range of
(J,g) is a positive invariant set under (1) and that its closure is a global attractor
for the system.
The scalar function F is defined as the antiderivative of 1, and P as the Legendre
maxp{px - F(p)}. The derivatives of these conjugate convex
transform P(x)
functions are,
(3)
F'(x) = l(x) ,
The vector versions of these functions are defined componentwise, as in the definition
of the vector version of 1. The conjugate convex pair G, (; is defined similarly.
The Lyapunov function requires generalizations of the standard kinetic energies
Tz x 2/2 and Tyy2/2. These are constructed using the functions ~ : Rm x Rm ~ R
and r : Rn x Rn ~ R, defined by
~(p,x)
r(q,y)
=
ITF(p) -xTp+lTP(x) ,
(4)
ITG(q) _yTq+ IT(;(y) .
(5)
H. S. Seung, T. 1. Richardson, J. C. Lagarias and J. J. Hopfield
332
The components of the vector 1 are all ones; its dimensionality should be clear
from context. The function ~(p, x) is lower bounded by zero, and vanishes on
the manifold I(p) = x, by the definition of the Legendre transform. Setting p =
U+ Ax - By, we obtain the generalized kinetic energy T;l~(u + Ax - By, x), which
vanishes when x = 0 and is positive otherwise. It reduces to T;xx 2 /2 in the special
case where I is the identity function.
To construct the Lyapunov function, a multiple of the saddle function
S = _u T x - !x T Ax + v TY - !yTCy + ITP(x)
+ yTBT x
- ITG(y)
(6)
+ rS
(7)
2
2
is added to the kinetic energy. The reason for the name "saddle function" will be
explained later. Then
L = T;l~(U
+ Ax -
By,x)
+ T;lr(v + BT x -
Cy, y)
is a Lyapunov function provided that it is lower bounded, nonincreasing, and t only
vanishes at fixed points of the dynamics. Roughly speaking, this is enough to prove
the global asymptotic stability of fixed points, although some additional technical
details may be involved.
In the next section, the Lyapunov function will be applied to an example network,
yielding sufficient conditions for the global asymptotic stability of fixed points.
In this particular network, the sufficient conditions also happen to be necessary
conditions. Therefore the Lyapunov function succeeds in delineating the largest
possible parameter regime in which point attractors are globally stable. Of course,
there is no guarantee of this in general, but the power of the Lyapunov function is
manifest in this instance.
Before proceeding to the example network, we pause to state some general conditions
for L to be nonincreasing. A lengthy but straightforward calculation shows that
the time derivative of L is given by
t = x T Ax - iJTCiJ
(8)
_(T;l + r)j;T(J-l (T;xX + x) - I-I (x)J
_(T;l - r)iJT[g-l(TyiJ + y) - g-l(y)J .
Therefore, L is nonincreasing provided that
(a-b)TA(a-b)
(9)
max
(
a,b
a - b) T [ I-l(a) - I-l(b)] < 1 + rTz ,
.
(a - b)TC(a - b)
(10)
mm
T[
> 1 - rTy .
a,b (a - b) g-l(a) - g-l(b)]
The quotients in these inequalities are generalizations of the Rayleigh-Ritz ratios of
A and C. If I and 9 were linear, the left hand sides of these inequalities would be
equal to the maximum eigenvalue of A and the minimum eigenvalue of C.
3
AN EXAMPLE: COMPETITIVE NETWORK
The competitive or winner-take-all network is a classic example of an excitatoryinhibitory network[8, 7J . Its population of excitatory neurons Xi receives selffeedback of strength a and recurrent feedback from a single inhibitory neuron y,
Tzii
+ Xi
T.Y + y
I(Ui
=
+ aXi -
9 ( ~>i)
y) ,
.
(11)
(12)
Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks
This is a special case of (1), with A
= aI, B =
1, and C
333
= o.
The global inhibitory neuron mediates a competitive interaction between the excitatory neurons. If the competition is very strong, a single excitatory neuron "wins,"
shutting off all the rest. If the competition is weak, more than one excitatory neuron
can win, usually those corresponding to the larger Ui. Depending on the choice of f
and g, self-feedback a, and time scales Tx and Ty, this network exhibits a variety of
dynamical behaviors, including a single point attractor, multiple point attractors,
and limit cycles[5, 7].
We will consider the specific case where f and 9 are the rectification nonlinearity
[x]+ == max{ x, o}. The behavior ofthis network will be described in detail elsewhere;
only a brief summary is given here. With either of two convenient choices for r,
r = T;1 or r = a - T;1, it can be shown that the resulting L is bounded below
for a < 2 and nonincreasing for a < T;1 + T;1. These are sufficient conditions for
the global stability of fixed points. They also turn out to be necessary conditions,
as it can be verified that the fixed points are locally unstable if the conditions are
violated. The behaviors in the parameter regime defined by these conditions can
be divided into two rough categories. For a < 1, there is a unique point attractor,
at which more than one excitatory neuron can be active, in a soft form of winnertake-all. For a > 1, more than one point attractor may exist. Only one excitatory
neuron is active at each of these fixed points, a hard form of winner-take-all.
4
MINIMAX DYNAMICS
In the field of optimization, gradient descent-ascent is a standard method for finding
saddle points of an objective function. This section of the paper explains the close
relationship between gradient descent-ascent and excitatory-inhibitory networks[9,
10]. Furthermore, it reviews existing results on the convergence of gradient descentascent to saddle points[13, 10], which are the precedents of the convergence proofs
of this paper.
The similarity of excitatory-inhibitory networks to gradient descent-ascent can be
seen by comparing the partial derivatives of the saddle function (6) to the velocities \dot{x} and \dot{y},
\dot{x} \sim -\frac{\partial S}{\partial x} ,   (13)
\dot{y} \sim \frac{\partial S}{\partial y} .   (14)
The notation a \sim b means that the vectors a and b have the same signs, component
by component. Because f and g are monotonic nondecreasing functions, \dot{x} has the
same signs as -\partial S/\partial x, while \dot{y} has the same signs as \partial S/\partial y. In other words, the
dynamics of the excitatory neurons tends to minimize S, while that of the inhibitory
neurons tends to maximize S.
If the sign relation \sim is replaced by equality in (13), we obtain a true gradient
descent-ascent dynamics,
\tau_x \dot{x} = -\frac{\partial S}{\partial x} , \qquad \tau_y \dot{y} = \frac{\partial S}{\partial y} .   (15)
Sufficient conditions for convergence of gradient descent-ascent to saddle points
are known[13, 10]. The conditions can be derived using a Lyapunov function constructed from the kinetic energy and the saddle function,
L = \tfrac{1}{2}\tau_x|\dot{x}|^2 + \tfrac{1}{2}\tau_y|\dot{y}|^2 + rS .   (16)
The time derivative of L is given by
\dot{L} = -\dot{x}^T \frac{\partial^2 S}{\partial x^2} \dot{x} + \dot{y}^T \frac{\partial^2 S}{\partial y^2} \dot{y} - r\tau_x\,\dot{x}^T\dot{x} + r\tau_y\,\dot{y}^T\dot{y} .   (17)
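The following short derivation, added here as a sketch rather than quoted from the original text, shows why the mixed second-derivative terms drop out of (17). Differentiating (16) along the flow (15) gives

\dot{L} = \tau_x \dot{x}^T \ddot{x} + \tau_y \dot{y}^T \ddot{y} + r\Big( \frac{\partial S}{\partial x}^T \dot{x} + \frac{\partial S}{\partial y}^T \dot{y} \Big), \qquad
\tau_x \ddot{x} = -\frac{\partial^2 S}{\partial x^2}\dot{x} - \frac{\partial^2 S}{\partial x\,\partial y}\dot{y}, \qquad
\tau_y \ddot{y} = \frac{\partial^2 S}{\partial y\,\partial x}\dot{x} + \frac{\partial^2 S}{\partial y^2}\dot{y} .

The cross terms -\dot{x}^T (\partial^2 S/\partial x\,\partial y)\dot{y} and \dot{y}^T (\partial^2 S/\partial y\,\partial x)\dot{x} cancel, and substituting (15) into the remaining r-term gives -r\tau_x|\dot{x}|^2 + r\tau_y|\dot{y}|^2, which yields (17).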
Weak sufficient conditions can be derived with the choice r = 0, so that L includes
only kinetic energy terms. Then L is obviously lower bounded by zero. Furthermore,
L is nonincreasing if \partial^2 S/\partial x^2 is positive definite for all y and \partial^2 S/\partial y^2 is negative
definite for all x. In this case, the existence of a unique saddle point is guaranteed,
as S is convex in x for all y, and concave in y for all x[13, 10].
If there is more than one saddle point, the kinetic energy by itself is generally not
a Lyapunov function. This is because the dynamics may pass through the vicinity
of more than one saddle point before it finally converges, so that the kinetic energy
behaves nonmonotonically as a function of time. In this situation, some appropriate
nonzero r must be found.
The Lyapunov function (7) for excitatory-inhibitory networks is a generalization
of the Lyapunov function (16) for gradient descent-ascent. This is analogous to
the way in which the Lyapunov function for symmetric networks generalizes the
potential function of gradient descent.
It should be noted that gradient descent-ascent is an unreliable way of finding a
saddle point. It is easy to construct situations in which it leads to a limit cycle.
The unreliability of gradient descent-ascent contrasts with the reliability of gradient
descent at finding local minimum of a potential function. Similarly, symmetric
networks converge to fixed points, but excitatory-inhibitory networks can converge
to limit cycles as well.
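A two-line numerical experiment, not drawn from the paper, makes this unreliability concrete. For the bilinear saddle S(x, y) = xy (an arbitrary example for which \partial^2 S/\partial x^2 is not positive definite, so the sufficient conditions above do not apply), discrete gradient descent-ascent circles the saddle point and slowly spirals outward instead of converging.

import numpy as np

def descent_ascent(x, y, lr=0.05, steps=500):
    # Forward-Euler gradient descent-ascent on S(x, y) = x*y:
    #   x <- x - lr * dS/dx = x - lr * y
    #   y <- y + lr * dS/dy = y + lr * x
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

x0, y0 = 1.0, 0.0
xT, yT = descent_ascent(x0, y0)
print("distance from saddle, before:", np.hypot(x0, y0))
print("distance from saddle, after :", np.hypot(xT, yT))   # grows rather than shrinks

Each step multiplies the distance from the origin by sqrt(1 + lr^2), so the iterates rotate around the unique saddle point at the origin and drift away from it rather than converging.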
5
HAMILTONIAN DYNAMICS
The dynamics of an excitatory-inhibitory network can be written in a dissipative
Hamiltonian form. To do this, we define a phase space that is double the dimension
of the state space, adding momenta (p_x, p_y) that are canonically conjugate to (x, y).
The phase space dynamics
\tau_x \dot{x} + x = f(p_x) ,   (18)
\tau_y \dot{y} + y = g(p_y) ,   (19)
\Big(r + \frac{d}{dt}\Big)(u + Ax - By - p_x) = 0 ,   (20)
\Big(r + \frac{d}{dt}\Big)(v + B^T x - Cy - p_y) = 0 ,   (21)
reduces to the state space dynamics (1) on the affine space A = \{(p_x, p_y, x, y) : p_x = u + Ax - By,\ p_y = v + B^T x - Cy\}. Provided that r > 0, the affine space A is an attractive invariant manifold.
Defining the Hamiltonian
H(p_x, x, p_y, y) = \tau_x^{-1}\,\Phi(p_x, x) + \tau_y^{-1}\,\Gamma(p_y, y) + rS(x, y) ,   (22)
the phase space dynamics (18) can be written as
\dot{x} = \frac{\partial H}{\partial p_x} ,   (23)
\dot{y} = \frac{\partial H}{\partial p_y} ,   (24)
\dot{p}_x = -\frac{\partial H}{\partial x} + Ax - By - (\tau_x^{-1} + r)\big[p_x - f^{-1}(x)\big] ,   (25)
\dot{p}_y = -\frac{\partial H}{\partial y} + B^T x - Cy - (\tau_y^{-1} - r)\big[p_y - g^{-1}(y)\big]   (26)
\qquad\qquad + 2r\,(v + B^T x - Cy - p_y) .   (27)
On the invariant manifold A, the Hamiltonian is identical to the Lyapunov function
(7) defined previously.
The rate of change of the energy is given by
\dot{H} = \dot{x}^T A \dot{x} - (\tau_x^{-1} + r)\,\dot{x}^T\big[p_x - f^{-1}(x)\big] - \dot{y}^T C \dot{y} - (\tau_y^{-1} - r)\,\dot{y}^T\big[p_y - g^{-1}(y)\big] + 2r\,\dot{y}^T(v + B^T x - Cy - p_y) .   (28)
The last term vanishes on the invariant manifold, leaving a result identical to (8).
Therefore, if the noncanonical terms in the phase space dynamics (18) dissipate
energy, then the Hamiltonian is nonincreasing. It is also possible that the velocity-dependent terms may pump energy into the system, rather than dissipate it, in
which case oscillations or chaotic behavior may arise.
Acknowledgments This work was supported by Bell Laboratories. We would
like to thank Eric Mjolsness for useful discussions.
References
[1] M. A. Cohen and S. Grossberg. Absolute stability of global pattern formation and
parallel memory storage by competitive neural networks. IEEE Trans. on Systems, Man, and Cybernetics, 13:815-826, 1983.
[2] J. J. Hopfield. Neurons with graded response have collective computational properties
like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088-3092, 1984.
[3] J. J. Hopfield and D. W. Tank. Computing with neural circuits: a model. Science,
233:625-633, 1986.
[4] H. R. Wilson and J . D. Cowan. A mathematical theory of the functional dynamics
of cortical and thalamic nervous tissue. Kybernetik, 13:55-80, 1973.
[5] Z. Li and J. J. Hopfield. Modeling the olfactory bulb and its neural oscillatory processings. Biol. Cybern., 61:379-392, 1989.
[6] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol.
Cybern., 27:77-87, 1977.
[7] B. Ermentrout. Complex dynamics in winner-take-all neural nets with slow inhibition.
Neural Networks, 5:415-431, 1992.
[8] S. Amari and M. A. Arbib. Competition and cooperation in neural nets. In J. Metzler,
editor, Systems Neuroscience, pages 119-165. Academic Press, New York, 1977.
[9] E. Mjolsness and C. Garrett. Algebraic transformations of objective functions. Neural
Networks, 3:651-669, 1990.
[10] J. C. Platt and A. H. Barr. Constrained differential optimization. In D. Z. Anderson,
editor, Neural Information Processing Systems, page 55, New York, 1987. American
Institute of Physics.
[11] I. M. Elfadel. Convex potentials and their conjugates in analog mean-field optimization. Neural Computation, 7(5):1079-1104, 1995.
[12] J. D. Cowan. A statistical mechanics of nervous activity. In Some mathematical
questions in biology, volume III. AMS, 1972.
[13] K. J. Arrow, L. Hurwicz, and H. Uzawa. Studies in linear and non-linear programming.
Stanford University, Stanford, 1958.
369 | 1,337 | Analog VLSI Model of Intersegmental
Coordination With Nearest-Neighbor Coupling
Girish N. Patel
girish@ece.gatech.edu
Jeremy H. Holleman
jeremy@ece.gatech.edu
Stephen P. DeWeerth
steved@ece.gatech.edu
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, Ga. 30332-0250
Abstract
We have developed an analog VLSI system that models the coordination of neurobiological segmental oscillators. We have implemented and
tested a system that consists of a chain of eleven pattern generating circuits that are synaptically coupled to their nearest neighbors. Each pattern generating circuit is implemented with two silicon Morris-Lecar
neurons that are connected in a reciprocally inhibitory network. We discuss the mechanisms of oscillations in the two-cell network and explore
system behavior based on isotropic and anisotropic coupling, and frequency gradients along the chain of oscillators.
1 INTRODUCTION
In recent years, neuroscientists and modelers have made great strides towards illuminating structure and computational properties in biological motor systems. For example,
much progress has been made toward understanding the neural networks that elicit rhythmic motor behaviors, including leech heartbeat (Calabrese and De Schutter, 1992), crustacean stomatogastric mill (Selverston, 1989) and tritonia swimming (Getting, 1989). In
particular, segmented locomotory systems, such as those that underlie swimming in the
lamprey (Cohen and Kiemel, 1993; Sigvardt, 1993; Grillner et al., 1991) and in the leech
(Friesen and Pearce, 1993), are interesting from a quantitative perspective. In these systems, it is clear that coordinated motor behaviors are a result of complex interactions
among membrane, synaptic, circuit, and system properties. However, because of the lack
of sufficient neural underpinnings, a complete understanding of the computational principles in these systems is still lacking. Abstracting the biophysical complexity by modeling
segmented systems as coupled nonlinear oscillators is one approach that has provided
much insight into the operation of these systems (Cohen et al., 1982). More specifically,
this type of modeling work has illuminated computational properties that give rise to
phase constancy, a motor behavior that is characterized by intersegmental phase lags that
are maintained at constant values independent of swimming frequency. For example, it
has been shown that frequency gradients and asymmetrical coupling play an important
role in establishing phase lags of correct sign and amplitude (Kopell and Ermentrout,
1988) as well as appropriate boundary conditions (Williams and Sigvardt, 1994).
Although theoretical modeling has provided much insight into the operation of interseg-
mental systems, these models have limited capacity for incorporating biophysical properties and complex interconnectivity. Software and/or hardware emulation provides the
potential to add such complexity to system models. Additionally, the modularity and regularity in the anatomical and computational structures of intersegmental systems facilitate scalable representations. These factors make segmented systems particularly viable
for modeling using neuromorphic analog very large-scale integrated (aVLSI) technology.
In general, biological motor systems have a number of properties that make their realtime modeling using aVLSI circuits interesting and approachable. Like their sensory
counterparts, they exhibit rich emergent properties that are generated by collective architectures that are regular and modular. Additionally, the fact that motor processing is at the
periphery of the nervous system makes the analysis of the system behavior accessible due
to the fact that output of the system (embodied in the motor actions) is observable and
facilitates functional analysis.
The goals in this research are i) to study how the properties of individual neurons in a network affect the overall system behavior; (ii) to facilitate the validation of the principles
underlying intersegmental coordination; and (iii) to develop a real-time, low power,
motion control system. We want to exploit these principles and architectures both to
improve our understanding of the biology and to design artificial systems that perform
autonomously in various environments. In this paper we present an analog VLSI model of
intersegmental coordination that addresses the role of frequency gradients and asymmetrical coupling. Each segment in our system is implemented with two silicon model neurons that are connected in a reciprocally inhibitory network. A model of intersegmental
coordination is implemented by connecting eleven such oscillators, with nearest neighbor coupling. We present the neuron model, and we investigate the role of frequency gradients and asymmetrical coupling in the establishment of phase lags along a chain these
neural oscillators.
2 NEURON MODEL
In order to produce bursting activity, a neuron must possess "slow" intrinsic time constants in addition to the "fast" time constants that are necessary for the generation of
spikes. Hardware models of neurons with both slow and fast time constants have been
designed based upon previously described Hodgkin-Huxley neuron models (Mahowald
and Douglas, 1991). Although these circuits are good models of their biological counterparts, they are relatively complex, with a large parameter space and transistor count, limiting their usefulness in the development of large-scale systems. It has been shown
(Skinner et al., 1994), however, that pattern generation can be represented with only the slow
time constants, creating a system that represents the envelope of the bursting oscillations
without the individual spikes. Model neurons with only slow time constants have been
proposed by Morris and Lecar (1981).
We have implemented an analog VLSI model of the Morris-Lecar Neuron (Patel and
DeWeerth, 1997). Figure 1 shows the circuit diagram of this neuron. The model consists of two state variables: one corresponding to the membrane potential (V) and one corresponding to a slow variable (N).
Figure 1: Circuit diagram of the silicon Morris-Lecar neuron.
The slow variable is obtained by delaying the membrane potential by way of an operational transconductance amplifier (OTA) connected in
unity gain configuration with load capacitor C2. The membrane potential is obtained by
injecting two positive currents (I_ext and i_H) and two negative currents (i_L and i_syn) into
capacitor C1. Current i_H raises the membrane potential towards V_High when the membrane potential increases above V_H, whereas current i_L lowers the membrane potential
towards V_Low when the delayed membrane potential increases above V_L. The synaptic
current, i_syn, activates when the presynaptic input, V_pre, increases above V_thresh.
Assuming operation of transistors in weak inversion and synaptic coupling turned off (i_syn = 0), the equations of motion for the system are
C_1 \dot{V} = I_1(V, N) = I_{ext}\,\alpha_P + I_H \frac{\exp(\kappa(V - V_H)/U_T)}{1 + \exp(\kappa(V - V_H)/U_T)}\,\alpha_P - I_L \frac{\exp(\kappa(N - V_L)/U_T)}{1 + \exp(\kappa(N - V_L)/U_T)}\,\alpha_N ,
C_2 \dot{N} = I_2(V, N) = I_t \tanh\big(\kappa(V - N)/(2U_T)\big)\,\big(1 - \exp((N - V_{dd})/U_T)\big) .
The terms \alpha_P and \alpha_N, where \alpha_P = 1 - \exp((V - V_{High})/U_T) and \alpha_N = 1 - \exp((V_{Low} - V)/U_T), correspond to the ohmic effect of transistors M1 and M2 respectively. \kappa corresponds to the back-gate effect of a MOS transistor operated in weak inversion, and U_T corresponds to the thermal voltage. We can understand the behavior of this circuit by analyzing the geometry of the curves that yield zero motion (i.e., when I_1(V, N) = I_2(V, N) = 0). These curves, referred to as nullclines, are shown in Figure 2 for various values of external current.
The externally applied constant current (I_ext), which has the effect of shifting the V
nullcline in the positive vertical direction (see Figure 2), controls the mode of operation
of the neuron. When the V- and N-nullclines intersect between the local minimum and
local maximum of the V nullcline (P2 in Figure 2), the resulting fixed point is unstable
and the trajectories of the system approach a stable limit cycle (an endogenous bursting
mode). Fixed points to the left of the local minimum (P1 in Figure 2) or to the right of the
local maximum (P3 in Figure 2) are stable and correspond to a silent mode and a tonic
mode of the neuron respectively. An inhibitory synaptic current (i_syn) has the effect of
shifting the V nullcline in the negative vertical direction; depending on the state of a presynaptic cell, i_syn can dynamically change the mode of operation of the neuron.
3 TWO-CELL NETWORK
When two cells are connected in a reciprocally inhibitory network, the two cells will
oscillate in antiphase depending on the conditions of the free and inhibited cells and the
value of the synaptic threshold (Skinner et al., 1994). We assume that the turn-on characteristics of the synaptic current are sharp (valid for large V_High - V_Low) such that when the
membrane potential of a presynaptic cell reaches above V_thresh, the postsynaptic cell is
immediately inhibited by application of negative current I_syn to its membrane potential.
Figure 2: Nullclines and corresponding trajectories of the silicon Morris-Lecar neuron, for I_ext = 0, 2.5, and 5 nA.
If the free cell is an endogenous burster, the inhibited cell is silent, and the synaptic
threshold is between the local maximum of the free cell and the local minimum in the
inhibited cell, the mechanism for oscillation is due to intrinsic release. This mechanism
can be understood by observing that the free cell undergoes rapid depolarization when its
state approaches the local maximum, thus facilitating the release of the inhibited cell. If
the free cell is tonic and the inhibited cell is an endogenous burster (and conditions on the
synaptic threshold are the same as in the intrinsic release case), then the oscillations are
due to an intrinsic escape mechanism. This mechanism is understood by observing that
the inhibited cell undergoes rapid hyperpolarization, thus escaping inhibition, when its
state approaches the local minimum. Note that in both intrinsic release and intrinsic escape
mechanisms, the synaptic threshold has no effect on oscillator period because rapid
changes in membrane potential occur before the effect of the synaptic threshold.
When the free cell is an endogenous burster, the inhibited cell is silent, and the synaptic
threshold is to the right of the local maximum of the free cell, then the oscillations are
due to a synaptic release mechanism. This mechanism can be understood by observing
that when the membrane potential of the free cell reaches below the synaptic threshold,
the free cell ceases to inhibit the other cell, which causes the release of the inhibited cell.
When the free cell is tonic, the inhibited cell is an endogenous burster, and the synaptic threshold is to the left of the local minimum of the inhibited cell, then the oscillations
are due to a synaptic escape mechanism. This mechanism can be understood by observing that when the membrane potential of the inhibited cell crosses above the synaptic
threshold, the membrane potential of the inhibited cell is large enough to inhibit the
free cell. Note that increasing the synaptic threshold has the effect of increasing oscillator frequency for the synaptic release mechanism; however, oscillator frequency under the synaptic escape mechanism will decrease with an increase in the synaptic threshold.
By setting V_High - V_Low to a large value, the synaptic currents appear to have a sharp cutoff. However, because transistor currents saturate within a few thermal voltages, the
nullclines due to the membrane potential appear less cubic-like and more square-like.
This does not affect the qualitative behavior of the circuit, as we are able to produce antiphasic oscillations due to all four mechanisms. Figure 3 illustrates the four modes of
oscillations under various parameter regimes. Figure 3A shows typical waveforms from
two silicon neurons when they are configured in a reciprocally inhibitory network. The
oscillations in this case are due to the intrinsic release mechanism and the frequency of oscillations is insensitive to the synaptic threshold. When the synaptic threshold is increased
above 2.5 volts, the oscillations are due to the synaptic release mechanism and the oscillator frequency will increase as the synaptic threshold is increased, as shown in Figure 3C.
By adjusting I_ext such that the free cell is tonic and the inhibited cell bursts endogenously, we are able to produce oscillations due to the intrinsic escape mechanism, as
Figure 3: Experimental results from two neurons connected in a reciprocally inhibitory network. Antiphasic oscillations due to the intrinsic release mechanism (A) and the intrinsic escape mechanism (B). Dependence of oscillator frequency on synaptic threshold for the synaptic release mechanism (C) and the synaptic escape mechanism (D).
shown in Figure 3B. As the synaptic threshold is decreased below 0.3 volts, the oscillations are caused by the synaptic escape mechanism and oscillator frequency increases as
the synaptic threshold is decreased. The sharp transition between intrinsic and synaptic
mechanisms is due to nullclines that appear square-like.
4 CHAIN OF COUPLED NEURAL OSCILLATORS
In order to build a chain of pattern generating circuits with nearest neighbor coupling, we
designed our silicon neurons with five synaptic connections. The connections are made
using the synaptic spread rule proposed by Williams (1990). The rule states that a neuron
in any given segment can only connect to neurons in other segments that are homologues
to the neurons it connects to in the local segment. Therefore, each neuron makes two
inhibitory, contralateral connections and two excitatory, ipsilateral connections (as well as a
single inhibitory connection in the local segment). The synaptic circuit, shown in the
dashed box in Figure 1, is repeated for each inhibitory synapse and its complementary
version is repeated for the excitatory synapses. In order to investigate the role of frequency gradients, each neural oscillator has an independent parameter, I_ext, for setting
the intrinsic oscillator period. A set of global parameters, I_L, I_H, I_t, V_H, V_L, V_High,
and V_Low, control the mechanism of oscillation. These parameters are set such that the
mechanism of oscillation is intrinsic release.
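The synaptic spread rule described above can be written down directly as a connectivity matrix. The short Python sketch below is one possible encoding, added here for illustration only: the +/-1 entries stand in for excitatory and inhibitory connections and are not the circuit's actual synaptic conductances, and the indexing of the two neurons within a segment is an assumption.

import numpy as np

N_SEG = 11                              # segments in the chain
def neuron(seg, side):                  # side 0/1 = the two neurons of a segment
    return 2 * seg + side

W = np.zeros((2 * N_SEG, 2 * N_SEG))    # W[post, pre]: +1 excitatory, -1 inhibitory
for seg in range(N_SEG):
    for side in (0, 1):
        pre = neuron(seg, side)
        W[neuron(seg, 1 - side), pre] = -1          # local contralateral inhibition
        for nb in (seg - 1, seg + 1):               # nearest-neighbour segments only
            if 0 <= nb < N_SEG:
                W[neuron(nb, 1 - side), pre] = -1   # contralateral inhibitory
                W[neuron(nb, side), pre] = +1       # ipsilateral excitatory

# Each interior neuron makes five synaptic connections, as stated in the text.
print(int(np.count_nonzero(W[:, neuron(5, 0)])))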
Because of inherent mismatch of devices in CMOS technology, a consequence in our
model is that neurons with equal parameters do not necessarily behave with similar performance. Figure 4A illustrates the intrinsic oscillator period along the length of the system
when all neurons receive the same parameters. When the oscillators are symmetrically
coupled, the resulting phase differences along the chain are nonzero, as shown in
Figure 4B . The phase lags are negative with respect to the head position, thus the default
swim direction is backward. As the coupling strength is increased, indicated by the lowermost curves in Figure 4B, the phase lags become smaller, as expected, but do not diminish to produce synchronous oscillations. When the oscillators are locked to one common
frequency, \Omega, theory predicts (Kopell and Ermentrout, 1988) that the common frequency depends on the intrinsic oscillator frequencies and on the coupling from neighboring oscillators. In addition, under the condition of weak coupling, the effect of coupling can be quantified with coupling functions that depend on the phase difference between neighboring oscillators:
\Omega = \omega_i + H_A(\phi_i) + H_D(\phi_{i-1}) ,
where \omega_i is the intrinsic frequency of a given oscillator, H_A and H_D are coupling functions in the ascending and descending directions respectively, and \phi_i is the phase difference between the (i+1)th and ith oscillators. This equation suggests that the phase lags
must be large in order to compensate for large variations in the intrinsic oscillator frequencies.
Figure 4: Experimental data obtained from the system of coupled oscillators.
Another factor that can affect the intersegmental phase lag is the degree of anisotropic
coupling. To investigate the effect of asymmetrical coupling, we adjusted I_ext in each
segment so as to produce uniform intrinsic oscillator periods (to within ten percent of 115
ms) along the length of the system. Asymmetrical coupling is established by maintaining
V_avg = (V_ASC + V_DES)/2 at 0.7 volts and varying V_delta = V_ASC - V_DES from 0.4 to -0.4
volts. V_ASC and V_DES correspond to the bias voltages that set the synaptic conductances
of presynaptic inputs arriving from the ascending and descending directions respectively.
Throughout the experiment, the average strengths of the inhibitory (contralateral) and excitatory (ipsilateral) connections from one direction are maintained at equal levels. Figure 4C shows
the intersegmental phase lags at different levels of anisotropic coupling. Stronger ascending weights (V_delta = 0.4, 0.2 volts) produced negative phase lags, corresponding to backward swimming, while stronger descending connections (V_delta = -0.4, -0.2 volts)
produced positive phase lags, corresponding to forward swimming. Although mathematical models suggest that stronger ascending coupling should produce forward swimming,
we feel that the type of coupling (inhibitory contralateral and excitatory ipsilateral connections) and the oscillatory mode (intrinsic release) of the segmental oscillators may
account for this discrepancy.
To study the effects of frequency gradients, we adjusted I_ext at each segment such that the
oscillator period from the head to the tail (from segment 1 to segment 11) varied
from 300 ms to 100 ms in 20 ms increments. In addition, to minimize the effect of asymmetrical coupling, we set V_avg = 0.8 volts and V_delta = 0 volts. The absolute phases
under these conditions are shown in Figure 5A. The phase lags are negative with respect
to the head position, which corresponds to backward swimming. With a positive frequency gradient, head oscillator at 100 ms and tail oscillator at 300 ms, the resulting
phases are in the opposite direction, as shown in Figure 5B. These results are consistent
with mathematical models and the trailing oscillator hypothesis as expounded by Grillner et al. (1991).
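The qualitative effect of a frequency gradient in a nearest-neighbour chain can also be reproduced with a purely abstract phase-oscillator model. The Python sketch below is such an abstraction, added for illustration: the sine coupling functions, the coupling strength, and the dimensionless frequencies are arbitrary choices, not the coupling functions or parameters of the aVLSI circuit.

import numpy as np

def chain_phase_lags(omega, w_asc=1.0, w_desc=1.0, dt=0.01, T=100.0):
    # d(theta_i)/dt = omega_i + w_asc*sin(theta_{i+1} - theta_i)
    #                         + w_desc*sin(theta_{i-1} - theta_i)
    theta = np.zeros(len(omega))
    for _ in range(int(T / dt)):
        d = omega.copy()
        d[:-1] += w_asc * np.sin(theta[1:] - theta[:-1])    # input from the next segment
        d[1:]  += w_desc * np.sin(theta[:-1] - theta[1:])   # input from the previous segment
        theta += dt * d
    return np.diff(theta)            # intersegmental phase lags after locking

omega_up = np.linspace(1.0, 1.5, 11)             # frequency increasing from head to tail
print(np.round(chain_phase_lags(omega_up), 2))
print(np.round(chain_phase_lags(omega_up[::-1]), 2))   # reversed gradient

With these illustrative numbers the chain locks to a common frequency, and reversing the gradient reverses the sign of every intersegmental phase lag, which is the qualitative behaviour reported above for the silicon chain.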
5 CONCLUSIONS AND FUTURE WORK
We have implemented and tested an analog VLSI model of intersegmental coordination
with nearest neighbor coupling. We have explored the effects of anisotropic coupling and
frequency gradients on system behavior. One of our results-stronger ascending connections produced backward swimming instead of forward swimming-is contrary to theory. There are two factors that may account for this discrepancy: i) our system exhibits
inherent spatial disorder in the parameter space due to device mismatch. and ii) the operating point at which we performed the experiments retains high sensitivity to neuron
parameter variations and oscillatory modes. We are continuing to explore the parameter
space to determine if there are more robust operating points.
We expect that the limitation of our system to only including nearest-neighbor connections is a major factor in the large phase-lag variations that we observed. The importance
of both short and long distance connections in the regulation of constant phase under conditions of large variability in the parameter space has been shown by Cohen and Kiemel
(1993). To address these issues, we are currently designing a system that facilitates both
short and long distance connections (DeWeerth et al., 1997). Additionally, to study the
Figure 5: Absolute phase with negative (A) and positive (B) frequency gradients.
role of sensory feedback and to close the loop between neural control and motor behavior, we are also building a mechanical segmented system into which we will incorporate
our a VLSI models.
Acknowledgments
This research is supported by NSF grant IBN-95 I 1721. We would like to thank Avis
Cohen for discussion on computational properties that underlie coordinated motor behavior in the lamprey swim system. We would also like to thank Mario Simoni for discussions on pattern generating circuits. We thank the Georgia Tech Analog Consortium for
supporting students with travel funds.
References
Calabrese, R. and De Schutter, E. (1992). Motor-pattern-generating networks in invertebrates: modeling our way toward understanding. TINS, 15(11):439-445.
Cohen, A., Holmes, P. and Rand, R. (1982). The nature of coupling between segmental oscillators of the lamprey spinal generator for locomotion: a mathematical model. J. Math. Biol., 13:345-369.
DeWeerth, S., Patel, G., Schimmel, D., Simoni, M. and Calabrese, R. (1997). In Proceedings of the Seventeenth Conference on Advanced Research in VLSI, R.B. Brown and A.T. Ishii (eds), Los Alamitos, CA: IEEE Computer Society, 182-200.
Friesen, O. and Pearce, R. (1993). Mechanisms of intersegmental coordination in leech locomotion. SINS, 5:41-47.
Getting, P. (1989). A network oscillator underlying swimming in Tritonia. In Cellular and Neuronal Oscillators, J.W. Jacklet (ed), New York: Marcel Dekker, 101-128.
Grillner, S., Wallen, P., Brodin, L. and Lansner, A. (1991). Neuronal network generating locomotor behavior in lamprey: circuitry, transmitter, membrane properties, and simulation. Ann. Rev. Neurosci., 14:169-199.
Kopell, N. and Ermentrout, B. (1988). Coupled oscillators and the design of central pattern generators. Math. Biosci., 90:87-109.
Mahowald, M. and Douglas, R. (1991). A silicon neuron. Nature, 354:515-518.
Morris, C. and Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber. Biophys. J., 35:193-213.
Patel, G. and DeWeerth, S. (1997). Analogue VLSI Morris-Lecar neuron. Electronics Letters, IEE, 33(12):997-998.
Sigvardt, K. (1993). Intersegmental coordination in the lamprey central pattern generator for locomotion. SINS, 5:3-15.
Selverston, A. (1989). The lobster gastric mill oscillator. In Cellular and Neuronal Oscillators, J.W. Jacklet (ed), New York: Marcel Dekker, 338-370.
Skinner, F., Kopell, N. and Marder, E. (1994). Mechanisms for oscillation and frequency control in reciprocally inhibitory model neural networks. J. of Comp. Neuroscience, 1:69-87.
Williams, T. (1992). Phase coupling and synaptic spread in chains of coupled neuronal oscillators. Science, 258.
Williams, T. and Sigvardt, K. (1994). Intersegmental phase lags in the lamprey spinal cord: experimental confirmation of the existence of a boundary region. J. of Comp. Neuroscience, 1:61-67.
370 | 1,338 | Dynamic Stochastic Synapses as
Computational Units
Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universitat Graz,
A-B01O Graz, Austria.
email: maass@igi.tu-graz.ac.at
Anthony M. Zador
The Salk Institute
La Jolla, CA 92037, USA
email: zador@salk.edu
Abstract
In most neural network models, synapses are treated as static weights that
change only on the slow time scales of learning. In fact, however, synapses
are highly dynamic, and show use-dependent plasticity over a wide range
of time scales. Moreover, synaptic transmission is an inherently stochastic
process: a spike arriving at a presynaptic terminal triggers release of a
vesicle of neurotransmitter from a release site with a probability that can
be much less than one. Changes in release probability represent one of the
main mechanisms by which synaptic efficacy is modulated in neural circuits.
We propose and investigate a simple model for dynamic stochastic synapses
that can easily be integrated into common models for neural computation.
We show through computer simulations and rigorous theoretical analysis
that this model for a dynamic stochastic synapse increases computational
power in a nontrivial way. Our results may have implications for the processing of time-varying signals by both biological and artificial neural networks.
A synapse S carries out computations on spike trains, more precisely on trains of spikes
from the presynaptic neuron. Each spike from the presynaptic neuron may or may not
trigger the release of a neurotransmitter-filled vesicle at the synapse. The probability of a
vesicle release ranges from about 0.01 to almost 1. Furthermore this release probability is
known to be strongly "history dependent" [Dobrunz and Stevens, 1997]. A spike causes an
excitatory or inhibitory potential (EPSP or IPSP, respectively) in the postsynaptic neuron
only when a vesicle is released.
A spike train is represented as a sequence t of firing times, i.e. as an increasing sequence
of numbers t_1 < t_2 < ... from R_+ := \{z \in R : z \geq 0\}. For each spike train t the output of
synapse S consists of the sequence S(t) of those t_i \in t on which vesicles are "released" by
S, i.e. of those t_i \in t which cause an excitatory or inhibitory postsynaptic potential (EPSP
or IPSP, respectively). The map t \to S(t) may be viewed as a stochastic function that is
computed by synapse S. Alternatively one can characterize the output S(t) of a synapse
S through its release pattern q = q_1 q_2 ... \in \{R, F\}^*, where R stands for release and F for
failure of release. For each t_i \in t one sets q_i = R if t_i \in S(t), and q_i = F if t_i \notin S(t).
1
Basic model
The central equation in our dynamic synapse model gives the probability p_S(t_i) that the i-th
spike in a presynaptic spike train t = (t_1, ..., t_k) triggers the release of a vesicle at time t_i
at synapse S,
p_S(t_i) = 1 - \exp(-C(t_i)\cdot V(t_i)) .   (1)
The release probability is assumed to be nonzero only for t \in t, so that releases occur only
when a spike invades the presynaptic terminal (i.e. the spontaneous release probability is
assumed to be zero). The functions C(t) \geq 0 and V(t) \geq 0 describe, respectively, the states
of facilitation and depletion at the synapse at time t.
The dynamics of facilitation are given by
C(t) = C_0 + \sum_{t_i < t} c(t - t_i) ,   (2)
where C_0 \geq 0 is some parameter that can for example be related to the resting concentration
of calcium in the synapse. The exponential response function c(s) models the response of
C(t) to a presynaptic spike that had reached the synapse at time t - s: c(s) = \alpha \cdot e^{-s/\tau_C},
where the positive parameters \tau_C and \alpha give the decay constant and magnitude, respectively,
of the response. The function C models in an abstract way internal synaptic processes
underlying presynaptic facilitation, such as the concentration of calcium in the presynaptic
terminal. The particular exponential form used for c(s) could arise for example if presynaptic
calcium dynamics were governed by a simple first order process.
The dynamics of depletion are given by
V(t) = \max\Big(0,\ V_0 - \sum_{t_i:\, t_i < t\ \mathrm{and}\ t_i \in S(t)} v(t - t_i)\Big)   (3)
for some parameter V_0 > 0. V(t) depends on the subset of those t_i \in t with t_i < t on which
vesicles were actually released by the synapse, i.e. t_i \in S(t). The function v(s) models the
response of V(t) to a preceding release of the same synapse at time t - s < t. Analogously
as for c(s) one may choose for v(s) a function with exponential decay v(s) = e^{-s/\tau_V},
where \tau_V > 0 is the decay constant. The function V models in an abstract way internal
synaptic processes that support presynaptic depression, such as depletion of the pool of
readily releasable vesicles. In a more specific synapse model one could interpret V_0 as the
maximal number of vesicles that can be stored in the readily releasable pool, and V(t) as
the expected number of vesicles in the readily releasable pool at time t.
In summary, the model of synaptic dynamics presented here is described by five parameters: C_0, V_0, \tau_C, \tau_V and \alpha. The dynamics of a synaptic computation and its internal
variables C(t) and V(t) are indicated in Fig. 1.
For low release probabilities, Eq. 1 can be expanded to first order around r(t) := C(t) \cdot V(t) = 0 to give
p_S(t_i) \approx C(t_i) \cdot V(t_i) .   (4)
Similar expressions have been widely used to describe synaptic dynamics for multiple
synapses [Magleby, 1987, Markram and Tsodyks, 1996, Varela et al., 1997].
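The model of Eqs. (1)-(3) is straightforward to simulate. The Python sketch below is one minimal implementation, included here for illustration; the spike times and the parameter values (which happen to be those quoted later for Fig. 3A) are used only as an example.

import numpy as np

def simulate_synapse(spike_times, C0, V0, tau_C, tau_V, alpha, rng):
    # Sample one release pattern: p(t_i) = 1 - exp(-C(t_i) * V(t_i)),
    # with C built from all earlier spikes and V depleted by earlier releases.
    releases, pattern = [], []
    for t in spike_times:
        C = C0 + sum(alpha * np.exp(-(t - s) / tau_C) for s in spike_times if s < t)
        V = max(0.0, V0 - sum(np.exp(-(t - s) / tau_V) for s in releases))
        if rng.random() < 1.0 - np.exp(-C * V):
            releases.append(t)
            pattern.append("R")
        else:
            pattern.append("F")
    return "".join(pattern)

rng = np.random.default_rng(0)
print(simulate_synapse([0.0, 10.0, 15.0], C0=1.5, V0=0.5,
                       tau_C=5.0, tau_V=9.0, alpha=0.7, rng=rng))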
In our synapse model, we have assumed a standard exponential form for the decay of facilitation and depression (see e.g. [Magleby, 1987, Markram and Tsodyks, 1996,
Varela et al., 1997, Dobrunz and Stevens, 1997]}. We have further assumed a multiplicative interaction between facilitation and depletion. While this form has not been validated
Figure 1: Synaptic computation on a spike train t, together with the temporal dynamics of the internal variables C and V of our model (panels, top to bottom: presynaptic spike train; facilitation C(t); depression V(t); release probabilities p(t_i); release pattern). Note that V(t) changes its value only when a presynaptic spike causes release.
at single synapses, in the limit of low release probability (see Eq. 4) it agrees with the
multiplicative term employed in [Varela et al., 1997] to describe the dynamics of multiple
synapses.
The assumption that release at individual release sites of a synapse is binary, i.e. that
each release site releases 0 or 1 (but not more than 1) vesicle when invaded by a spike, leads
to the exponential form of Eq. 1 [Dobrunz and Stevens, 1997]. We emphasize the formal
distinction between release site and synapse. A synapse might consist of several release sites
in parallel, each of which has a dynamics similar to that of the stochastic "synapse model"
we consider.
2
Results
2.1
Different "Weights" for the First and Second Spike in a Train
We start by investigating the range of different release probabilities p_S(t_1), p_S(t_2) that a
synapse S can assume for the first two spikes in a given spike train. These release probabilities depend on t_2 - t_1 as well as on the values of the internal parameters C_0, V_0, \tau_C, \tau_V, \alpha
of the synapse S. Here we analyze the potential freedom of a synapse to choose values for
p_S(t_1) and p_S(t_2). We show in Theorem 2.1 that the range of values for the release probabilities for the first two spikes is quite large, and that the entire attainable range can be
reached through suitable choices of C_0 and V_0.
Theorem 2.1 Let (t_1, t_2) be some arbitrary spike train consisting of two spikes, and let
p_1, p_2 \in (0, 1) be some arbitrary given numbers with p_2 > p_1 \cdot (1 - p_1). Furthermore assume
that arbitrary positive values are given for the parameters \alpha, \tau_C, \tau_V of a synapse S. Then one
can always find values for the two parameters C_0 and V_0 of the synapse S so that p_S(t_1) = p_1
and p_S(t_2) = p_2.
Furthermore the condition p_2 > p_1 \cdot (1 - p_1) is necessary in a strong sense. If p_2 \leq
p_1 \cdot (1 - p_1) then no synapse S can achieve p_S(t_1) = p_1 and p_S(t_2) = p_2 for any spike train
(t_1, t_2) and for any values of its parameters C_0, V_0, \tau_C, \tau_V, \alpha.
If one associates the current sum of release probabilities of multiple synapses or release
sites between two neurons u and v with the current value of the "connection strength" wu,v
between two neurons in a formal neural network model, then the preceding result points
Figure 2: The dotted area indicates the range of pairs (p_1, p_2) of release probabilities for the
first and second spike through which a synapse can move (for any given interspike interval)
by varying its parameters C_0 and V_0.
to a significant difference between the dynamics of computations in biological circuits and
formal neural network models. Whereas in formal neural network models it is commonly
assumed that the value of a synaptic weight stays fixed during a computation, the release
probabilities of synapses in biological neural circuits may change on a fast time scale within
a single computation.
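The necessity direction of Theorem 2.1 can be probed numerically. The sketch below, added as an informal sanity check rather than a proof, evaluates p_S(t_1) and the marginal p_S(t_2) over a grid of C_0, V_0 values; the remaining parameters and the interspike interval are arbitrary.

import numpy as np

def first_two_probs(C0, V0, alpha=0.7, tau_C=5.0, tau_V=9.0, dt=5.0):
    # p1 from Eq. (1); p2 is the marginal over whether the first spike released.
    p1 = 1.0 - np.exp(-C0 * V0)
    C2 = C0 + alpha * np.exp(-dt / tau_C)
    V2 = max(0.0, V0 - np.exp(-dt / tau_V))        # depleted V if spike 1 released
    p2 = p1 * (1 - np.exp(-C2 * V2)) + (1 - p1) * (1 - np.exp(-C2 * V0))
    return p1, p2

worst = min(p2 - p1 * (1 - p1)
            for C0 in np.linspace(0.01, 5.0, 50)
            for V0 in np.linspace(0.01, 5.0, 50)
            for p1, p2 in [first_two_probs(C0, V0)])
print("min of p2 - p1*(1 - p1) over the grid:", worst)   # remains positive

Because facilitation can only increase C between the first and second spike, the no-release branch alone already contributes more than p_1(1 - p_1), so the printed minimum stays above zero, consistent with the theorem.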
2.2
Release Patterns for the First Three Spikes
In this section we examine the variety of release patterns that a synapse can produce for
spike trains tl, t2, t3, ' " with at least three spikes. We show not only that a synapse can
make use of different parameter settings to produce 'different release patterns, but also that
a synapse with a fixed parameter setting can respond quite differently to spike trains with
different interspike intervals. Hence a synapse can serve as pattern detector for temporal
patterns in spike trains.
It turns out that the structure of the triples of release probabilities
(p_S(t_1), p_S(t_2), p_S(t_3)) that a synapse can assume is substantially more complicated
than for the first two spikes considered in the previous section. Therefore we focus here
on the dependence of the most likely release pattern q \in \{R, F\}^3 on the internal synaptic
parameters and on the interspike intervals I_1 := t_2 - t_1 and I_2 := t_3 - t_2. This dependence
is in fact quite complex, as indicated in Fig. 3.
Figure 3: (A, left) Most likely release pattern of a synapse in dependence of the interspike intervals I_1 and I_2. The synaptic parameters are C_0 = 1.5, V_0 = 0.5, \tau_C = 5, \tau_V = 9, \alpha = 0.7. (B, right) Release patterns for a synapse with other values of its parameters (C_0 = 0.1, V_0 = 1.8, \tau_C = 15, \tau_V = 30, \alpha = 1).
Fig. 3A shows the most likely release pattern for each given pair of interspike intervals
(I_1, I_2), given a particular fixed set of synaptic parameters. One can see that a synapse with
fixed parameter values is likely to respond quite differently to spike trains with different
interspike intervals. For example, even if one just considers spike trains with I_1 = I_2 one
moves in Fig. 3A through 3 different release patterns that take their turn in becoming the
most likely release pattern as I_1 varies. Similarly, if one only considers spike trains with
a fixed time interval t_3 - t_1 = I_1 + I_2 = \Delta, but with different positions of the second spike
within this time interval of length \Delta, one sees that the most likely release pattern is quite
sensitive to the position of the second spike within this time interval \Delta. Fig. 3B shows
that a different set of synaptic parameters gives rise to a completely different assignment of
release patterns.
We show in the next theorem that the boundaries between the zones in these figures
are "plastic": by changing the values of C_0, V_0, \alpha the synapse can move the zone for most
of the release patterns q to any given point (I_1, I_2). This result provides another example
for a new type of synaptic plasticity that can no longer be described in terms of a decrease
or increase of the synaptic "weight".
Theorem 2.2 Assume that an arbitrary number p \in (0, 1) and an arbitrary pattern (I_1, I_2)
of interspike intervals is given. Furthermore assume that arbitrary fixed positive values are
given for the parameters \tau_C and \tau_V of a synapse S. Then for any pattern q \in \{R, F\}^3 except
RRF, FFR one can assign values to the other parameters \alpha, C_0, V_0 of this synapse S so that
the probability of release pattern q for a spike train with interspike intervals I_1, I_2 becomes
larger than p.
It is shown in the full version of this paper [Maass and Zador, 1997] that it is not possible
to make the release patterns RRF and FFR arbitrarily likely for any given spike train with
interspike intervals (I_1, I_2).
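For three spikes the pattern distribution can be computed exactly by enumerating the eight possible release histories. The following Python sketch does this for the model of Eqs. (1)-(3); it is an illustration only, and the interspike intervals chosen below are arbitrary test values (the parameters are those of Fig. 3A).

import itertools
import numpy as np

def pattern_probs(I1, I2, C0, V0, tau_C, tau_V, alpha):
    # Exact probability of each pattern in {R,F}^3 for spikes at 0, I1, I1+I2.
    times, probs = [0.0, I1, I1 + I2], {}
    for pattern in itertools.product("RF", repeat=3):
        prob, released = 1.0, []
        for t, q in zip(times, pattern):
            C = C0 + sum(alpha * np.exp(-(t - s) / tau_C) for s in times if s < t)
            V = max(0.0, V0 - sum(np.exp(-(t - s) / tau_V) for s in released))
            p = 1.0 - np.exp(-C * V)
            prob *= p if q == "R" else 1.0 - p
            if q == "R":
                released.append(t)
        probs["".join(pattern)] = prob
    return probs

probs = pattern_probs(I1=4.0, I2=4.0, C0=1.5, V0=0.5, tau_C=5.0, tau_V=9.0, alpha=0.7)
print(max(probs, key=probs.get))                 # most likely pattern for this (I1, I2)
print(round(sum(probs.values()), 6))             # the eight probabilities sum to 1

Sweeping I_1 and I_2 over a grid and recording the most likely pattern at each point reproduces a zone diagram of the kind sketched in Fig. 3.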
2.3
Computing with Firing Rates
So far we have considered the effect of short trains of two or three presynaptic spikes on
synaptic release probability. Our next result (cf. Fig. 4) shows that also two longer Poisson
spike trains that represent the same firing rate can produce quite different numbers of synaptic
releases, depending on the synaptic parameters. To emphasize that this is due to the pattern
of interspike intervals, and not simply to the number of spikes, we compared the outputs in
response to two Poisson spike trains A and B with the same number (10) of spikes. These
examples indicate that even in the context of rate coding, synaptic efficacy may not be well
described in terms of a single scalar parameter w.
2.4
Burst Detection
Here we show that the computational power of a spiking (e.g. integrate-and-fire) neuron with
stochastic dynamic synapses is strictly larger than that of a spiking neuron with traditional
"static" synapses (cf Lisman, 1997). Let T be a some given time window, and consider the
computational task of detecting whether at least one of n presynaptic neurons a1, . .. ,an
fire at least twice during T ("burst detection"). To make this task computationally feasible
we assume that none of the neurons al, ... ,an fires outside of this time window.
Theorem 2.3 A spiking neuron v with dynamic stochastic synapses can solve this burst
detection task (with arbitrarily high reliability). On the other hand no spiking neuron with
static synapses can solve this task (for any assignment of "weights" to its synapses).^1
^1 We assume here that neuronal transmission delays differ by less than (n - 1) \cdot T, where by
transmission delay we refer to the temporal delay between the firing of the presynaptic neuron and
its effect on the postsynaptic target.
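A facilitation-dominated synapse gives a simple intuition for Theorem 2.3: with C_0 small and \alpha large, isolated spikes almost never trigger a release, while the second spike of a burst does so with high probability, so a postsynaptic neuron that fires on any release acts as a burst detector. The Monte Carlo sketch below illustrates this; all numbers, the firing rule, and the spike times are illustrative assumptions, not the construction used in the proof.

import numpy as np

rng = np.random.default_rng(1)

def n_releases(spike_times, C0=0.005, V0=1.0, tau_C=20.0, tau_V=50.0, alpha=4.0):
    # Releases of one facilitation-dominated synapse (Eqs. (1)-(3)).
    rel = []
    for t in spike_times:
        C = C0 + sum(alpha * np.exp(-(t - s) / tau_C) for s in spike_times if s < t)
        V = max(0.0, V0 - sum(np.exp(-(t - s) / tau_V) for s in rel))
        if rng.random() < 1.0 - np.exp(-C * V):
            rel.append(t)
    return len(rel)

def fires(afferent_trains):
    # Illustrative postsynaptic rule: fire iff at least one synapse releases.
    return any(n_releases(train) > 0 for train in afferent_trains)

n, trials = 10, 500
single_spikes = [[10.0 * i] for i in range(n)]          # every afferent fires exactly once
with_burst = [list(t) for t in single_spikes]
with_burst[0] = [10.0, 15.0]                            # afferent 0 fires a doublet instead
print("false alarm rate:", np.mean([fires(single_spikes) for _ in range(trials)]))
print("detection rate  :", np.mean([fires(with_burst) for _ in range(trials)]))

With these numbers the false-alarm rate stays around five percent while the detection rate is close to one; shrinking C_0 and raising \alpha pushes both toward the ideal, in line with the arbitrarily high reliability claimed by the theorem.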
Figure 4: Release probabilities of two synapses for two Poisson spike trains A and B with 10
spikes each. The release probabilities for the first synapse are shown on the left hand side,
and for the second synapse on the right hand side. For both synapses the release probabilities
for spike train A are shown at the top, and for spike train B at the bottom. The first synapse
has for spike train A a 22 % higher average release probability, whereas the second synapse
has for spike train B a 16 % higher average release probability. Note that the fourth spike
in spike train B has for the first synapse a release probability of nearly zero and so is not
visible.
2.5
Translating Interval Coding into Population Coding
Assume that information is encoded in the length I of the interspike interval between the
times t_1 and t_2 when a certain neuron v fires, and that different motor responses need to
be initiated depending on whether I < a or I > a, where a is some given parameter (cf.
[Hopfield, 1995]). For that purpose it would be useful to translate the information encoded
in the interspike interval I into the firing activity of populations of neurons ("population
coding"). Fig. 5 illustrates a simple mechanism for that task based on dynamic synapses.
The synaptic parameters are chosen so that facilitation dominates (i.e., C_0 should be small
and \alpha large) at synapses between neuron v and the postsynaptic population of neurons. The
release probability for the first spike is then close to 0, whereas the release probability for
the second spike is fairly large if I < a and significantly smaller if I is substantially larger
than a. If the resulting firing activity of the postsynaptic neurons is positively correlated
with the total number of releases of these synapses, then their population response is also
positively correlated with the length of the interspike interval I.
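A sketch of this mechanism under an assumed exponential facilitation (time constant tau_C, so that the boundary a corresponds to a few tau_C); the constants below are illustrative and not taken from the paper.

import math

def second_spike_release_prob(I, C0=0.02, alpha=3.0, tau_C=20.0, V=1.0):
    # Facilitation remaining after an interspike interval I (ms); the first-spike release
    # probability is only 1 - exp(-C0 * V), i.e. close to 0.
    C = C0 + alpha * math.exp(-I / tau_C)
    return 1.0 - math.exp(-C * V)

for I in (5, 10, 20, 40, 80, 160):
    print(I, round(second_spike_release_prob(I), 3))   # large for I < a, small for I >> a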
[Figure 5 schematic: presynaptic spikes with interspike interval I; synaptic response FR if I < a, FF if I > a; resulting activation of postsynaptic neurons 1 if I < a, 0 if I > a.]
Figure 5: A mechanism for translating temporal coding into population coding.
3 Discussion
We have explored computational implications of a dynamic stochastic synapse model. Our
model incorporates several features of biological synapses usually omitted in the connections
or weights conventionally used in artificial neural network models. Our main result is that a
neural circuit in which connections are dynamic has fundamentally greater power than one
in which connections are static. We refer to [Maass and Zador, 1997] for details. Our results
may have implications for computation in both biological and artificial neural networks, and
particularly for the processing of signals with interesting temporal structure.
Several groups have recently proposed a computational role for one form of use-dependent short-term synaptic plasticity [Abbott et al., 1997, Tsodyks and Markram, 1997].
They showed that, under the experimental conditions tested, synaptic depression (of a form
analogous to V(t) in our Eq. (3)) can implement a form of gain control in which the steady-state synaptic output is independent of the input firing rate over a wide range of firing
rates. We have adopted a more general approach in which, rather than focussing on a particular role for short term plasticity, we allow the dynamic synapse parameters to vary. This
approach is analogous to that adopted in the study of artificial neural networks, in which
few if any constraints are placed on the connections between units. In our more general
framework, standard neural network tasks such as supervised and unsupervised learning
can be formulated (see also [Liaw and Berger, 1996]). Indeed, a backpropagation-like gradient descent algorithm can be used to adjust the parameters of a network connected by
dynamic synapses (Zador and Maass, in preparation). The advantages of dynamic synapses
may become most apparent in the processing of time-varying signals.
References
[Abbott et al., 1997] Abbott, L. F., Varela, J. A., Sen, K., and Nelson, S. B. (1997). Synaptic depression and cortical gain control. Science, 275:220-224.
[Dobrunz and Stevens, 1997] Dobrunz, L. and Stevens, C. (1997). Heterogeneity of release
probability, facilitation and depletion at central synapses. Neuron, 18:995-1008.
[Hopfield, 1995] Hopfield, J. (1995). Pattern recognition computation using action potential
timing for stimulus representation. Nature, 376:33-36.
[Liaw and Berger, 1996] Liaw, J.-S. and Berger, T. (1996). Dynamic synapse: A new concept of neural representation and computation. Hippocampus, 6:591-600.
[Lisman, 1997] Lisman, J. (1997). Bursts as a unit of neural information: making unreliable
synapses reliable. TINS, 20:38-43.
[Maass and Zador, 1997] Maass, W. and Zador, A. (1997). Dynamic stochastic synapses as
computational units. http://www.sloan.salk.edu/~zador/publications.html.
[Magleby, 1987] Magleby, K. (1987). Short term synaptic plasticity. In Edelman, G. M.,
Gall, W. E., and Cowan, W. M., editors, Synaptic function. Wiley, New York.
[Markram and Tsodyks, 1996] Markram, H. and Tsodyks, M. (1996). Redistribution of
synaptic efficacy between neocortical pyramidal neurons. Nature, 382:807-10.
[Stevens and Wang, 1995] Stevens, C. and Wang, Y. (1995). Facilitation and depression at
single central synapses. Neuron, 14:795-802.
[Tsodyks and Markram, 1997] Tsodyks, M. and Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability.
Proc. Natl. Acad. Sci., 94:719-23.
[Varela et al., 1997] Varela, J. A., Sen, K., Gibson, J., Fost, J., Abbott, L. F., and Nelson,
S. B. (1997). A quantitative description of short-term plasticity at excitatory synapses in
layer 2/3 of rat primary visual cortex. J. Neurosci, 17:7926-7940.
371 | 1,339 | Modeling acoustic correlations by
factor analysis
Lawrence Saul and Mazin Rahim
{lsaul,mazin}@research.att.com
AT&T Labs - Research
180 Park Ave, D-130
Florham Park, NJ 07932
Abstract
Hidden Markov models (HMMs) for automatic speech recognition
rely on high dimensional feature vectors to summarize the short-time properties of speech. Correlations between features can arise
when the speech signal is non-stationary or corrupted by noise. We
investigate how to model these correlations using factor analysis,
a statistical method for dimensionality reduction . Factor analysis
uses a small number of parameters to model the covariance structure of high dimensional data. These parameters are estimated
by an Expectation-Maximization (EM) algorithm that can be embedded in the training procedures for HMMs. We evaluate the
combined use of mixture densities and factor analysis in HMMs
that recognize alphanumeric strings. Holding the total number of
parameters fixed, we find that these methods, properly combined,
yield better models than either method on its own.
1 Introduction
Hidden Markov models (HMMs) for automatic speech recognition[l] rely on high
dimensional feature vectors to summarize the short-time, acoustic properties of
speech. Though front-ends vary from recognizer to recognizer, the spectral information in each frame of speech is typically codified in a feature vector with thirty
or more dimensions. In most systems, these vectors are conditionally modeled by
mixtures of Gaussian probability density functions (PDFs). In this case, the correlations between different features are represented in two ways[2]: implicitly by the
use of two or more mixture components, and explicitly by the non-diagonal elements
in each covariance matrix. Naturally, these strategies for modeling correlations (implicit versus explicit) involve tradeoffs in accuracy, speed, and memory. This
paper examines these tradeoffs using the statistical method of factor analysis.
The present work is motivated by the following observation. Currently, most HMM-based recognizers do not include any explicit modeling of correlations; that is to
say-conditioned on the hidden states, acoustic features are modeled by mixtures of
Gaussian PDFs with diagonal covariance matrices. The reasons for this practice are
well known. The use of full covariance matrices imposes a heavy computational burden, making it difficult to achieve real-time recognition. Moreover, one rarely has
enough data to (reliably) estimate full covariance matrices. Some of these disadvantages can be overcome by parameter-tying[3]-e.g., sharing the covariance matrices
across different states or models. But parameter-tying has its own drawbacks: it
considerably complicates the training procedure, and it requires some artistry to
know which states should and should not be tied.
Unconstrained and diagonal covariance matrices clearly represent two extreme
choices for the hidden Markov modeling of speech. The statistical method of factor
analysis[4,5] represents a compromise between these two extremes. The idea behind
factor analysis is to map systematic variations of the data into a lower dimensional
subspace. This enables one to represent, in a very compact way, the covariance matrices for high dimensional data. These matrices are expressed in terms of a small
number of parameters that model the most significant correlations without incurring much overhead in time or memory. Maximum likelihood estimates of these
parameters are obtained by an Expectation-Maximization (EM) algorithm that can
be embedded in the training procedures for HMMs.
In this paper we investigate the use of factor analysis in continuous density HMMs.
Applying factor analysis at the state and mixture component level[6, 7] results in
a powerful form of dimensionality reduction, one tailored to the local properties
of speech. Briefly, the organization of this paper is as follows. In section 2, we
review the method of factor analysis and describe what makes it attractive for large
problems in speech recognition. In section 3, we report experiments on the speakerindependent recognition of connected alpha-digits. Finally, in section 4, we present
our conclusions as well as ideas for future research.
2 Factor analysis
Factor analysis is a linear method for dimensionality reduction of Gaussian random
variables[4, 5]. Many forms of dimensionality reduction (including those implemented as neural networks) can be understood as variants of factor analysis. There
are particularly close ties to methods based on principal components analysis (PCA)
and the notion of tangent distance[8]. The combined use of mixture densities and
factor analysis-resulting in a non-linear form of dimensionality reduction-was
first applied by Hinton et al[6] to the modeling of handwritten digits. The EM
procedure for mixtures of factor analyzers was subsequently derived by Ghahramani et al[7]. Below we describe the method of factor analysis for Gaussian random
variables, then show how it can be applied to the hidden Markov modeling of speech.
2.1 Gaussian model
Let x ∈ R^D denote a high dimensional Gaussian random variable. For simplicity,
we will assume that x has zero mean. If the number of dimensions, D, is very
large, it may be prohibitively expensive to estimate, store, multiply, or invert a full
covariance matrix. The idea behind factor analysis is to find a subspace of much
lower dimension, f << D, that captures most of the variations in x. To this end, let
z ∈ R^f denote a low dimensional Gaussian random variable with zero mean and
identity covariance matrix:
P(z) = (2π)^{-f/2} exp(-zᵀz/2).   (1)
We now imagine that the variable x is generated by a random process in which z is a
latent (or hidden) variable; the elements of z are known as the factors. Let A denote
an arbitrary D × f matrix, and let Ψ denote a diagonal, positive-definite D × D
matrix. We imagine that x is generated by sampling z from eq. (1), computing the
D-dimensional vector Az, then adding independent Gaussian noise (with variances
Ψ_ii) to each component of this vector. The matrix A is known as the factor loading
matrix. The relation between x and z is captured by the conditional distribution:
P(x|z) = |Ψ|^{-1/2} (2π)^{-D/2} exp(-(x - Az)ᵀ Ψ^{-1} (x - Az)/2).   (2)
The marginal distribution for x is found by integrating out the hidden variable z.
The calculation is straightforward because both P(z) and P(x|z) are Gaussian:
P(x) = ∫ dz P(x|z) P(z)   (3)
     = |Ψ + AAᵀ|^{-1/2} (2π)^{-D/2} exp(-xᵀ (Ψ + AAᵀ)^{-1} x / 2).   (4)
From eq. (4), we see that x is normally distributed with mean zero and covariance
matrix Ψ + AAᵀ. It follows that when the diagonal elements of Ψ are small, most
of the variation in x occurs in the subspace spanned by the columns of A. The
variances Ψ_ii measure the typical size of componentwise fluctuations outside this
subspace.
Covariance matrices of the form Ψ + AAᵀ have a number of useful properties. Most
importantly, they are expressed in terms of a small number of parameters, namely
the D(f + 1) non-zero elements of A and Ψ. If f << D, then storing A and Ψ requires
much less memory than storing a full covariance matrix. Likewise, estimating A and
Ψ also requires much less data than estimating a full covariance matrix. Covariance
matrices of this form can be efficiently inverted using the matrix inversion lemma[9],
(Ψ + AAᵀ)^{-1} = Ψ^{-1} - Ψ^{-1} A (I + Aᵀ Ψ^{-1} A)^{-1} Aᵀ Ψ^{-1},   (5)
where I is the f × f identity matrix. This decomposition also allows one to compute the probability P(x) with only O(fD) multiplies, as opposed to the O(D²)
multiplies that are normally required when the covariance matrix is non-diagonal.
Maximum likelihood estimates of the parameters A and Ψ are obtained by an EM
procedure[4]. Let {x_t} denote a sample of data points (with mean zero). The EM
procedure is an iterative procedure for maximizing the log-likelihood, Σ_t ln P(x_t),
with P(x_t) given by eq. (4). The E-step of this procedure is to compute:
Q(A', Ψ'; A, Ψ) = Σ_t ∫ dz P(z|x_t, A, Ψ) ln P(z, x_t | A', Ψ').   (6)
The right hand side of eq. (6) depends on A and Ψ through the statistics[7]:
E[z|x_t] = [I + Aᵀ Ψ^{-1} A]^{-1} Aᵀ Ψ^{-1} x_t,   (7)
E[zzᵀ|x_t] = [I + Aᵀ Ψ^{-1} A]^{-1} + E[z|x_t] E[z|x_t]ᵀ.   (8)
Here, E[·|x_t] denotes an average with respect to the posterior distribution,
P(z|x_t, A, Ψ). The M-step of the EM algorithm is to maximize the right hand
side of eq. (6) with respect to Ψ' and A'. This leads to the iterative updates[7]:
A' = (Σ_t x_t E[zᵀ|x_t]) (Σ_t E[zzᵀ|x_t])^{-1},   (9)
Ψ' = diag{ (1/N) Σ_t [x_t x_tᵀ - A' E[z|x_t] x_tᵀ] },   (10)
where N is the number of data points, and Ψ' is constrained to be purely diagonal. These updates are guaranteed to converge monotonically to a (possibly local)
maximum of the log-likelihood.
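A compact sketch of one EM iteration, following eqs. (7)-(10), for a single zero-mean factor analysis model; X holds the mean-centered data, one row per point, and the toy data at the bottom is illustrative.

import numpy as np

def fa_em_step(X, A, psi):
    N, D = X.shape
    f = A.shape[1]
    PinvA = A / psi[:, None]                              # Psi^{-1} A
    M = np.eye(f) + A.T @ PinvA                           # I + A^T Psi^{-1} A
    # E-step, eqs. (7)-(8)
    Ez = X @ np.linalg.solve(M, PinvA.T).T                # E[z | x_t], one row per point
    Ezz = N * np.linalg.inv(M) + Ez.T @ Ez                # sum_t E[z z^T | x_t]
    # M-step, eqs. (9)-(10)
    A_new = (X.T @ Ez) @ np.linalg.inv(Ezz)
    psi_new = np.mean(X * X - (Ez @ A_new.T) * X, axis=0)
    return A_new, psi_new

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) @ (0.5 * rng.standard_normal((10, 10)))
X -= X.mean(axis=0)
A, psi = 0.01 * rng.standard_normal((10, 2)), np.ones(10)
for _ in range(50):
    A, psi = fa_em_step(X, A, psi)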
2.2 Hidden Markov modeling of speech
Consider a continuous density HMM whose feature vectors, conditioned on the
hidden states, are modeled by mixtures of Gaussian PDFs. If the dimensionality of
the feature space is very large, we can make use of the parameterization in eq. (4).
Each mixture component thus obtains its own means, variances, and factor loading
matrix. Taken together , these amount to a total of C(f + 2)D parameters per
mixture model, where C is the number of mixture components, f the number of
factors, and D the dimensionality of the feature space. Note that these models
capture feature correlations in two ways: implicitly, by using two or more mixture
components, and explicitly, by using one or more factors. Intuitively, one expects
the mixture components to model discrete types of variability (e.g., whether the
speaker is male or female), and the factors to model continuous types of variability
(e.g., due to coarticulation or noise). Both types of variability are important for
building accurate models of speech.
It is straightforward to integrate the EM algorithm for factor analysis into the
training of HMMs. Suppose that S = {x_t} represents a sequence of acoustic vectors.
The forward-backward procedure enables one to compute the posterior probability,
γ_t^{sc} = P(s_t = s, c_t = c | S), that the HMM used state s and mixture component c at
time t. The updates for the matrices A^{sc} and Ψ^{sc} (within each state and mixture
component) have essentially the same form as eqs. (9-10), except that now each
observation x_t is weighted by the posterior probability γ_t^{sc}. Additionally, one must
take into account that the mixture components have non-zero means[7]. A complete
derivation of these updates (along with many additional details) will be given in a
longer version of this paper.
Clearly, an important consideration when applying factor analysis to speech is
the choice of acoustic features. A standard choice, and the one we use in our
experiments, is a thirty-nine dimensional feature vector that consists of twelve cepstral coefficients (with first and second derivatives) and the normalized log-energy
(with first and second derivatives). There are known to be correlations[2] between
these features, especially between the different types of coefficients (e.g., cepstrum
and delta-cepstrum). While these correlations have motivated our use of factor
analysis, it is worth emphasizing that the method applies to arbitrary feature vectors. Indeed, whatever features are used to summarize the short-time properties
of speech, one expects correlations to arise from coarticulation, background noise,
speaker idiosynchrasies, etc.
3 Experiments
Continuous density HMMs with diagonal and factored covariance matrices were
trained to recognize alphanumeric strings (e.g., N Z 3 V J 4 E 3 U 2). Highly
Figure 1: Plots of log-likelihood scores and word error rates on the test set versus
the number of parameters per mixture model (divided by the number of features).
The stars indicate models with diagonal covariance matrices; the circles indicate
models with factor analysis. The dashed lines connect the recognizers in table 2.
confusable letters such as B/V, C/Z, and M/N make this a challenging problem
in speech recognition . The training and test data were recorded over a telephone
network and consisted of 14622 and 7255 utterances, respectively. Recognizers
were built from 285 left-to-right HMMs trained by maximum likelihood estimation;
each HMM modeled a context-dependent sub-word unit. Testing was done with a
free grammar network (i .e., no grammar constraints). We ran several experiments,
varying both the number of mixture components and the number of factors. The
goal was to determine the best model of acoustic feature correlations.
Table 1 summarizes the results of these experiments. The columns from left to
right show the number of mixture components, the number of factors, the number
of parameters per mixture model (divided by the feature dimension), the word error
rates (including insertion, deletion, and substitution errors) on the test set, the average
log-likelihood per frame of speech on the test set, and the CPU time to recognize
twenty test utterances (on an SGI R4000). Not surprisingly, the word accuracies
and likelihood scores increase with the number of modeling parameters; likewise,
so do the CPU times. The most interesting comparisons are between models with
the same number of parameters-e.g., four mixture components with no factors
versus two mixture components with two factors. The left graph in figure 1 shows
a plot of the average log-likelihood versus the number of parameters per mixture
model; the stars and circles in this plot indicate models with and without diagonal
covariance matrices. One sees quite clearly from this plot that given a fixed number
of parameters, models with non-diagonal (factored) covariance matrices tend to
have higher likelihoods. The right graph in figure 1 shows a similar plot of the word
error rates versus the number of parameters. Here one does not see much difference;
presumably, because HMMs are such poor models of speech to begin with , higher
likelihoods do not necessarily translate into lower error rates. We will return to this
point later.
It is worth noting that the above experiments used a fixed number of factors per
mixture component. In fact, because the variability of speech is highly context-dependent, it makes sense to vary the number of factors, even across states within
the same HMM . A simple heuristic is to adjust the number of factors depending on
the amount of training data for each state (as determined by an initial segmentation
of the training utterances). We found that this heuristic led to more pronounced
 C    f    C(f+2)    word error (%)    log-likelihood    CPU time (sec)
 1    0       2           16.2              32.9                25
 1    1       3           14.6              34.2                30
 1    2       4           13.7              34.9                30
 1    3       5           13.0              35.3                38
 1    4       6           12.5              35.8                39
 2    0       4           13.4              34.0                30
 2    1       6           12.0              35.1                44
 2    2       8           11.4              35.8                48
 2    3      10           10.9              36.2                61
 2    4      12           10.8              36.6                67
 4    0       8           11.5              34.9                46
 4    1      12           10.4              35.9                80
 4    2      16           10.1              36.5                93
 4    3      20           10.0              36.9               132
 4    4      24            9.8              37.3               153
 8    0      16           10.2              35.6                93
 8    1      24            9.7              36.5               179
 8    2      32            9.6              37.0               226
16    0      32            9.5              36.2               222
Table 1: Results for different recognizers. The columns indicate the number of
mixture components, the number of factors, the number of parameters per mixture
model (divided by the number of features), the word error rates and average log-likelihood scores on the test set, and the CPU time to recognize twenty utterances.
 C    f    C(f+2)    word error (%)    log-likelihood    CPU time (sec)
 1    2       4           12.3              35.4                32
 2    2       8           10.5              36.3                53
 4    2      16            9.6              37.0               108
Table 2: Results for recognizers with variable numbers of factors; f here denotes the
average number of factors per mixture component.
differences in likelihood scores and error rates. In particular, substantial improvements were observed for three recognizers whose HMMs employed an average of
two factors per mixture component; see the dashed lines in figure 1. Table 2 summarizes these results. The reader will notice that these recognizers are extremely
competitive in all aspects of performance (accuracy, memory, and speed) with the
baseline (zero-factor) models in table 1.
4 Discussion
In this paper we have studied the combined use of mixture densities and factor
analysis for speech recognition . This was done in the framework of hidden Markov
modeling, where acoustic features are conditionally modeled by mixtures of Gaussian PDFs. We have shown that mixture densities and factor analysis are complementary means of modeling acoustic correlations. Moreover , when used together ,
they can lead to smaller, faster, and more accurate recognizers than either method
on its own. (Compare the last lines of tables 1 and 2.)
Several issues deserve further investigation. First, we have seen that increases in
likelihood scores do not always correspond to reductions in error rates. (This is
a common occurrence in automatic speech recognition.) We are currently investigating discriminative methods[10] for training HMMs with factor analysis; the idea
here is to optimize an objective function that more directly relates to the goal of
minimizing classification errors. Second, it is important to extend our results to
large vocabulary tasks in speech recognition. The extreme sparseness of data in
these tasks makes factor analysis an appealing strategy for dimensionality reduction. Finally, there are other questions that need to be answered. Given a limited
number of parameters, what is the best way to allocate them among factors and
mixture components? Do the cepstral features used by HMMs throw away informative correlations in the speech signal? Could such correlations be better modeled by
factor analysis? Answers to these questions can only lead to further improvements
in overall performance.
Acknowledgements
We are grateful to A. Ljolje (AT&T Labs), Z. Ghahramani (University of Toronto)
and H. Seung (Bell Labs) for useful discussions. We also thank P. Modi (AT&T
Labs) for providing an initial segmentation of the training utterances.
References
[1] Rabiner, L., and Juang, B. (1993) Fundamentals of Speech Recognition. Englewood Cliffs: Prentice Hall.
[2] Ljolje, A. (1994) The importance of cepstral parameter correlations in speech
recognition. Computer Speech and Language 8:223-232.
[3] Bellegarda, J., and Nahamoo, D. (1990) Tied mixture continuous parameter
modeling for speech recognition. IEEE Transactions on Acoustics, Speech, and
Signal Processing 38:2033-2045.
[4] Rubin, D., and Thayer, D. (1982) EM algorithms for factor analysis. Psychometrika 47:69-76.
[5] Everitt, B. (1984) An introduction to latent variable models. London: Chapman
and Hall.
[6] Hinton, G., Dayan, P., and Revow, M. (1996) Modeling the manifolds of images
of handwritten digits. To appear in IEEE Transactions on Neural Networks.
[7] Ghahramani, Z. and Hinton, G. (1996) The EM algorithm for mixtures of
factor analyzers. University of Toronto Technical Report CRG-TR-96-1.
[8] Simard, P., LeCun, Y., and Denker, J. (1993) Efficient pattern recognition
using a new transformation distance. In J. Cowan, S. Hanson, and C. Giles,
eds. Advances in Neural Information Processing Systems 5:50-58. Cambridge:
MIT Press.
[9] Press, W., Teukolsky, S., Vetterling, W., and Flannery, B. (1992) Numerical
Recipes in C: The Art of Scientific Computing. Cambridge: Cambridge University Press.
[10] Bahl, L., Brown, P., deSouza, P., and Mercer, R. L. (1986) Maximum mutual
information estimation of hidden Markov model parameters for speech recognition. In Proceedings of ICASSP 86: 49-52.
372 | 134 | 40
EFFICIENT PARALLEL LEARNING
ALGORITHMS FOR NEURAL NETWORKS
Alan H. Kramer and A. Sangiovanni-Vincentelli
Department of EECS
U.C. Berkeley
Berkeley, CA 94720
ABSTRACT
Parallelizable optimization techniques are applied to the problem of
learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the Backpropagation algorithm. These results are based on experiments performed on small boolean learning problems and the noisy real-valued
learning problem of hand-written character recognition.
1 INTRODUCTION
The problem of learning in feedforward neural networks has received a great deal
of attention recently because of the ability of these networks to represent seemingly
complex mappings in an efficient parallel architecture. This learning problem can
be characterized as an optimization problem, but it is unique in several respects.
Function evaluation is very expensive. However, because the underlying network is
parallel in nature, this evaluation is easily parallelizable. In this paper, we describe
the network learning problem in a numerical framework and investigate parallel
algorithms for its solution. Specifically, we compare the performance of several
parallelizable optimization techniques to the standard Back-propagation algorithm.
Experimental results show the clear superiority of the numerical techniques.
2 NEURAL NETWORKS
A neural network is characterized by its architecture, its node functions, and its
interconnection weights. In a learning problem, the first two of these are fixed, so
that the weight values are the only free parameters in the system. when we talk
about "weight space" we refer to the parameter space defined by the weights in a
network, thus a "weight vector" w is a vector or a point in weightspace which defines
the values of each weight in the network. We will usually index the components of
a weight vector as Wij, meaning the weight value on the connection from unit i to
unit j. Thus N(w, r), a network function with n output units, is an n-dimensional
vector-valued function defined for any weight vector w and any input vector r:
N(w, r) = [o_1(w, r), o_2(w, r), ..., o_n(w, r)]ᵀ,
where o_i is the ith output unit of the network. Any node j in the network has input
i_j(w, r) = Σ_{i ∈ fanin_j} o_i(w, r) w_ij and output o_j(w, r) = f_j(i_j(w, r)), where f_j(·) is
the node function. The evaluation of N() is inherently parallel and the time to
evaluate N() on a single input vector is O(#layers). If pipelining is used, multiple
input vectors can be evaluated in constant time.
3 LEARNING
The "learning" problem for a neural network refers to the problem of finding a
network function which approximates some desired "target" function T(), defined
over the same set of input vectors as the network function. The problem is simplified
by asking that the network function match the target function on only a finite set of
input vectors, the "training set" R. This is usually done with an error measure. The
most common measure is sum-squared error, which we use to define the "instance
error" between N(w, r) and T(r) at weight vector wand input vector r:
eN,T(w, r)
=
E
! (Ta(r) - o.(w, r?2
= !IIT(r) -
N(w, r)1I2.
ieoutputs
We can now define the "error function" between N() and T() over R as a function
of w:
E_{N,T,R}(w) = Σ_{r ∈ R} e_{N,T}(w, r).
The learning problem is thus reduced to finding a w for which E_{N,T,R}(w) is minimized. If this minimum value is zero then the network function approximates the
target function exactly on all input vectors in the training set. Henceforth, for notational simplicity we will write e() and E() rather than e_{N,T}() and E_{N,T,R}().
4 OPTIMIZATION TECHNIQUES
As we have framed it here, the learning problem is a classic problem in optimization.
More specifically, network learning is a problem of function approximation, where
the approximating function is a finite parameter-based system. The goal is to find
a set of parameter values which minimizes a cost function, which in this case, is a
measure of the error between the target function and the approximating function.
Among the optimization algorithms that can be used to solve this type of problem,
gradient-based algorithms have proven to be effective in a variety of applications
{Avriel, 1976}. These algorithms are iterative in nature, thus Wk is the weight
vector at the kth iteration. Each iteration is characterized by a search direction dk
and a step ak. The weight vector is updated by taking a step in the search direction
as below:
for (k = 0; evaluate(w_k) != CONVERGED; ++k) {
    d_k = determine_search_direction();
    a_k = determine_step();
    w_{k+1} = w_k + a_k * d_k;
}
If d_k is a direction of descent, such as the negative of the gradient, a sufficiently
small step will reduce the value of E(). Optimization algorithms vary in the way
they determine a and d, but otherwise they are structured as above.
5 CONVERGENCE CRITERION
The choice of convergence criterion is important. An algorithm must terminate
when EO has been sufficiently minimized. This may be done with a threshold on
the value of EO, but this alone is not sufficient. In the case where the error surface
contains "bad" local minima, it is possible that the error threshold will be unattainable, and in this case the algorithm will never terminate. Some researchers have
proposed the use of an iteration limit to guarantee termination despite an unattainable error threshold {Fahlman, 1989}. Unfortunately, for practical problems where
this limit is not known a priori, this approach is inapplicable.
A necessary condition for w* to be a minimum, either local or global, is that the
gradient g(w*) = ∇E(w*) = 0. Hence, the most usual convergence criterion for
optimization algorithms is ‖g(w_k)‖ ≤ ε, where ε is a sufficiently small gradient
threshold. The downside of using this as a convergence test is that, for successful
trials, learning times will be longer than they would be in the case of an error threshold. Error tolerances are usually specified in terms of an acceptable bit error, and
a threshold on the maximum bit error (MBE) is a more appropriate representation
of this criterion than is a simple error threshold. For this reason we have chosen
a convergence criterion consisting of a gradient threshold and an MBE threshold
(τ), terminating when ‖g(w_k)‖ < ε or MBE(w_k) < τ, where MBE() is defined as:
MBE(w_k) = max_{r ∈ R} ( max_{i ∈ outputs} (1/2)(T_i(r) - o_i(w_k, r))² ).
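The test can be written directly from the two thresholds above; this is a sketch, and the array shapes are assumptions.

import numpy as np

def converged(grad, targets, outputs, eps=1e-8, tau=0.1):
    # grad: flattened gradient g(w_k); targets, outputs: (num patterns, num output units)
    mbe = np.max(0.5 * (targets - outputs) ** 2)   # maximum bit error over R and outputs
    return np.linalg.norm(grad) < eps or mbe < tau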
6 STEEPEST DESCENT
Steepest Descent is the most classical gradient-based optimization algorithm. In
this algorithm the search direction d_k is always the negative of the gradient - the
direction of steepest descent. For network learning problems the computation of
g(w), the gradient of E(w), is straightforward:
g(w) = ∇E(w) = [ d/dw Σ_{r ∈ R} e(w, r) ]ᵀ = Σ_{r ∈ R} ∇e(w, r),
where
∇e(w, r) = [ ∂e(w,r)/∂w_11, ∂e(w,r)/∂w_12, ..., ∂e(w,r)/∂w_mn ]ᵀ,
with ∂e(w,r)/∂w_ij = o_i(w, r) δ_j(w, r), where for output units
δ_j(w, r) = f'_j(i_j(w, r)) (o_j(w, r) - T_j(r)),
while for all other units
δ_j(w, r) = f'_j(i_j(w, r)) Σ_{k ∈ fanout_j} δ_k(w, r) w_jk.
The evaluation of g is thus almost dual to the evaluation of N; while the latter feeds
forward through the net, the former feeds back. Both computations are inherently
parallelizable and of the same complexity.
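The forward and backward passes above can be written compactly for a layered network with logistic node functions; this is a sketch of the recursions, not the authors' Connection Machine implementation, and biases are omitted for brevity.

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def instance_gradient(weights, r, target):
    # weights: list of (fan_in, fan_out) matrices, one per layer
    outputs = [np.asarray(r, dtype=float)]
    for W in weights:                                  # forward: i_j = sum_i o_i w_ij, o_j = f(i_j)
        outputs.append(logistic(outputs[-1] @ W))
    delta = outputs[-1] * (1 - outputs[-1]) * (outputs[-1] - target)   # output-unit deltas
    grads = []
    for W, o in zip(reversed(weights), reversed(outputs[:-1])):
        grads.append(np.outer(o, delta))               # de/dw_ij = o_i * delta_j
        delta = o * (1 - o) * (W @ delta)              # delta_j = f'(i_j) sum_k delta_k w_jk
    return grads[::-1]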
The method of Steepest Descent determines the step a_k by inexact linesearch, meaning that it minimizes E(w_k + a_k d_k). There are many ways to perform this computation, but they are all iterative in nature and thus involve the evaluation of
E(w_k + a_k d_k) for several values of a_k. As each evaluation requires a pass through
the entire training set, this is expensive. Curve fitting techniques are employed to
reduce the number of iterations needed to terminate a linesearch. Again, there are
many ways to curve fit. We have employed the method of false position and used
the Wolfe Test to terminate a line search {Luenberger, 1986}. In practice we find
that the typical linesearch in a network learning problem terminates in 2 or 3 iterations.
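A simplified stand-in for this step computation is sketched below; it uses plain backtracking with a sufficient-decrease test rather than the false-position/Wolfe search the authors use.

def backtracking_step(E, w, d, g, a0=1.0, c=1e-4, shrink=0.5, max_iter=20):
    # E: error function of the weight vector; d: descent direction; g: gradient at w
    a, E0, slope = a0, E(w), float(g @ d)      # slope < 0 for a descent direction
    for _ in range(max_iter):
        if E(w + a * d) <= E0 + c * a * slope:
            return a
        a *= shrink
    return a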
7 PARTIAL CONJUGATE GRADIENT METHODS
Because linesearch guarantees that E(w_{k+1}) < E(w_k), the Steepest Descent algorithm can be proven to converge for a large class of problems {Luenberger, 1986}.
Unfortunately, its convergence rate is only linear and it suffers from the problem
of "cross-stitching" {Luenberger, 1986}, so it may require a large number of iterations. One way to guarantee a faster convergence rate is to make use of higher
order derivatives. Others have investigated the performance of algorithms of this
class on network learning tasks, with mixed results {Becker, 1989}. We are not
interested in such techniques because they are less parallelizable than the methods
we have pursued and because they are more expensive, both computationally and
in terms of storage requirements. Because we are implementing our algorithms on
the Connection Machine, where memory is extremely limited, this last concern is
of special importance. We thus confine our investigation to algorithms that require
explicit evaluation only of g, the first derivative.
Conjugate gradient techniques take advantage of second order information to avoid
the problem of cross-stitching without requiring the estimation and storage of the
Hessian (matrix of second-order partials). The search direction is a combination of
the current gradient and the previous search direction:
d_{k+1} = -g_{k+1} + β_k d_k.
There are various rules for determining β_k; we have had the most success with the
Polak-Ribiere rule, where β_k is determined from g_{k+1} and g_k according to
β_k = (g_{k+1} - g_k)ᵀ g_{k+1} / (g_kᵀ g_k).
As in the Steepest Descent algorithm, a_k is determined by linesearch. With a simple reinitialization procedure partial conjugate gradient techniques are as robust as
the method of Steepest Descent {Powell, 1977}; in practice we find that the Polak-Ribiere method requires far fewer iterations than Steepest Descent.
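The direction update can be sketched as follows; the restart rule shown (fall back to steepest descent periodically or whenever the new direction is not a descent direction) is one simple choice, not necessarily the procedure of {Powell, 1977}.

import numpy as np

def polak_ribiere_direction(g_new, g_old, d_old, k, restart_every=50):
    beta = float((g_new - g_old) @ g_new) / float(g_old @ g_old)
    d = -g_new + beta * d_old
    if k % restart_every == 0 or float(g_new @ d) >= 0.0:
        return -g_new                     # reinitialize with the steepest-descent direction
    return d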
8 BACKPROPAGATION
The Batch Back-propagation algorithm {Rumelhart, 1986} can be described in
terms of our optimization framework. Without momentum, the algorithm is very
similar to the method of Steepest Descent in that d_k = -g_k. Rather than being
determined by a linesearch, a, the "learning rate", is a fixed user-supplied constant.
With momentum, the algorithm is similar to a partial conjugate gradient method,
as d_{k+1} = -g_{k+1} + β d_k, though again β, the "momentum term", is fixed. On-line
Back-propagation is a variation which makes a change to the weight vector following
the presentation of each input vector: d_k = -∇e(w_k, r_k).
Though very simple, we can see that this algorithm is numerically unsound for several reasons. Because β is fixed, d_k may not be a descent direction, and in this
case any a will increase E(). Even if d_k is a direction of descent (as is the case
for Batch Back-propagation without momentum), a may be large enough to move
from one wall of a "valley" to the opposite wall, again resulting in an increase in
E(). Because the algorithm can not guarantee that E() is reduced by successive
iterations, it cannot be proven to converge. In practice, finding a value for a which
results in fast progress and stable behavior is a black art, at best.
9 WEIGHT DECAY
One of the problems of performing gradient descent on the "error surface" is that
minima may be at infinity. (In fact, for boolean learning problems all minima
are at infinity.) Thus an algorithm may have to travel a great distance through
weightspace before it converges. Many researchers have found that weight decay is
useful for reducing learning times {Hinton, 1986}. This technique can be viewed as
adding a term corresponding to the length of the weight vector to the cost function;
this modifies the cost surface in a way that bounds all the minima. Rather than
minimizing on the error surface, minimization is performed on the surface with cost
function
C(w) = E(w) + (γ/2)‖w‖²,
where γ, the relative weight cost, is a problem-specific parameter. The gradient for
this cost function is g(w) = ∇C(w) = ∇E(w) + γw, and for any step a_k, the effect
of γ is to "decay" the weight vector by a factor of (1 - a_k γ):
w_{k+1} = (1 - a_k γ) w_k - a_k ∇E(w_k).
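Written out in code, one step with weight decay combines the decay factor and the usual gradient step (a one-line restatement of the update above):

def weight_decay_step(w, grad_E, a, gamma):
    # w_{k+1} = (1 - a * gamma) * w_k - a * grad E(w_k)
    return (1.0 - a * gamma) * w - a * grad_E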
10 PARALLEL IMPLEMENTATION ISSUES
We have emphasized the parallelism inherent in the evaluation of EO and gO. To
be efficient, any learning algorithm must exploit this parallelism. Without momentum, the Back-propagation algorithm is the simplest gradient descent technique, as
it requires the storage of only a single vector, gk. Momentum requires the storage of
only one additional vector, dk-l. The Steepest Descent algorithm also requires the
storage of only a single vector more than Back-propagation without momentum:
dk, which is needed for linesearch. In addition to dk, the Polak-Ribiere method
requires the storage of two additional vectors: dk-l and gk-l. The additional storage requirements of the optimization techniques are thus minimal. The additional
computational requirements are essentially those needed for linesearch - a single dot
product and a single broadcast per iteration. These operations are parallelizable
(log time on the Connection Machine) so the additional computation required by
these algorithms is also minimal, especially since computation time is dominated
by the evaluation of EO and gO. Both the Steepest Descent and Polak-Ribiere
algorithms are easily parallelizable. We have implemented these algorithms, as well
as Back-propagation, on a Connection Machine {Hillis, 1986}.
11 EXPERIMENTAL RESULTS - BOOLEAN LEARNING
We have compared the performance of the Polak-Ribiere (P-R), Steepest Descent
(S-D), and Batch Back-propagation (B-B) algorithms on small boolean learning
problems. In all cases we have found the Polak-Ribiere algorithm to be significantly
more efficient than the others. All the problems we looked at were based on three-layer networks (1 hidden layer) using the logistic function for all node functions.
Initial weight vectors were generated by randomly choosing each component from
(+r, -r). γ is the relative weight cost, and ε and τ define the convergence test.
Learning times are measured in terms of epochs (sweeps through the training set).
The encoder problem is easily scaled and has no bad local minima (assuming sufficient hidden units: log(#inputs)). All Back-propagation trials used a = 1 and
β = 0; these values were found to work about as well as any others. Table 1 summarizes the results. Standard deviations for all data were insignificant (< 25%).
TABLE 1. Encoder Results
Encoder
Problem
Parameter Values
num
trials
r
10-5-10
10-5-10
10-5-10
100
100
100
1.0
1.0
1.0
1e-4
1e-4
1e-4
1e-1
2e-2
7e-4
1e-8
1e-8
1e-8
10-5-10
10-5-10
10-5-10
4-2-4
8-3-8
16-4-16
32-5-32
64-6-64
100
100
100
100
100
100
25
25
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1e-4
1e-4
1e-4
1e-4
1e-4
1e-4
1e-4
1e-4
0.0
0.0
0.0
0.1
0.1
0.1
0.1
0.1
1e-4
1e-6
1e-8
1e-8
1e-8
1e-8
1e-8
1e-8
γ
τ
ε
Average Epochs to Convergence
P-R
S-D
B-B
196.93
63.71
109.06
71.27
142.31
299.55
104.70
431.43
3286.20
279.52
353.30
417.90
36.92
67.63
121.30
208.60
405.60
1490.00
2265.00
2863.00
56.90
194.80
572.80
1379.40
4187.30
13117.00
24910.00
35260.00
179.95
594.76
990.33
1826.15
> 10000
The parity problem is interesting because it is also easily scaled and its weightspace
is known to contain bad local minima. To report learning times for problems with
bad local minima, we use expected epochs to solution, EES. This measure makes
sense especially if one considers an algorithm with a restart procedure: if the algorithm terminates in a bad local minimum it can restart from a new random weight
vector. EES can be estimated from a set of independent learning trials as the
ratio of total epochs to successful trials. The results of the parity experiments are
summarized in table 2. Again, the optimization techniques were more efficient than
Back-propagation. This fact is most evident in the case of bad trials. All trials used
r = 1, γ = 1e-4, τ = 0.1 and ε = 1e-8. Back-propagation used a = 1 and β = 0.
TABLE 2. Parity Results
Parity   alg   trials   %succ   avg succ (s.d.)   avg unsucc (s.d.)      EES
2-2-1    P-R     100      72%        73   (43)         232    (54)       163
2-2-1    S-D     100      80%        95  (115)        3077   (339)       864
2-2-1    B-B     100      78%       684 (1460)       47915  (5505)     14197
4-4-1    P-R     100      61%       352  (122)         453   (117)       641
4-4-1    S-D     100      99%      2052 (1753)       18512     (-)      2324
4-4-1    B-B     100      71%      8704 (8339)       95345 (11930)     48430
8-8-1    P-R      16      50%      1716  (748)         953   (355)      2669
8-8-1    S-D       6       -       >10000             >10000           >10000
8-8-1    B-B       2       -      >100000            >100000          >100000
12 LETTER RECOGNITION
One criticism of batch-based gradient descent techniques is that for large real-world,
real-valued learning problems, they will be less efficient than On-line Back-propagation. The task of characterizing hand drawn examples of the 26 capital
letters was chosen as a good problem to test this, partly because others have used
this problem to demonstrate that On-line Back-propagation is more efficient than
Batch Back-propagation {Le Cun, 1986}. The experimental setup was as follows:
Characters were hand-entered in an 80 x 120 pixel window with a 5 pixel-wide brush
(mouse controlled). Because the objective was to have many noisy examples of the
same input pattern, not to learn scale and orientation invariance, all characters were
roughly centered and roughly the full size of the window. Following character entry,
the input window was symbolically gridded to define 100 8 x 12 pixel regions. Each
of these regions was an input and the percentage of "on" pixels in the region was
its value. There were thus 100 inputs, each of which could have any of 96 (8 x 12)
distinct values. 26 outputs were used to represent a one-hot encoding of the 26
letters, and a network with a single hidden layer containing 10 units was chosen.
The network thus had a 100-10-26 architecture; all nodes used the logistic function.
A training set consisting of 64 distinct sets of the 26 upper case letters was created
by hand in the manner described. 25 "A" vectors are shown in figure 1. This
large training set was recursively split in half to define a series of 6 successively
larger training sets, R_1 to R_6, where R_0 is the smallest training set consisting
of 1 of each letter and R_i contains R_{i-1} and 2^{i-1} new letter sets. A testing set
consisting of 10 more sets of hand-entered characters was also created to measure
network performance. For each R_i, we compared naive learning to incremental
learning, where naive learning means initializing w_0^{(i)} randomly and incremental
learning means setting w_0^{(i)} to w_*^{(i-1)} (the solution weight vector to the learning
problem based on R_{i-1}). The incremental epoch count for the problem based on
R_i was normalized to the number of epochs needed starting from w_*^{(i-1)} plus 1/2 the
number of epochs taken by the problem based on R_{i-1} (since |R_{i-1}| = (1/2)|R_i|). This
normalized count thus reflects the total number of relative epochs needed to get
from a naive network to a solution incrementally.
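The normalization is a simple recursion, norm_i = inc_i + (1/2) * norm_{i-1}; applied to the INC column of Table 3 below, it matches the NORM column up to rounding.

def normalized_epochs(inc):
    norm = [float(inc[0])]
    for e in inc[1:]:
        norm.append(e + 0.5 * norm[-1])
    return norm

print(normalized_epochs([95, 83, 63, 14, 191, 153, 46]))
# [95.0, 130.5, 128.25, 78.125, 230.06..., 268.03..., 180.01...]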
Both Polak-Ribiere and On-line Back-propagation were tried on all problems. Table
3 contains only results for the Polak-Ribiere method because no combination of
weight-decay and learning rate were found for which Back-propagation could find a
solution after 1000 times the number of iterations taken by Polak-Ribiere, although
values of γ from 0.0 to 0.001 and values for a from 1.0 to 0.001 were tried. All
problems had r = 1, γ = 0.01, ε = 1e-8 and τ = 0.1. Only a single trial was done
for each problem. Performance on the test set is shown in the last column.
FIGURE 1. 25 "A"s
TABLE 3. Letter Recognition

prob     Learning Time (epochs)        Test
set       INC     NORM     NAIV          %
R0         95       95       95        53.5
R1         83      130       85        69.2
R2         63      128      271        80.4
R3         14       78      388        83.4
R4        191      230     1129        92.3
R5        153      268     1323        98.1
R6         46      180      657        99.6
The incremental learning paradigm was very effective at reducing learning times.
Even non-incrementally, the Polak-Ribiere method was more efficient than on-line
Back-propagation on this problem. The network with only 10 hidden units was
sufficient, indicating that these letters can be encoded by a compact set of features.
13 CONCLUSIONS
Describing the computational task of learning in feedforward neural networks as
an optimization problem allows exploitation of the wealth of mathematical programming algorithms that have been developed over the years. We have found
that the Polak-Ribiere algorithm offers superior convergence properties and significant speedup over the Back-propagation algorithm. In addition, this algorithm is
well-suited to parallel implementation on massively parallel computers such as the
Connection Machine. Finally, incremental learning is a way to increase the efficiency
of optimization techniques when applied to large real-world learning problems such
as that of handwritten character recognition.
Acknowledgments
The authors would like to thank Greg Sorkin for helpful discussions. This work was
supported by the Joint Services Educational Program grant #482427-25304.
References
{Avriel, 1976} Mordecai Avriel. Nonlinear Programming, Analysis and Methods.
Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1976.
{Becker, 1989} Sue Becker and Yan Le Cun. Improving the Convergence of Back-Propagation Learning with Second Order Methods. In Proceedings of the 1988
Connectionist Models Summer School, pages 29-37, Morgan Kaufmann, San
Mateo Calif., 1989.
{Fahlman, 1989} Scott E. Fahlman. Faster Learning Variations on Back-Propagation:
An Empirical Study. In Proceedings of the 1988 Connectionist Models Summer School, pages 38-51, Morgan Kaufmann, San Mateo Calif., 1989.
{Hillis, 1986} William D. Hillis. The Connection Machine. MIT Press, Cambridge,
Mass, 1986.
{Hinton, 1986} G. E. Hinton. Learning Distributed Representations of Concepts.
In Proceedings of the Cognitive Science Society, pages 1-12, Erlbaum, 1986.
{Kramer, 1989} Alan H. Kramer. Optimization Techniques for Neural Networks.
Technical Memo #UCB-ERL-M89-1, U.C. Berkeley Electronics Research Laboratory, Berkeley Calif., Jan. 1989.
{Le Cun, 1986} Yan Le Cun. HLM: A Multilayer Learning Network. In Proceedings of the 1986 Connectionist Models Summer School, pages 169-177,
Carnegie-Mellon University, Pittsburgh, Penn., 1986.
{Luenberger, 1986} David G. Luenberger. Linear and Nonlinear Programming.
Addison-Wesley Co., Reading, Mass, 1986.
{Powell, 1977} M. J. D. Powell. "Restart Procedures for the Conjugate Gradient
Method", Mathematical Programming 12 (1977) 241-254
{Rumelhart, 1986} David E Rumelhart, Geoffrey E. Hinton, and R. J. Williams.
Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1:
Foundations, pages 318-362, MIT Press, Cambridge, Mass., 1986
373 | 1,340 | Generalization in decision trees and DNF:
Does size matter?
Mostefa Golea¹, Peter L. Bartlett¹, Wee Sun Lee² and Llew Mason¹
1 Department of Systems Engineering
Research School of Information
Sciences and Engineering
Australian National University
Canberra, ACT, 0200, Australia
2 School of Electrical Engineering
University College UNSW
Australian Defence Force Academy
Canberra, ACT, 2600, Australia
Abstract
Recent theoretical results for pattern classification with thresholded real-valued functions (such as support vector machines, sigmoid networks, and boosting) give bounds on misclassification
probability that do not depend on the size of the classifier, and
hence can be considerably smaller than the bounds that follow from
the VC theory. In this paper, we show that these techniques can
be more widely applied, by representing other boolean functions
as two-layer neural networks (thresholded convex combinations of
boolean functions). For example, we show that with high probability any decision tree of depth no more than d that is consistent with
m training examples has misclassification probability no more than
O(((1/m)(Neff VCdim(U) log² m log d))^{1/2}), where U is the class of
node decision functions, and Neff ≤ N can be thought of as the
effective number of leaves (it becomes small as the distribution on
the leaves induced by the training data gets far from uniform).
This bound is qualitatively different from the VC bound and can
be considerably smaller.
We use the same technique to give similar results for DNF formulae.
? Author to whom correspondence should be addressed
1
INTRODUCTION
Decision trees are widely used for pattern classification [2, 7]. For these problems,
results from the VC theory suggest that the amount of training data should grow
at least linearly with the size of the tree[4, 3]. However, empirical results suggest
that this is not necessary (see [6, 10]). For example, it has been observed that the
error rate is not always a monotonically increasing function of the tree size[6].
To see why the size of a tree is not always a good measure of its complexity, consider
two trees, A with N_A leaves and B with N_B leaves, where N_B ≫ N_A. Although A
is larger than B, if most of the classification in A is carried out by very few leaves
and the classification in B is equally distributed over the leaves, intuition suggests
that A is actually much simpler than B, since tree A can be approximated well by
a small tree with few leaves. In this paper, we formalize this intuition.
We give misclassification probability bounds for decision trees in terms of a new
complexity measure that depends on the distribution on the leaves that is induced
by the training data, and can be considerably smaller than the size of the tree.
These results build on recent theoretical results that give misclassification probability bounds for thresholded real-valued functions, including support vector machines,
sigmoid networks, and boosting (see [1, 8, 9]), that do not depend on the size of the
classifier. We extend these results to decision trees by considering a decision tree as
a thresholded convex combination of the leaf functions (the boolean functions that
specify, for a given leaf, which patterns reach that leaf). We can then apply the
misclassification probability bounds for such classifiers. In fact, we derive and use
a refinement of the previous bounds for convex combinations of base hypotheses,
in which the base hypotheses can come from several classes of different complexity,
and the VC-dimension of the base hypothesis class is replaced by the average (under the convex coefficients) of the VC-dimension of these classes. For decision trees,
the bounds we obtain depend on the effective number of leaves, a data dependent
quantity that reflects how uniformly the training data covers the tree's leaves. This
bound is qualitatively different from the VC bound, which depends on the total
number of leaves in the tree.
In the next section, we give some definitions and describe the techniques used. We
present bounds on the misclassification probability of a thresholded convex combination of boolean functions from base hypothesis classes, in terms of a misclassification
margin and the average VC-dimension of the base hypotheses. In Sections 3 and 4,
we use this result to give error bounds for decision trees and disjunctive normal
form (DNF) formulae.
2
GENERALIZATION ERROR IN TERMS OF MARGIN
AND AVERAGE COMPLEXITY
We begin with some definitions. For a class H of {-1,1}-valued functions defined on the input space X, the convex hull co(H) of H is the set of [-1,1]-valued functions of the form Σ_i a_i h_i, where a_i ≥ 0, Σ_i a_i = 1, and h_i ∈ H. A function in co(H) is used for classification by composing it with the threshold function sgn: ℝ → {-1,1}, which satisfies sgn(α) = 1 iff α ≥ 0. So f ∈ co(H) makes a mistake on the pair (x,y) ∈ X × {-1,1} iff sgn(f(x)) ≠ y. We assume that labelled examples (x,y) are generated according to some probability distribution D on X × {-1,1}, and we let P_D[E] denote the probability under D of an event E. If S is a finite subset of Z, we let P_S[E] denote the empirical probability of E (that is, the proportion of points in S that lie in E). We use E_D[·] and E_S[·] to denote expectation in a similar way. For a function class H of {-1,1}-valued functions defined on the input space X, the growth function and VC dimension of H will be denoted by Π_H(m) and VCdim(H) respectively.
In [8], Schapire et al give the following bound on the misclassification probability
of a thresholded convex combination of functions , in terms of the proportion of
training data that is labelled to the correct side of the threshold by some margin.
(Notice that P_D[sgn(f(x)) ≠ y] ≤ P_D[yf(x) ≤ 0].)
Theorem 1 ([8]) Let D be a distribution on X × {-1,1}, H a hypothesis class with VCdim(H) = d < ∞, and δ > 0. With probability at least 1 − δ over a training set S of m examples chosen according to D, every function f ∈ co(H) and every θ > 0 satisfy
P_D[yf(x) ≤ 0] ≤ P_S[yf(x) ≤ θ] + O( (1/√m) ( d log²(m/d)/θ² + log(1/δ) )^{1/2} ).
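To make the quantities in this bound concrete, the following sketch (illustrative helper names, assuming f is given by its base hypotheses and convex coefficients) computes the empirical margin error P_S[yf(x) ≤ θ] that appears on the right-hand side.

import numpy as np

def margin_error(base_outputs, coeffs, y, theta):
    """Empirical margin error P_S[y f(x) <= theta].

    base_outputs: (m, T) array, base_outputs[p, i] = h_i(x_p) in {-1, +1}
    coeffs:       (T,) convex coefficients a_i (a_i >= 0, sum to 1)
    y:            (m,) labels in {-1, +1}
    """
    f = base_outputs @ coeffs          # f(x_p) = sum_i a_i h_i(x_p), lies in [-1, 1]
    margins = y * f                    # y f(x); positive iff correctly classified
    return np.mean(margins <= theta)   # fraction of training points with margin at most theta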
In Theorem 1, all of the base hypotheses in the convex combination f are elements of a single class H with bounded VC-dimension. The following theorem generalizes this result to the case in which these base hypotheses may be chosen from any of k classes, H_1, ..., H_k, which can have different VC-dimensions. It also gives a related
result that shows the error decreases to twice the error estimate at a faster rate.
Theorem 2 Let D be a distribution on X × {-1,1}, H_1, ..., H_k hypothesis classes with VCdim(H_i) = d_i, and δ > 0. With probability at least 1 − δ over a training set S of m examples chosen according to D, every function f ∈ co(∪_{i=1}^k H_i) and every θ > 0 satisfy both
P_D[yf(x) ≤ 0] ≤ P_S[yf(x) ≤ θ] + O( (1/√m) ( (1/θ²)(d̄ log m + log k) log(mθ²/d̄) + log(1/δ) )^{1/2} ),
P_D[yf(x) ≤ 0] ≤ 2P_S[yf(x) ≤ θ] + O( (1/m) ( (1/θ²)(d̄ log m + log k) log(mθ²/d̄) + log(1/δ) ) ),
where d̄ = Σ_i a_i d_{j_i}, and the a_i and j_i ∈ {1, ..., k} are defined by f = Σ_i a_i h_i and h_i ∈ H_{j_i}.
Proof sketch: We shall sketch only the proof of the first inequality of the theorem. The proof closely follows the proof of Theorem 1 (see [8]). We consider a number of approximating sets of the form C_{N,l} = { (1/N) Σ_{i=1}^N ĥ_i : ĥ_i ∈ H_{l_i} }, where l = (l_1, ..., l_N) ∈ {1, ..., k}^N and N ∈ ℕ. Define C_N = ∪_l C_{N,l}.
For a given f = Σ_i a_i h_i from co(∪_{i=1}^k H_i), we shall choose an approximation g ∈ C_N by choosing ĥ_1, ..., ĥ_N independently from {h_1, h_2, ...}, according to the distribution defined by the coefficients a_i. Let Q denote this distribution on C_N. As in [8], we can take the expectation under this random choice of g ∈ C_N to show that, for any θ > 0, P_D[yf(x) ≤ 0] ≤ E_{g∼Q}[P_D[yg(x) ≤ θ/2]] + exp(−Nθ²/8). Now, for a given l ∈ {1, ..., k}^N, the probability that there is a g in C_{N,l} and a θ > 0 for which P_D[yg(x) ≤ θ/2] > P_S[yg(x) ≤ θ/2] + ε_{N,l} is at most 8(N+1) ∏_{i=1}^N (2em/d_{l_i})^{d_{l_i}} exp(−mε²_{N,l}/32). Applying the union bound (over the values of l), taking expectation over g ∼ Q, and setting ε_{N,l} = ( (1/m) ln( 8(N+1) ∏_{i=1}^N (2em/d_{l_i})^{d_{l_i}} k^N / δ_N ) )^{1/2} shows that, with probability at least 1 − δ_N, every f and θ > 0 satisfy P_D[yf(x) ≤ 0] ≤ E_g[P_S[yg(x) ≤ θ/2]] + E_g[ε_{N,l}]. As above, we can bound the probability inside the first expectation in terms of P_S[yf(x) ≤ θ]. Also, Jensen's inequality implies that E_g[ε_{N,l}] ≤ ( (1/m)( ln(8(N+1)/δ_N) + N ln k + N Σ_i a_i d_{j_i} ln(2em) ) )^{1/2}. Setting δ_N = δ/(N(N+1)) and N = ⌈(1/θ²) ln(mθ²)⌉ gives the result. ∎
Theorem 2 gives misclassification probability bounds only for thresholded convex
combinations of boolean functions. The key technique we use in the remainder of the
paper is to find representations in this form (that is, as two-layer neural networks)
of more arbitrary boolean functions. We have some freedom in choosing the convex
coefficients, and this choice affects both the error estimate P_S[yf(x) ≤ θ] and the average VC-dimension d̄. We attempt to choose the coefficients and the margin θ
so as to optimize the resulting bound on misclassification probability. In the next
two sections, we use this approach to find misclassification probability bounds for
decision trees and DNF formulae.
3
DECISION TREES
A two-class decision tree T is a tree whose internal decision nodes are labeled with boolean functions from some class U and whose leaves are labeled with class labels from {-1, +1}. For a tree with N leaves, define the leaf functions h_i : X → {-1,1} by h_i(x) = 1 iff x reaches leaf i, for i = 1, ..., N. Note that h_i is the conjunction of all tests on the path from the root to leaf i.
For a sample S and a tree T, let p_i = P_S[h_i(x) = 1]. Clearly, P = (p_1, ..., p_N) is a probability vector. Let σ_i ∈ {-1, +1} denote the class assigned to leaf i. Define the class of leaf functions for leaves up to depth j as
H_j = { h : h = u_1 ∧ u_2 ∧ ... ∧ u_r, r ≤ j, u_i ∈ U }.
It is easy to show that VCdim(H_j) ≤ 2j VCdim(U) ln(2ej). Let d_i denote the depth of leaf i, so h_i ∈ H_{d_i}, and let d = max_i d_i.
The boolean function implemented by a decision tree T can be written as a
thresholded convex combination of the form T(x) = sgn(f(x)), where f(x) = Σ_{i=1}^N w_i σ_i (h_i(x) + 1)/2 = Σ_{i=1}^N w_i σ_i h_i(x)/2 + Σ_{i=1}^N w_i σ_i/2, with w_i > 0 and Σ_{i=1}^N w_i = 1. (To be precise, we need to enlarge the classes H_j slightly to be closed under negation. This does not affect the results by more than a constant.) We first
assume that the tree is consistent with the training sample. We will show later how
the results extend to the inconsistent case.
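As a small illustration of this construction (a sketch only; array names are hypothetical), the thresholded convex combination reproduces the tree's prediction, since exactly one leaf function is active for any input:

import numpy as np

def tree_prediction(leaf_reached, leaf_labels, weights):
    """T(x) = sgn(f(x)) with f(x) = sum_i w_i sigma_i (h_i(x) + 1)/2.

    leaf_reached: (N,) boolean vector, True only for the leaf reached by x
    leaf_labels:  (N,) leaf labels sigma_i in {-1, +1}
    weights:      (N,) convex coefficients w_i (w_i > 0, sum to 1)
    """
    h = np.where(leaf_reached, 1.0, -1.0)                 # leaf functions h_i(x) in {-1, +1}
    f = np.sum(weights * leaf_labels * (h + 1.0) / 2.0)   # only the reached leaf contributes
    return 1 if f >= 0 else -1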
The second inequality of Theorem 2 shows that, for fixed δ > 0, there is a constant c such that, for any distribution D, with probability at least 1 − δ over the sample S we have P_D[T(x) ≠ y] ≤ 2P_S[yf(x) ≤ θ] + (1/θ²) Σ_{i=1}^N w_i d_i B, where B = (c/m) VCdim(U) log² m log d. Different choices of the w_i's and of θ will yield different estimates of the error rate of T. We can assume (wlog) that p_1 ≥ ... ≥ p_N. A natural choice is w_i = p_i and p_{j+1} ≤ θ < p_j for some j ∈ {1, ..., N}, which gives
P_D[T(x) ≠ y] ≤ 2 Σ_{i=j+1}^N p_i + d̄B/θ²,   (1)
where d̄ = Σ_{i=1}^N p_i d_i. We can optimize this expression over the choices of j ∈ {1, ..., N} and θ to give a bound on the misclassification probability of the tree.
Let ρ(P, U) = Σ_{i=1}^N (p_i − 1/N)² be the quadratic distance between the probability vector P = (p_1, ..., p_N) and the uniform probability vector U = (1/N, 1/N, ..., 1/N). Define Neff ≡ N(1 − ρ(P, U)). The parameter Neff is a measure of the effective number of leaves in the tree.
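As a concrete illustration of this quantity (a minimal sketch; the empirical leaf probabilities are assumed to be given), Neff shrinks towards 1 when a few leaves receive most of the training data and equals N when the induced distribution is uniform:

import numpy as np

def effective_num_leaves(p):
    """Neff = N(1 - rho(P, U)), with rho the quadratic distance to the uniform vector."""
    p = np.asarray(p, dtype=float)      # empirical leaf probabilities p_i, summing to 1
    N = len(p)
    rho = np.sum((p - 1.0 / N) ** 2)
    return N * (1.0 - rho)

print(effective_num_leaves([0.25, 0.25, 0.25, 0.25]))   # 4.0: all leaves equally used
print(effective_num_leaves([0.97, 0.01, 0.01, 0.01]))   # about 1.24: far fewer effective leaves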
Theorem 3 For a fixed δ > 0, there is a constant c that satisfies the following. Let D be a distribution on X × {-1,1}. Consider the class of decision trees of depth up to d, with decision functions in U. With probability at least 1 − δ over the training set S (of size m), every decision tree T that is consistent with S has
P_D[T(x) ≠ y] ≤ c ( Neff VCdim(U) log² m log d / m )^{1/2},
where Neff is the effective number of leaves of T.
Proof: Supposing that θ ≥ (d̄/N)^{1/2}, we optimize (1) by choice of θ. If the chosen θ is actually smaller than (d̄/N)^{1/2} then we show that the optimized bound still holds by a standard VC result. If θ ≥ (d̄/N)^{1/2} then Σ_{i=j+1}^N p_i ≤ θ² Neff/d̄. So (1) implies that P_D[T(x) ≠ y] ≤ 2θ² Neff/d̄ + d̄B/θ². The optimal choice of θ is then (d̄²B/Neff)^{1/4}. So if (d̄²B/Neff)^{1/4} ≥ (d̄/N)^{1/2}, we have the result. Otherwise, the upper bound we need to prove satisfies 2(2Neff B)^{1/2} > 2NB, and this result is implied by standard VC results using a simple upper bound for the growth function of the class of decision trees with N leaves. ∎
Thus the parameters that quantify the complexity of a tree are: a) the complexity
of the test function class U, and b) the effective number of leaves Neff. The effective
number of leaves can potentially be much smaller than the total number of leaves
in the tree [5]. Since this parameter is data-dependent, the same tree can be simple for one set of p_i's and complex for another set of p_i's.
For trees that are not consistent with the training data, the procedure to estimate
the error rate is similar. By defining q_i = P_S[yσ_i = −1 | h_i(x) = 1] and p'_i = p_i(1 − q_i)/(1 − P_S[T(x) ≠ y]) we obtain the following result.
Theorem 4 For a fixed δ > 0, there is a constant c that satisfies the following. Let D be a distribution on X × {-1,1}. Consider the class of decision trees of depth up to d, with decision functions in U. With probability at least 1 − δ over the training set S (of size m), every decision tree T has
P_D[T(x) ≠ y] ≤ P_S[T(x) ≠ y] + c ( Neff VCdim(U) log² m log d / m )^{1/3},
where c is a universal constant, and Neff = N(1 − ρ(P', U)) is the effective number of leaves of T.
Notice that this definition of Neff generalizes the definition given before Theorem 3.
4
DNF AS THRESHOLDED CONVEX COMBINATIONS
A DNF formula defined on {-1,1}^n is a disjunction of terms, where each term is a conjunction of literals and a literal is either a variable or its negation. For a given DNF formula g, we use N to denote the number of terms in g, t_i to represent the i-th term in g, L_i to represent the set of literals in t_i, and N_i the size of L_i. Each term t_i can be thought of as a member of the class H_{N_i}, the set of monomials with N_i literals. Clearly, |H_{N_i}| = (2n choose N_i). The DNF g can be written as a thresholded convex combination of the form g(x) = −sgn(−f(x)) = −sgn(−Σ_{i=1}^N w_i (t_i + 1)/2). (Recall that sgn(α) = 1 iff α ≥ 0.) Further, each term t_i can be written as a thresholded convex combination of the form t_i(x) = sgn(f_i(x)) = sgn(Σ_{l_k ∈ L_i} v_{ik} (l_k(x) − 1)/2).
Assume for simplicity that the DNF is consistent (the results extend easily to the inconsistent case). Let γ⁺ (γ⁻) denote the fraction of positive (negative) examples under distribution D. Let P_{D⁺}[·] (P_{D⁻}[·]) denote probability with respect to the distribution over the positive (negative) examples, and let P_{S⁺}[·] (P_{S⁻}[·]) be defined similarly, with respect to the sample S. Notice that P_D[g(x) ≠ y] = γ⁺ P_{D⁺}[g(x) = −1] + γ⁻ P_{D⁻}[(∃i) t_i(x) = 1], so the second inequality of Theorem 2 shows that, with probability at least 1 − δ, for any θ and any θ_i's,
P_D[g(x) ≠ y] ≤ γ⁺ ( 2P_{S⁺}[f(x) ≤ θ] + d̄B/θ² ) + γ⁻ Σ_{i=1}^N ( 2P_{S⁻}[−f_i(x) ≤ θ_i] + B/θ_i² ),
where d̄ = Σ_{i=1}^N w_i N_i and B = c(log n log² m + log(N/δ))/m. As in the case of
decision trees, different choices of θ, the θ_i's, and the weights yield different estimates of the error. For an arbitrary order of the terms, let p_i be the fraction of positive examples covered by term t_i but not by terms t_{i-1}, ..., t_1. We order the terms such that for each i, with t_{i-1}, ..., t_1 fixed, p_i is maximized, so that p_1 ≥ ... ≥ p_N, and we choose w_i = p_i. Likewise, for a given term t_i with literals l_1, ..., l_{N_i} in an arbitrary order, let p_k^{(i)} be the fraction of negative examples uncovered by literal l_k but not uncovered by l_{k-1}, ..., l_1. We order the literals of term t_i in the same greedy way as above so that p_1^{(i)} ≥ ... ≥ p_{N_i}^{(i)}, and we choose v_{ik} = p_k^{(i)}. For p_{j+1} ≤ θ < p_j and p_{j_i+1}^{(i)} ≤ θ_i < p_{j_i}^{(i)}, where 1 ≤ j ≤ N and 1 ≤ j_i ≤ N_i, we get
P_D[g(x) ≠ y] ≤ γ⁺ ( 2 Σ_{i=j+1}^N p_i + d̄B/θ² ) + γ⁻ Σ_{i=1}^N ( 2 Σ_{k=j_i+1}^{N_i} p_k^{(i)} + B/θ_i² ).
Now, let P = (p_1, ..., p_N) and for each term i let P^{(i)} = (p_1^{(i)}, ..., p_{N_i}^{(i)}). Define Neff = N(1 − ρ(P, U)) and Neff^{(i)} = N_i(1 − ρ(P^{(i)}, U)), where U is the relevant uniform distribution in each case. The parameter Neff is a measure of the effective number of terms in the DNF formula. It can be much smaller than N; this would be the case if few terms cover a large fraction of the positive examples. The parameter Neff^{(i)} is a measure of the effective number of literals in term t_i. Again, it can be much smaller than the actual number of literals in t_i: this would be the case if few literals of the term uncover a large fraction of the negative examples.
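A minimal sketch of the greedy term ordering described above (an illustrative helper; each term is assumed to be given by the boolean vector of positive examples it covers). The resulting p_i feed the same N(1 − ρ(·, U)) computation used for trees to give the effective number of terms:

import numpy as np

def greedy_term_probabilities(covers):
    """Return p_1 >= p_2 >= ..., where p_i is the fraction of positive examples covered
    by the i-th chosen term but by none of the previously chosen terms."""
    covers = [np.asarray(c, dtype=bool) for c in covers]   # covers[i][p]: term t_i satisfied on positive example p
    m = len(covers[0])
    remaining = np.ones(m, dtype=bool)                     # positive examples not yet covered
    unused = list(range(len(covers)))
    p = []
    while unused:
        gains = [np.sum(covers[i] & remaining) for i in unused]
        best = int(np.argmax(gains))                       # greedily maximize the new coverage
        p.append(gains[best] / m)
        remaining &= ~covers[unused[best]]
        unused.pop(best)
    return p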
Optimizing over θ and the θ_i's as in the proof of Theorem 3 gives the following result.
Theorem 5 For a fixed δ > 0, there is a constant c that satisfies the following. Let D be a distribution on X × {-1,1}. Consider the class of DNF formulae with up to N terms. With probability at least 1 − δ over the training set S (of size m), every DNF formula g that is consistent with S has
P_D[g(x) ≠ y] ≤ γ⁺ (Neff d̄ B)^{1/2} + γ⁻ Σ_{i=1}^N (Neff^{(i)} B)^{1/2},
where d̄ = max_{i=1,...,N} N_i, γ^± = P_D[y = ±1] and B = c(log n log² m + log(N/δ))/m.
5
CONCLUSIONS
The results in this paper show that structural complexity measures (such as size) of
decision trees and DNF formulae are not always the most appropriate in determining
their generalization behaviour, and that measures of complexity that depend on the
training data may give a more accurate description. Our analysis can be extended
to multi-class classification problems. A similar analysis implies similar bounds
on misclassification probability for decision lists, and it seems likely that these
techniques will also be applicable to other pattern classification methods.
The complexity parameter Neff described here does not always give the best possible error bounds. For example, the effective number of leaves Neff in a decision tree can be thought of as a single number that summarizes the probability distribution over the leaves induced by the training data. It seems unlikely that such a number will give optimal bounds for all distributions. In those cases, better bounds could be obtained by using numerical techniques to optimize over the choice of θ and the w_i's. It
would be interesting to see how the bounds we obtain and those given by numerical
techniques reflect the generalization performance of classifiers used in practice.
Acknowledgements
Thanks to Yoav Freund and Rob Schapire for helpful comments.
References
[1] P. L. Bartlett. For valid generalization, the size of the weights is more important
than the size of the network. In Neural Information Processing Systems 9, pages
134-140. Morgan Kaufmann, San Mateo, CA, 1997.
[2] L. Breiman, J.H . Friedman, R.A. Olshen, and C.J. Stone. Classification and
Regression Trees. Wadsworth, Belmont, 1984.
[3] A. Ehrenfeucht and D. Haussler. Learning decision trees from random examples. Information and Computation, 82:231-246, 1989.
[4] U.M. Fayyad and K.B. Irani. What should be minimized in a decision tree?
In AAAI-90, pages 749-754, 1990.
[5] R. C. Holte. Very simple rules perform well on most commonly used databases.
Machine learning, 11:63-91, 1993.
[6] P.M. Murphy and M.J. Pazzani. Exploring the decision forest: An empirical
investigation of Occam's razor in decision tree induction. Journal of Artificial
Intelligence Research, 1:257-275, 1994.
[7] J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann,
1992.
[8] R. E. Schapire, Y. Freund, P. L. Bartlett, and W. S. Lee. Boosting the margin:
a new explanation for the effectiveness of voting methods. In Machine Learning:
Proceedings of the Fourteenth International Conference, pages 322-330, 1997.
[9] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. A framework for structural risk minimisation. In Proc. 9th COLT, pages 68-76. ACM
Press, New York, NY, 1996.
[10] G.L. Webb. Further experimental evidence against the utility of Occam's razor.
Journal of Artificial Intelligence Research, 4:397-417, 1996.
374 | 1,341 | Nonlinear Markov Networks for Continuous
Variables
Reimar Hofmann and Volker Tresp*
Siemens AG, Corporate Technology
Information and Communications
81730 Munchen, Germany
Abstract
We address the problem oflearning structure in nonlinear Markov networks
with continuous variables. This can be viewed as non-Gaussian multidimensional density estimation exploiting certain conditional independencies
in the variables. Markov networks are a graphical way of describing conditional independencies well suited to model relationships which do not exhibit a natural causal ordering. We use neural network structures to model
the quantitative relationships between variables. The main focus in this paper will be on learning the structure for the purpose of gaining insight into
the underlying process. Using two data sets we show that interesting structures can be found using our approach. Inference will be briefly addressed.
1 Introduction
Knowledge about independence or conditional independence between variables is most helpful in ''understanding'' a domain. An intuitive representation of independencies is achieved by
graphical models in which independency statements can be extracted from the structure of the
graph. The two most popular types of graphical stochastical models are Bayesian networks
which use a directed graph, and Markov networks which use an undirected graph. Whereas
Bayesian networks are well suited to represent causal relationships, Markov networks are
mostly used in cases where the user wants to express statistical correlation between variables.
This is the case in image processing where the variables typically represent the grey levels
of pixels and the graph encourages smoothness in the values of neighboring pixels (Markov
random fields, Geman and Geman, 1984). We believe that Markov networks might be a useful
representation in many domains where the concept of cause and effect is somewhat artificial.
The learned structure of a Markov network also seems to be more easily communicated to
non-experts; in a Bayesian network not all arc directions can be uniquely identified based on
training data alone which makes a meaningful interpretation for the non-expert rather difficult.
As in Bayesian networks, direct dependencies between variables in Markov networks are represented by an arc between those variables and missing edges represent independencies (in
Section 2 we will be more precise about the independencies represented in Markov networks).
Whereas the graphical structure in Markov networks might be known a priori in some cases,
Reimar.Hofmann@mchp.siemens.de, Volker.Tresp@mchp.siemens.de
the focus of this work is the case that structure is unknown and must be inferred from data.
For both discrete variables and linear relationships between continuous variables algorithms
for structure learning exist (Whittaker, 1990). Here we address the problem of learning structure for Markov networks of continuous variables where the relationships between variables
are nonlinear. In particular we use neural networks for approximating the dependency between a variable and its Markov boundary. We demonstrate that structural learning can be
achieved without a direct reference to a likelihood function and show how inference in such
networks can be perfonned using Gibbs sampling. From a technical point of view, these
Marlwv boundary networks perfonn multi-dimensional density estimation for a very general
class of non-Gaussian densities.
In the next section we give a mathematical description of Markov networks and a formulation
of the joint probability density as a product of compatibility functions. In Section 3.1 we
discuss structural learning in Markov networks based on a maximum likelihood approach and
show that this approach is in general unfeasible. We then introduce our approach which is
based on learning the Markov boundary of each variable. We also show how belief update can
be performed using Gibbs sampling. In Section 4 we demonstrate that useful structures can
be extracted from two data sets (Boston housing data, financial market) using our approach.
2
Markov Networks
The following brief introduction to Markov networks is adapted from Pearl (1988). Consider
a strictly positive I joint probability density p(x) over a set of variables X := {XI, ... , XN }.
For each variable x_i, let the Markov boundary of x_i, B_i ⊆ X − {x_i}, be the smallest set of variables that renders x_i and X − ({x_i} ∪ B_i) independent under p(x) (the Markov boundary is unique for strictly positive distributions). Let the Markov network G be the undirected graph with nodes x_1, ..., x_N and edges between x_i and x_j if and only if x_i ∈ B_j (which also
implies X j E Bi). In other words, a Markov network is generated by connecting each node to
the nodes in its Markov boundary. Then for any set Z ~ (X - {Xi, Xj}), Xi is independent
of Xj given Z if and only if every path from Xi to Xj goes through at least one node in Z. In
other words, two variables are independent if any path between those variables is "blocked"
by a known variable. In particular a variable is independent of the remaining variables if the
variables in its Markov boundary are known.
A clique in G is a maximal fully connected subgraph. Given a Markov Network G for p( x) it
can be shown that p can be factorized as a product of positive functions on the cliques of G,
i.e.
p(x) = (1/K) ∏_i g_i(x_{clique_i}),   (1)
where the product is over all cliques in the graph, x_{clique_i} is the projection of x to the variables of the i-th clique, and the g_i are the compatibility functions w.r.t. clique_i. K = ∫ ∏_i g_i(x_{clique_i}) dx is the normalization constant. Note that a state whose clique functions have large values has high probability. The theorem of Hammersley and Clifford states that the normalized product in equation 1 embodies all the conditional independencies portrayed² by the graph (Pearl, 1988) for any choice of the g_i.
If the graph is sparse, i.e. if many conditional independencies exist then the cliques might
1 To simplify the discussion we will assume strict positivity for the rest of this paper. For some of the
statements weaker conditions may also be sufficient. Note that strict positivity implies that functional
constraints (for example, a = b) are excluded.
2 In terms of graphical models: The graph G is an I-map of p.
be small and the product will be over low dimensional functions. Similar to Bayesian networks where the complexity of describing a joint probability density is greatly reduced by
decomposing the joint density in a product of ideally low-dimensional conditional densities,
equation 1 describes the decomposition of a joint probability density function into a product
of ideally low-dimensional compatibility functions. It should be noted that Bayesian networks
and Markov networks differ in which specific independencies they can represent (Pearl, 1988).
3 Learning the Markov Network
3.1
Likelihood Function Based Learning
Learning graphical stochastical models is usually decomposed into the problems of learning
structure (that is the edges in the graph) and of learning the parameters of the joint density
function under the constraint that it obeys the independence statements made by the graph.
The idea is to generate candidate structures according to some search strategy, learn the parameters for this structure and then judge the structure on the basis of the (penalized) likelihood
of the model or, in a fully Bayesian approach, using a Bayesian scoring metric.
Assume that the compatibility functions g_i(·) in equation 1 are approximated using a function approximator such as a neural network. Let {x^p}_{p=1}^N be a training set. With likelihood L = ∏_{p=1}^N p^M(x^p) (where the M in p^M indicates a probability density model in contrast to the true distribution), the gradient of the log-likelihood with respect to weight w_i in g_i(·) becomes
∂ log L/∂w_i = Σ_{p=1}^N ∂ log p^M(x^p)/∂w_i = Σ_{p=1}^N ∂ log g_i(x^p_{clique_i})/∂w_i − N ( ∫ (∂ log g_i(x_{clique_i})/∂w_i) ∏_j g_j(x_{clique_j}) dx ) / ( ∫ ∏_j g_j(x_{clique_j}) dx )   (2)
where the sums are over N training patterns. The gradient decomposes into two terms. Note,
that only in the first term the training patterns appear explicitly and that, conveniently, the first
term is only dependent on the clique i which contains parameter Wi. The second term emerges
from the normalization constant K in equation I. The difficulty is that the integrals in the
second term can not be solved in closed form for universal types of compatibility functions gi
and have to be approximated numerically, typically using a form of Monte Carlo integration.
This is exactly what is done in the Boltzmann machine, which is a special case of a Markov
network with discrete variables.3
Currently, we consider maximum likelihood learning based on the compatibility functions unsuitable, considering the complexity and slowness of Monte Carlo integration (i.e. stochastic
sampling). Note, that for structural learning the maximum likelihood learning is in the inner
loop and would have to be executed repeatedly for a large number of structures.
3.2 Markov Boundary Learning
The difficulties in using maximum likelihood learning for finding optimal structures motivated
the approach pursued in this paper. If the underlying true probability density is known the
structure in a Markov network can be found using either the edge deletion method or the
³ A fully connected Boltzmann machine does not display any independencies and we only have one clique consisting of all variables. The compatibility function is g(·) = exp(Σ_{ij} w_{ij} s_i s_j). The Boltzmann machine typically contains hidden variables, such that not only the second term (corresponding to the unclamped phase) in equation 2 has to be approximated using stochastic sampling but also the first term. (In this paper we only consider the case that data are complete.)
Markov boundary method (Pearl, 1988). The edge deletion method uses the fact that variables
a and b are not connected by an edge if and only if a and b are independent given all other
variables. Evaluating this test for each pair of variables reveals the structure of the network.
The Markov boundary method consists of determining - for each variable a - its Markov
boundary and connecting a to each variable in its Markov boundary. Both approaches are
simple if we have a reliable test for true conditional independence.
Both methods cannot be applied directly for learning structure from data since here tests
for conditional independence cannot be based on the true underlying probability distribution
(which is unknown) but has to be inferred from a finite data set. The hope is that dependencies which are strong enough to be supported by the data can still be reliably identified. It is,
however not difficult to construct cases where simply using an (unreliable) statistical test for
conditional independence with the edge deletion method does not work wel1. 4
We now describe our approach, which is motivated by the Markov boundary method. First,
we start with a fully connected graph. We train a model ptt to approximate the conditional
density of each variable i, given the current candidate variables for its Markov boundary Bi
which initially are all other variables. For this we can use a wide variety of neural networks.
We use conditional Parzen windows
p_i^M(x_i | x_{B_i}) = ( Σ_{p=1}^N G(x_{{i}∪B_i}; x^p_{{i}∪B_i}, Σ_i) ) / ( Σ_{p=1}^N G(x_{B_i}; x^p_{B_i}, Σ_i^{B_i}) ),   (3)
where {x^p}_{p=1}^N is the training set and G(x; μ, Σ) is our notation for a multidimensional Gaussian centered at μ with covariance matrix Σ evaluated at x. The Gaussians in the numerator are centered at x^p_{{i}∪B_i}, which is the location of the p-th sample in the joint input/output ({x_i} ∪ B_i) space, and the Gaussians in the denominator are centered at x^p_{B_i}, which is the location of the p-th sample in the input space (B_i). There is one covariance matrix Σ_i for each conditional density model which is shared between all the Gaussians in that model. Σ_i is restricted to a diagonal matrix where the diagonal elements in all dimensions except the output dimension i are the same. So there are only two free parameters in the matrix: the variance in the output dimension and the variance in all input dimensions. Σ_i^{B_i} is equal to Σ_i except that the row and column corresponding to the output dimension have been deleted. For each conditional model p_i^M, Σ_i was optimized on the basis of the leave-one-out cross validation log-likelihood.
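A minimal sketch of this estimator (illustrative variable names; it assumes the shared isotropic input bandwidth and separate output bandwidth described above):

import numpy as np

def conditional_parzen(x_in, x_out, train_in, train_out, s_in, s_out):
    """Conditional Parzen density p^M(x_out | x_in) as a ratio of Gaussian kernel sums (equation 3)."""
    d_in = np.sum((train_in - x_in) ** 2, axis=1) / (2.0 * s_in ** 2)   # squared distances in the input space
    d_out = (train_out - x_out) ** 2 / (2.0 * s_out ** 2)               # squared distances in the output dimension
    num = np.sum(np.exp(-(d_in + d_out))) / (np.sqrt(2.0 * np.pi) * s_out)
    den = np.sum(np.exp(-d_in))
    return num / den   # the input-space normalization constants cancel in the ratio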
Our approach is based on tentatively removing edges from the model. Removing an edge
decreases the size of the Markov boundary candidates of both affected variables and thus
decreases the number of inputs in the corresponding two conditional density models. With
the inputs removed, we retrain the two models (in our case, we simply find the optimal Σ_i for the two conditional Parzen windows). If the removal of the edge was correct, the leave-one-out cross validation log-likelihood (model-score) of the two models should improve since
an unnecessary input is removed. (Removing an unnecessary input typically decreases model
variance.) We therefore remove an edge if the model-scores of both models improve. Let's
define as edge-removal-score the smaller of the two improvements in model-score.
Here is the algorithm in pseudo code:
? Start with a fully connected network
4The problem is that in the edge deletion method the decision is made independently for each edge
whether or not it should be present There are however cases where it is obvious that at least one of two
edges must be present although the edge deletion method which tests each edge individually removes
both.
.
? Until no edge-removal-score is positive:
- for all edges edge_ij in the network
* calculate the model-scores of the reduced models p_i^M(x_i | B_i − {j}) and p_j^M(x_j | B_j − {i})
* compare with the model-scores of the current models p_i^M(x_i | B_i) and p_j^M(x_j | B_j)
* set the edge-removal-score to the smaller of both model-score improvements
- remove the edge for which the edge-removal-score is maximum.
? end
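A compact sketch of this greedy loop (illustrative only; score(i, boundary) is assumed to return the leave-one-out cross-validation log-likelihood of p_i^M(x_i | boundary), optionally minus a per-edge penalty):

def learn_structure(variables, score):
    """Greedy edge removal over an initially fully connected Markov network."""
    boundary = {i: set(variables) - {i} for i in variables}
    edges = {frozenset((i, j)) for i in variables for j in variables if i != j}
    while True:
        best_gain, best_edge = 0.0, None
        for e in edges:
            i, j = tuple(e)
            gain_i = score(i, boundary[i] - {j}) - score(i, boundary[i])
            gain_j = score(j, boundary[j] - {i}) - score(j, boundary[j])
            gain = min(gain_i, gain_j)          # edge-removal-score
            if gain > best_gain:
                best_gain, best_edge = gain, e
        if best_edge is None:                   # no positive edge-removal-score left
            return edges
        i, j = tuple(best_edge)
        boundary[i].discard(j)
        boundary[j].discard(i)
        edges.remove(best_edge)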
3.3
Inference
Note that we have learned the structure of the Markov network without an explicit representation of the probability density. Although the conditional densities p(x_i | B_i) provide sufficient information to calculate the joint probability density, the latter can not be easily computed. More precisely, the conditional densities overdetermine the joint density, which might lead to problems if the conditional densities are estimated from data. For inference, we are typically interested in the expected value of an unknown variable, given an arbitrary set of known variables, which can be calculated using Gibbs sampling. Note that the conditional densities p_i^M(x_i | B_i) which are required for Gibbs sampling are explicitly modeled in our approach
by the conditional Parzen windows. Also note, that sampling from the conditional Parzen
model (as well as many other neural networks, such as mixture of experts models) is easy.5
In Hofmann (1997) we show that Gibbs sampling from the conditional Parzen models gives
significantly better results than running inference using either a kernel estimator or a Gaussian
mixture model of the joint density.
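A sketch of such a Gibbs sampler (hypothetical interface; sample_conditional(i, values) is assumed to draw x_i from the conditional Parzen model given the values of its Markov boundary):

import random

def gibbs_expectation(x, unknown, boundary, sample_conditional, n_sweeps=1000, burn_in=100):
    """Estimate E[x_u | known variables] for each unknown variable u by Gibbs sampling."""
    totals = {u: 0.0 for u in unknown}
    order = list(unknown)
    for sweep in range(n_sweeps):
        random.shuffle(order)
        for u in order:
            x[u] = sample_conditional(u, {b: x[b] for b in boundary[u]})
        if sweep >= burn_in:
            for u in unknown:
                totals[u] += x[u]
    return {u: totals[u] / (n_sweeps - burn_in) for u in unknown}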
4 Experiments
In our first experiment we used the Boston housing data set, which contains 506 samples.
Each sample consists of the housing price and 13 other variables which supposedly influence
the housing price in a Boston neighborhood. Maximizing the cross validation log-likelihood
as score as described in the previous chapters results in a Markov network with 68 edges.
While cross validation gives an unbiased estimate of whether a direct dependency exists between two variables the estimate can have a large variance depending on the size of the given
data set. If the goal of the experiment is to interpret the resulting structure one would prefer
to see only those edges corresponding to direct dependencies which can be clearly identified
from the given data set. In other words, if the relationship between two variables observed on
the given data set is so weak that we can not be sure that it is not just an effect of the finite
data set size, then we do not want to display the corresponding edge. This can be achieved by
adding a penalty per edge to the score of the conditional density models. (figure 1).
Figure 2 shows the resulting Markov network for a penalty per edge of 0.2. The goal of the
original experiment for which the Boston housing data were collected was to examine whether
the air quality (5) has direct influence on the housing price (14). Our algorithm did not find
such an influence - in accordance with the original study. It found that the percentage of low
status population (13) and the average number of rooms (6) are in direct relationship with
the housing price. The pairwise relationships between these three variables are displayed in
figure 3.
5 Readers
not familiar with Gibbs sampling, please consult Geman and Geman (1984).
Figure 1: Number of edges in the Markov network for the Boston housing data as a function
of the penalty per edge.
1 crime rate
2 percent land zoned for lots
3 percent non-retail business
4 located on Charles River?
5 nitrogen oxide concentration
6 average number of rooms
7 percent built before 1940
8 weighted distance to employment center
9 access to radial highways
10 tax rate
11 pupil/teacher ratio
12 percent black
13 percent lower-status population
14 median value of homes
Figure 2: Final structure of a run on the full Boston housing data set (penalty = 0.2).
The scatter plots visualize the relationship between variables 13 and 14, 6 and 14 and between
6 and 13 (from left to right). The left and the middle correspond to edges in the Markov
network whereas for the right diagram the corresponding edge (6-13) is missing even though
both variables are clearly dependent. The reason is, that the dependency between 6 and 13 can
be explained as indirect relationship via variable 14. The Markov network tells us that 13 and
6 are independent given 14, but dependent if 14 is unknown.
In a second experiment we used a financial dataset. Each pattern corresponds to one business
day. The variables in our model are relative changes in certain economic variables from the
last business day to the present day which were expected to possibly influence the development
of the German stock index DAX and the composite DAX, which contains a larger selection of
stocks than the DAX. We used 500 training patterns consisting of 12 variables (figure 4). In
comparison to the Boston housing data set most relationships are very weak. Using a penalty
per edge of 0.2 leads to a very sparse model with only three edges (2-12, 12-1 ,5-11) (not
shown). A penalty of 0.025 results in the model shown in figure 4. Note, that the composite
Figure 3: Pairwise relationship between the variables 6, 13 and 14. Displayed are all data
points in the Boston housing data set.
1 DAX
2 composite DAX
3 3 month interest rates Germany
4 return Germany
5 Morgan Stanley index Germany
6 Dow Jones industrial index
7 DM-USD exchange rate
8 US treasury bonds
9 gold price in DM
10 Nikkei index Japan
11 Morgan Stanley index Europe
12 price earning ratio (DAX stocks)
Figure 4: Final structure of a run on the financial data set with a penalty of 0.025. The small
numbers next to the edges indicate the strength of the connection, i.e. the decrease in score
(excluding the penalty) when the edge is removed. All variables are relative changes - not
absolute values.
DAX is connected to the DAX mainly through the price earning ratio. While the DAX has
direct connections to the Nikkei index and to the DM-USD exchange rate the composite DAX
has a direct connection to the Morgan Stanley index for Germany. Recall, that composite
DAX contains the stocks of many smaller companies in addition to the DAX stocks. The
graph structure might be interpreted (with all caution) in the way that the composite DAX
(including small companies) has a stronger dependency on national business whereas the DAX
(only including the stock of major companies) reacts more to international indicators.
5 Conclusions
We have demonstrated, to our knowledge for the first time, how nonlinear Markov networks
can be learned for continuous variables and we have shown that the resulting structures can
give interesting insights into the underlying process. We used a representation based on models of the conditional probability density of each variable given its Markov boundary. These
models can be trained locally. We showed how searching in the space of all possible structures
can be done using this representation.
We suggest to use the conditional densities of each variable given its Markov boundary also for
inference by Gibbs sampling. Since the required conditional densities are modeled explicitly
by our approach and sampling from these is easy, Gibbs sampling is easier and faster to realize
than with a direct representation of the joint density.
A topic of further research is the variance in resulting structures, i.e. the fact that different
structures can lead to almost equally good models. It would for example be desirable to
indicate to the user in a principled way the certainty of the existence or nonexistence of edges.
References
Geman, S., and Geman, D. (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-6 (no. 6):721-42
Hofmann, R. (1997). Inference in Markov Blanket Models. Technical report, in preparation.
Monti, S., and Cooper, G. (1997). Learning Bayesian belief networks with neural network estimators.
In Neural Information Processing Systems 9., MIT Press.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems. San Mateo: Morgan Kaufmann.
Whittaker, J. (1990). Graphical models in applied multivariate statistics. Chichester, UK: John Wiley
and Sons.
375 | 1,342 | Multiplicative Updating Rule
for Blind Separation Derived
from the Method of Scoring
Howard Hua Yang
Department of Computer Science
Oregon Graduate Institute
PO Box 91000, Portland, OR 97291, USA
hyang@cse.ogi.edu
Abstract
For blind source separation, when the Fisher information matrix is
used as the Riemannian metric tensor for the parameter space, the
steepest descent algorithm to maximize the likelihood function in
this Riemannian parameter space becomes the serial updating rule
with equivariant property. This algorithm can be further simplified
by using the asymptotic form of the Fisher information matrix
around the equilibrium.
1
Introduction
The relative gradient was introduced by (Cardoso and Laheld, 1996) to design
multiplicative updating algorithms with equivariant property for blind separation
problems. The idea is to calculate differentials by using a relative increment instead
of an absolute increment in the parameter space. This idea has been extended to
compute the relative Hessian by (Pham, 1996).
For a matrix function
f = f (W), the relative gradient is defined by
∇̃f = (∂f/∂W) W^T.   (1)
From the differential of f(W) based on the relative gradient, the following learning rule is given by (Cardoso and Laheld, 1996) to maximize the function f:
dW/dt = η ∇̃f W = η (∂f/∂W) W^T W.   (2)
Also motivated by designing blind separation algorithms with equivariant property,
the natural gradient defined by
∇̂f = (∂f/∂W) W^T W   (3)
was introduced in (Amari et al, 1996) which yields the same learning rule (2). The
geometrical meaning of the natural gradient is given by (Amari, 1996). More details
about the natural gradient can be found in (Yang and Amari, 1997) and (Amari,
1997).
The framework of the natural gradient learning was proposed by (Amari, 1997) . In
this framework, the ordinary gradient descent learning algorithm in the Euclidean
space is not optimal in minimizing a function defined in a Riemannian space. The
ordinary gradient should be replaced by the natural gradient which is defined by
operating the inverse of the metric tensor in the Riemannian space on the ordinary
gradient. Let w denote a parameter vector. It is proved by (Amari, 1997) that if
C (w) is a loss function defined on a Riemannian space {w} with a metric tensor G,
the negative natural gradient of C(w), namely, −G^{-1} ∂C/∂w, is the steepest descent direction to decrease this function in the Riemannian space. Therefore, the steepest descent algorithm in this Riemannian space has the following form:
dw/dt = −η G^{-1} ∂C/∂w.
If the Fisher information matrix is used as the metric tensor for the Riemannian space and C(w) is replaced by the negative log-likelihood function, the above learning rule becomes the method of scoring (Kay, 1993), which is the focus of this paper.
Both the relative gradient V and the natural gradient V were proposed in order to
design the multiplicative updating algorithms with the equivariant property. The
former is due to a multiplicative increment in calculating differential while the latter
is du,: to an increment based on a nonholonomic basis (Amari, 1997). Neither V
nor V' depends on the data model. The Fisher information matrix is a special
and important choice for the Riemannian metric tensor for statistical estimation
problems. It depends on the data model. Operating the inverse of the Fisher
information matrix on the ordinary gradient, we have another gradient operator. It
is called a natural gradient induced by the Fisher information matrix.
In this paper, we show how to derive a multiplicative updating algorithm from
the method of scoring. This approach is different from those based on the relative
gradient and the natural gradient defined by (3).
2
Fisher Information Matrix For Blind Separation
Consider a linear mixing system:
x = A s

where A \in \mathbb{R}^{n \times n}, x = (x_1, \ldots, x_n)^T and s = (s_1, \ldots, s_n)^T. Assume that the sources are independent with a factorized joint pdf:

r(s) = \prod_{i=1}^{n} r_i(s_i).

The likelihood function is

p(x; A) = \frac{r(A^{-1} x)}{|A|}
where |A| = |det(A)|. Let W = A^{-1} and y = W x (a demixing system); then we have the log-likelihood function

L(W) = \sum_{i=1}^{n} \log r_i(y_i) + \log |W|.

It is easy to obtain

\frac{\partial L}{\partial W_{ij}} = \frac{r_i'(y_i)}{r_i(y_i)} x_j + W_{ij}^{-T}    (4)

where W_{ij}^{-T} is the (i,j) entry in W^{-T} = (W^{-1})^T. Writing (4) in a matrix form, we have

\frac{\partial L}{\partial W} = W^{-T} - \Phi(y) x^T = (I - \Phi(y) y^T) W^{-T} = F(y) W^{-T}

where \Phi(y) = (\phi_1(y_1), \ldots, \phi_n(y_n))^T, \phi_i(y_i) = -\frac{r_i'(y_i)}{r_i(y_i)}, and F(y) = I - \Phi(y) y^T.
d : = TJ(I _
(5)
~(y)yT)W-T =
8'1:tt
is
TJF(y)W- T
which has the high computational complexity due to the matrix inverse W- I . The
maximum likelihood algorithm based on the natural gradient of matrix functions is
dW
dt
= TJ"VL
=
TJ(I - ~(y)yT)W.
(6)
d!f
The same algorithm is obtained from
= TJV LW by using the relative gradient.
An apparent reason for using this algorithm is to avoid the matrix inverse W- 1 ?
Another good reason for using it is due to the fact that the matrix W driven by
(6) never becomes singular if the initial matrix W is not singular. This is proved
by (Yang and Amari, 1997). In fact, this property holds for any learning rule of the
following type:
dW
dt
= H(y)W.
(7)
Let < U,V >= Tr(UTV) denote the inner product of U and V E 3?nxn. When
Wet) is driven by the equation (7), we have
8'U;'
d'W,
dW >-<
IWI(W-I)T , dW
dt -_< 8
'dt
dt
Tr(IWIW- 1 H(y)W) Tr(H(y))IWI.
=
>
=
Therefore,
IW(t)1
{l
= IW(O)I exP
t
Tr(H(y(r)))dr}
(8)
which is non-singular when the initial matrix W(O) is non-singular.
The matrix function F(y) is also called an estimating function. At the equilibrium
of the system (6), it satisfies the zero condition E[F(y)] = 0, i.e.,
E[<I>i (Yi)Yj] = f5 ij
(9)
where f5ij = 1 if i = j and 0 otherwise.
To calculate the Fisher information matrix, we need a vector form of the equation
(5). Let Vec(?) denote an operator on a matrix which cascades the columns of the
699
Multiplicative Updating Rule for Blind Separation
matrix from the left to the right and forms a column vector. This operator has the
following property:
Vec(ABC) = (C T 0 A)Vec(B)
(10)
where 0 denotes the Kronecker product. Applying this property, we first rewrite
(5) as
aL
\frac{\partial L}{\partial Vec(W)} = Vec\left(\frac{\partial L}{\partial W}\right) = (W^{-1} \otimes I) Vec(F(y)),    (11)

and then obtain the Fisher information matrix

G = E\left[ \frac{\partial L}{\partial Vec(W)} \left( \frac{\partial L}{\partial Vec(W)} \right)^T \right] = (W^{-1} \otimes I) \, E[Vec(F(y)) Vec^T(F(y))] \, (W^{-T} \otimes I).    (12)
G- l = (WT 0 I)D- l (W 0 I)
where D = E[Vec(F(y))VecT(F(y))].
3
(13)
Natural Gradient Induced By Fisher Information Matrix
Define a Riemannian space
V = {Vec(W); W E Gl(n)}
in which the Fisher information matrix G is used as its metric. Here, Gl(n) is the
space of all the n x n invertible matrices.
Let C(W) be a matrix function to be minimized. It is shown by (Amari, 1997) that
the steepest descent direction in the Riemannian space V is _G- 1 8V:C~W)'
Let us define the natural gradient in V by
- (
(T
-l(
)
ac
\lC W) = W 0I)D
W 0 I aVec(W)
(14)
which is called the natural gradient induced by the Fisher information matrix. The
time complexity of computing the natural gradient in the space V is high since
inverting the matrix D of n 2 x n 2 is needed.
Using the natural gradient in V to maximize the likelihood function L(W) or the
method of scoring, from (11) and (14) we have the following learning rule
Vec(d:) = T/(W T 0 I)D-1Vec(F(y))
(15)
We shall prove that the above learning rule has the equivariant property.
Denote Vec- l the inverse of the operator Vec. Let matrices B and A be of n 2 x n 2
and n x n, respectively. Denote B(i,?) the i-th row of Band Bi = Vec-I(B(i, .)),
i = 1, ... , n 2 . Define an operator B* as a mapping from ~n x n to ~n x n:
B*A= [
where
< Bl,A > .. , < BnLn+I,A >
...
..,
< Bn,A > ...
...
< B n2,A >
1
< .,. > is the inner product in ~nxn. With the operation *, we have
BVec(A) = [
< Bl:' A> ]
<Bn2,A>
= Vec(Vec- l ( [
<
~\ A>
<Bn2,A>
]))
= Vec(B * A),
H. H. Yang
700
i.e.,
BVec(A) = Vec(B * A) .
Applying the above relation, we first rewrite the equation (15) as
dW
_
Vec( dt) = 1](WT 0 J)Vec(D 1* F(y)),
then applying (10) to the above equation we obtain
d: = 1](D-1 *F(y))W.
(16)
Theorem 1 For the blind separation problem, the maximum likelihood algorithm
based on the natural gradient induced by the Fisher information matrix or the method
of scoring has the form (16) which is a multiplicative updating rule with the equivariant property.
To implement the algorithm (16), we estimate D by sample average. Let fij(Y) be
the (i,j) entry in F(y). A general form for the entries in D is
dij,kl
= E[Jij (y)fkl (y)]
which depends on the source pdfs ri(si) . When the source pdfs are unknown, in
practice we choose Ti(Si) as our prior assumptions about the source pdfs. To simplify
the algorithm (16), we replace D by its asymptotic form at the solution points
a = (ClSlT(I),? .. , CnSlT(n?)T where (0"(1),.??, O"(n)) is a permutation of (1,? ? ?, n).
Regarding the structure of the asymptotic D, we have the following theorem:
Theorem 2 Assume that the pdfs of the sources Si are even fu.nctions.
Then at the solution point a = (Cl SlT(l) , ... , CnSlT(n?)T, D is a diagonal matrix and
its n 2 diagonal entries have two forms, namely,
E[Jij(a)!ij(a)] = J-LiAj, for i =fi j and
E[(Jii(a))2] = Vi
where J-Li
have
= E[4>;(ai)],
Ai
= E[a;]
= E[4>~(ai)a~] -
and Vi
1. More concisely, we
D = diag( Vec( H))
where
(17)
H = (J-LiAj )nx n
-
+ diag( VI, ... , vn )
diag(J-L1 AI, .. . ,J-LnAn)
The proof of Theorem 2 is given in Appendix 1.
Let H = (hij)nxn. Since all J-Li, Ai, and Vi are positive, and so are all h ij . We define
1
H
1
= (h ij )nxn.
Then from (17), we have
D- 1 = diag(Vec( ~)).
The results in Theorem 2 enable us to simplify the algorithm (16) to obtain a low
complexity learning rule. Since D- 1 is a diagonal matrix, for any n x n matrix A
we have
D-1Vec(A) = Vec( ~ 0 A)
(18)
Multiplicative Updating Rule/or Blind Separation
701
where 0 denotes the componentwise multiplication of two matrices of the same
dimension. Applying (18) to the learning rule (15), we obtain the following learning
rule
dVV
1
Vec( ---;It) = 1](VVT Ci9 I)Vec(H 0 F(y?.
Again, applying (10) t.o the above equation we have the following learning rule
dVV
1
dt = 1]( H 0 F(y?VV.
(19)
Like the learning rule (16), the algorithm (19) is also multiplicative; but unlike \16),
there is no need to inverse the n 2 x n 2 matrix in (19). The computation of H is
straightforward by computing the reciprocals of the entries in H.
(f.Li, Ai, Vi) are 3n unknowns in G. Let us impose the following constraint
Vi = f.LiAi.
(20)
Under this constraint, the number of unknowns in G is 2n, and D can be written
as
(21)
D=D>.Ci9D~
where D>. = diag(Al,' . " An) and D~ = diag(f.LI, " . ,f.Ln)'
From (14), using (21) we have the natural gradient descent rule in the Riemannian
space V
dVec(VV) = _ (VVTD- 1 VVCi9D- 1 )
ae
(22)
dt
1]
>.
~ aVec(VV) .
Applying the property (10), we rewrite the above equation in a matrix form
dVV = _ D- 1 ae VV T D- 1 VV
(23)
dt
1] ~ avv
>"
Since f.Li and Ai are unknown, D ~ and D>. are replaced by the identity matrix in
practice. Therefore, the algorithm (2) is an approximation of the algorithm (23).
Taking e = - L(VV) as the negative likelihood function and applying the expression (5), we have the following maximum likelihood algorithm based on the natural
gradient in V:
dVV = 1]D~-1 ( I - ~ ( y ) y T) D>.- I VV.
(
-;It
24)
Again, replacing D~ and D>. by the identity matrix we obtain the maximum likelihood algorithm (6) based on the relative gradient or natural gradient of matrix
functions.
In the context of the blind separation, the source pdfs are unknown. The prior
assumption ri(si) used to define the functions <Pi(Yi) may not match the true pdfs
of the sources. However, the algorithm (24) is generally robust to the mismatch
between the true pdfs and the pdfs employed by the algorithm if the mismatch is
not too large. See (Cardoso, 1997) and ( Pham, 1996) for example.
4
Conclusion
In the context of blind separation, when the Fisher information matrix is used as
the Riemannian metric tensor for the parameter space, maximizing the likelihood
function in this Riemannian space based on the steepest descent method is the
method of scoring. This method yields a multiplicative updating rule with the
equivariant property. It is further simplified by using the asymptotic form of the
Fisher information matrix around the equilibrium.
5
Appendix
Appendix 1 Proof of Theorem 2:
By definition f_{ij}(y) = \delta_{ij} - \phi_i(y_i) y_j. At the equilibrium a = (c_1 s_{\sigma(1)}, \ldots, c_n s_{\sigma(n)})^T, we have E[\phi_i(a_i) a_j] = 0 for i \neq j and E[\phi_i(a_i) a_i] = 1. So E[f_{ij}(a)] = 0. Since the source pdfs are even functions, we have E[a_i] = 0 and E[\phi_i(a_i)] = 0. Applying these equalities, it is not difficult to verify that

E[f_{ij}(a) f_{kl}(a)] = 0, for (i,j) \neq (k,l).    (25)

So, D is a diagonal matrix and

E[f_{ij}(a) f_{ij}(a)] = E[\phi_i^2(a_i) a_j^2] = \mu_i \lambda_j, for i \neq j,
E[f_{ii}(a) f_{ii}(a)] = E[(1 - \phi_i(a_i) a_i)^2] = E[\phi_i^2(a_i) a_i^2] - 1 = \nu_i.

Q.E.D.
References
[1] S. Amari. Natural gradient works efficiently in learning. Accepted by Neural
Computation, 1997.
[2] S. Amari. Neural learning in structured parameter spaces - natural Riemannian
gradient. In Advances in Neural Information Processing Systems, 9, ed. M. C.
Mozer, M. 1. Jordan and T. Petsche, The MIT Press: Cambridge, MA., pages
127-133, 1997.
[3] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind
[4]
[5]
[6]
[7]
[8]
signal separation. In Advances in Neural Information Processing Systems, 8,
eds. David S. Touretzky, Michael C. Mozer and Michael E. Hasselmo, MIT
Press: Cambridge, MA., pages 757-763, 1996.
J.-F. Cardoso. Infomax and maximum likelihood for blind source separation.
IEEE Signal Processing Letters, April 1997.
J.-F. Cardoso and B. Laheld. Equivariant adaptive source separation. IEEE
Trans. on Signal Processing, 44(12):3017-3030, December 1996.
S. M. Kay. FUndamentals of Statistical Signal Processing: Estimation Theory.
PTR Prentice Hall, Englewood Cliffs, 1993.
D. T. Pham. Blind separation of instantaneous mixture of sources via an ica.
IEEE Trans. on Signal Processing, 44(11):2768-2779, November 1996.
H. H. Yang and S. Amari. Adaptive on-line learning algorithms for blind separation: Maximum entropy and minimum mutual information. Neural Computation, 9(7):1457-1482, 1997.
| 1342 |@word verify:1 true:2 former:1 equality:1 direction:2 bl:2 fij:2 tensor:6 bn:1 diagonal:4 ogi:1 enable:1 gradient:34 ow:1 tr:4 ptr:1 initial:2 nx:1 gg:1 pdf:1 avec:5 tt:1 reason:2 l1:1 pham:3 hold:1 around:2 hall:1 geometrical:1 si:5 exp:1 meaning:1 written:1 equilibrium:4 mapping:1 instantaneous:1 fi:1 minimizing:1 difficult:1 wx:1 hij:1 negative:3 estimation:2 bn2:2 design:2 rnxn:1 wet:1 unknown:5 iw:2 vect:2 cambridge:2 vec:25 hasselmo:1 steepest:5 reciprocal:1 ai:16 howard:1 descent:7 november:1 mit:2 extended:1 cse:1 avoid:1 operating:2 differential:3 introduced:2 inverting:1 prove:1 derived:1 focus:1 namely:2 pdfs:9 portland:1 driven:2 likelihood:13 kl:1 componentwise:1 concisely:1 ica:1 equivariant:8 nor:1 yi:6 trans:2 scoring:6 minimum:1 vl:1 mismatch:2 impose:1 lc:1 employed:1 relation:1 maximize:3 becomes:3 signal:5 estimating:1 ii:2 calculating:1 factorized:1 david:1 natural:21 match:1 special:1 mutual:1 never:1 serial:1 vvt:1 ti:1 ae:2 metric:7 cichocki:1 minimized:1 simplify:2 prior:2 yn:1 multiplication:1 positive:1 asymptotic:4 relative:9 nxn:4 loss:1 replaced:3 singular:4 source:12 cns:1 permutation:1 cliff:1 unlike:1 induced:4 englewood:1 december:1 jordan:1 mixture:1 yang:7 pi:1 bi:1 graduate:1 tj:3 easy:1 row:1 gl:2 yj:2 fu:1 practice:2 implement:1 inner:2 regarding:1 idea:2 vv:7 institute:1 taking:1 euclidean:1 slt:1 absolute:1 snf:1 laheld:3 re:1 motivated:1 cascade:1 expression:1 dimension:1 xn:1 dvv:4 column:2 adaptive:2 simplified:2 hessian:1 operator:5 prentice:1 context:2 applying:8 writing:1 ordinary:5 entry:5 generally:1 cardoso:5 yt:4 maximizing:1 dij:1 straightforward:1 band:1 too:1 aw:1 sl:1 rule:19 fundamental:1 kay:2 dw:7 robust:1 invertible:1 increment:4 michael:2 infomax:1 shall:1 du:1 again:2 cl:1 designing:1 choose:1 diag:6 f5:1 tjf:1 dr:1 neither:1 wtw:1 updating:10 n2:1 li:5 jii:2 inverse:7 letter:1 calculate:2 oregon:1 vn:1 blind:15 depends:3 decrease:1 multiplicative:11 vi:6 separation:16 appendix:3 vf:1 mozer:2 dvec:1 xl:1 complexity:3 lw:1 ix:1 iwi:3 theorem:6 rewrite:3 xt:1 kronecker:1 constraint:2 efficiently:1 ri:3 yield:2 basis:1 demixing:1 po:1 joint:1 ofthis:1 nonholonomic:1 ci:1 department:1 structured:1 entropy:1 touretzky:1 ed:2 apparent:1 definition:1 amari:13 otherwise:1 proof:2 riemannian:15 hua:1 rca:1 proved:2 satisfies:1 ln:1 equation:6 abc:1 ma:2 identity:2 nctions:1 bvec:2 product:3 jij:5 needed:1 idet:1 replace:1 fisher:15 fkl:1 dt:11 operation:1 mixing:1 wt:3 iai:2 utv:1 april:1 hyang:1 box:1 petsche:1 called:3 accepted:1 replacing:1 denotes:2 latter:1 derive:1 ac:2 aj:1 ij:7 usa:1 |
376 | 1,343 | Extended ICA Removes Artifacts from
Electroencephalographic Recordings
Tzyy-Ping Jung, Colin Humphries, Te-Won Lee, Scott Makeig, Martin J. McKeown, Vicente Iragui, Terrence J. Sejnowski
1 Howard
Hughes Medical Institute and Computational Neurobiology Lab
The Salk Institute, P.O . Box 85800 , San Diego, CA 92186-5800
{jung,colin,tewon,scott,martin,terry}~salk.edu
2Naval Health Research Center, P.O. Box 85122, San Diego, CA 92186-5122
3Department of Neurosciences, University of California San Diego , La Jolla, CA 92093
Abstract
Severe contamination of electroencephalographic (EEG) activity
by eye movements, blinks, muscle, heart and line noise is a serious
problem for EEG interpretation and analysis. Rejecting contaminated EEG segments results in a considerable loss of information
and may be impractical for clinical data. Many methods have been
proposed to remove eye movement and blink artifacts from EEG
recordings. Often regression in the time or frequency domain is
performed on simultaneous EEG and electrooculographic (EOG)
recordings to derive parameters characterizing the appearance and
spread of EOG artifacts in the EEG channels. However, EOG
records also contain brain signals [1, 2], so regressing out EOG activity inevitably involves subtracting a portion of the relevant EEG
signal from each recording as well. Regression cannot be used to
remove muscle noise or line noise, since these have no reference
channels. Here , we propose a new and generally applicable method
for removing a wide variety of artifacts from EEG records. The
method is based on an extended version of a previous Independent Component Analysis (ICA) algorithm [3, 4] for performing
blind source separation on linear mixtures of independent source
signals with either sub-Gaussian or super-Gaussian distributions.
Our results show that ICA can effectively detect, separate and remove activity in EEG records from a wide variety of artifactual
sources, with results comparing favorably to those obtained using
regression-based methods.
1
Introduction
Eye movements, muscle noise, heart signals , and line noise often produce large and
distracting artifacts in EEG recordings. Rejecting EEG segments with artifacts
larger than an arbitrarily preset value is the most commonly used method for eliminating artifacts. However, when limited data are available, or blinks and muscle
movements occur too frequently, as in some patient groups, the amount of data
lost to artifact rejection may be unacceptable. Methods are needed for removing
artifacts while preserving the essential EEG signals.
Berg & Scherg [5] have proposed a spatio-temporal dipole model for eye-artifact removal that requires a priori assumptions about the number of dipoles for saccade,
blink, and other eye-movements, and assumes they have a simple dipolar structure .
Several other proposed methods for removing eye-movement artifacts are based on
regression in the time domain [6, 7] or frequency domain [8, 9] . However, simple
time-domain regression tends to overcompensate for blink artifacts and may introduce new artifacts into EEG records [10] . The cause of this overcompensation is
the difference between the spatial EOG-to-EEG transfer functions for blinks and
saccades. Saccade artifacts arise from changes in orientation of the retinocorneal
dipole , while blink artifacts arise from alterations in ocular conductance produced
by contact of the eyelid with the cornea [11]. The transfer of blink artifacts to
the recording electrodes decreases rapidly with distance from the eyes, while the
transfer of saccade artifacts decreases more slowly, so that at the vertex the effect
of saccades on the EEG is about double that of blinks [11], while at frontal sites
the two effects may be near-equal.
Regression in the frequency domain [8, 9] can account for frequency-dependent
spatial transfer function differences from EOG to EEG , but is acausal and thus
unsuitable for real-time applications. Both time and frequency domain regression
methods depend on having a good regressor (e.g., an EOG), and share an inherent
weakness that spread of excitation from eye movements and EEG signals is bidirectional. This means that whenever regression-based artifact removal is performed, a
portion of relevant EEG signals also contained in the EOG data will be cancelled out
along with the eye movement artifacts. Further , since the spatial transfer functions
for various EEG phenomena present in the EOG differ from the regression transfer
function, their spatial distributions after artifact removal may differ from the raw
record . Similar problems complicate removal of other types of EEG artifacts. Relatively little work has been done on removing muscle activity, cardiac signals and
electrode noise from EEG data. Regressing out muscle noise is impractical since
regressing out signals from multiple muscle groups require multiple reference channels . Line noise is most commonly filtered out in the frequency domain . However,
current interest in EEG in the 40-80 Hz gamma band phenomena may make this
approach undesirable as well.
We present here a new and generally applicable method for isolating and removing
a wide variety of EEG artifacts by linear decomposition using a new Independent
Component Analysis (ICA) algorithm [4] related to a previous algorithm [3, 12].
The ICA method is based on spatial filtering and does not rely on having a
"clean" reference channel. It effectively decomposes multiple-channel EEG data
into spatially-fixed and temporally independent components. Clean EEG signals
can then be derived by eliminating the contributions of artifactual sources, since
their time courses are generally temporally independent from and differently distributed than sources of EEG activity.
2
Independent Component Analysis
Bell and Sejnowski [3] have proposed a simple neural network algorithm that blindly
separates mixtures, x, of independent sources, s, using infomax. They show that
maximizing the joint entropy, H (y), of the output of a neural processor minimizes
the mutual information among the output components, Yi = g( ud, where g( Ui) is
an invertible bounded nonlinearity and u = Wx. This implies that the distribution
of the output Yi approximates a uniform density. Independence is achieved through
the nonlinear squashing function which provides necessary higher-order statistics
through its Taylor series expansion. The learning rule can be derived by maximizing
output joint entropy, H(y), with respect to W [3], giving,
\Delta W \propto \frac{\partial H(y)}{\partial W} W^T W = [I + \hat{y} u^T] W    (1)

where \hat{y}_i = (\partial / \partial u_i) \ln(\partial y_i / \partial u_i). The 'natural gradient' W^T W term [13] avoids
matrix inversions and speeds convergence. The form of the nonlinearity g( u) plays
an essential role in the success of the algorithm. The ideal form for g(\cdot) is the cumulative density function (cdf) of the distributions of the independent sources. In practice, if we choose g(\cdot) to be a sigmoid function (as in [3]), the algorithm is then
limited to separating sources with super-Gaussian distributions. An elegant way of
generalizing the learning rule to sources with either sub- or super-Gaussian distributions is to approximate the estimated probability density function (pdf) in the
form of a 4th -order Edgeworth approximation as derived by Girolami and Fyfe [14].
For sub-Gaussians, the following approximation is possible: \hat{y}_i = +\tanh(u_i) - u_i. For super-Gaussians, the same approximation becomes \hat{y}_i = -\tanh(u_i) - u_i. The
sign can be chosen for each component using its normalized kurtosis, k 4 (Ui), giving,
\Delta W \propto \frac{\partial H(y)}{\partial W} W^T W = \left[ I - \mathrm{sign}(k_4) \tanh(u) u^T - u u^T \right] W    (2)
Intuitively, for super-Gaussians the - tanh(u)u T term is an anti-Hebbian rule that
tends to minimize the variance of u, whereas for sub-Gaussians the corresponding
term is a Hebbian rule that tends to maximize its variance.
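For concreteness, one batch step of rule (2) might look like the following sketch (an illustration of ours rather than the authors' implementation; the data layout and learning rate are assumptions):

import numpy as np

def extended_infomax_step(W, x, lrate=1e-3):
    """One update of rule (2): dW proportional to [I - sign(k4) tanh(u) u^T - u u^T] W.
    x: (channels, samples); W: current unmixing matrix."""
    n, T = x.shape
    u = W @ x                                                           # component activations
    k4 = np.mean(u ** 4, axis=1) / np.mean(u ** 2, axis=1) ** 2 - 3.0   # normalized kurtosis per component
    K = np.diag(np.sign(k4))                                            # +1 super-Gaussian, -1 sub-Gaussian
    dW = (np.eye(n) - K @ (np.tanh(u) @ u.T) / T - (u @ u.T) / T) @ W
    return W + lrate * dW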
2.1
Applying ICA to artifact correction
The leA algorithm is effective in performing source separation in domains where,
(1) the mixing medium is linear and propagation delays are negligible, (2) the time
courses of the sources are independent, and (3) the number of sources is the same
as the number of sensors, meaning if we employ N sensors the ICA algorithm can
separate N sources [3, 4, 12]. In the case of EEG signals [12), volume conduction is thought to be linear and instantaneous, hence assumption (1) is satisfied.
Assumption (2) is also reasonable because the sources of eye and muscle activity,
line noise, and cardiac signals are not generally time locked to the sources of EEG
activity which is thought to reflect activity of cortical neurons. Assumption (3) is
questionable since we do not know the effective number of statistically-independent
signals contributing to the scalp EEG. However, numerical simulations have confirmed that the ICA algorithm can accurately identify the time courses of activation
and the scalp topographies of relatively large and temporally-independent sources
from simulated scalp recordings, even in the presence of a large number of low-level
and temporally-independent source activities [16].
For EEG analysis, the rows of the input matrix x are the EEG signals recorded at
different electrodes, the rows of the output data matrix u = W x are time courses of
activation of the ICA components, and the columns of the inverse matrix, W^{-1}, give
the projection strengths of the respective components onto the scalp sensors. The
scalp topographies of the components provide evidence for their biological origin
(e.g ., eye activity should project mainly to frontal sites) . In general, and unlike
PCA , the component time courses of activation will be nonorthogonal. 'Corrected'
EEG signals can then be derived as x' = W^{-1} u', where u' is the matrix of
activation waveforms, u, with rows representing artifactual sources set to zero.
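The back-projection step can be written in a few lines; the sketch below is our own illustration (variable names and shapes are assumed, not taken from the paper):

import numpy as np

def remove_components(x, W, artifact_idx):
    """x: (channels, samples) EEG; W: ICA unmixing matrix; artifact_idx: components to discard.
    Returns x' = W^{-1} u' with the selected activation rows zeroed."""
    u = W @ x                           # component time courses
    u_clean = u.copy()
    u_clean[artifact_idx, :] = 0.0      # cancel artifactual sources
    return np.linalg.inv(W) @ u_clean   # 'corrected' EEG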
3
Methods and Materials
One EEG data set used in the analysis was collected from 20 scalp electrodes placed
according to the International 10-20 System and from 2 EOG placements, all referred to the left mastoid. A second EEG data set contained 19 EEG channels (no
EOG channel). Data were recorded with a sampling rate of 256 Hz. ICA decomposition was performed on 10-sec EEG epochs from each data set using Matlab 4.2c on
a DEC 2100A 5/300 processor. The learning batch size was 90, and initial learning
rate was 0.001. Learning rate was gradually reduced to 5 x 10- 6 during 80 training
iterations requiring 6.6 min of computer time . To evaluate the relative effectiveness of ICA for artifact removal, the multiple-lag regression method of Kenemans
et al. [17] was performed on the same data.
4
Results
4.1
Eye movement artifacts
Figure 1 shows a 3-sec portion of the recorded EEG time series and its ICA component activations, the scalp topographies of four selected components, and the
'corrected' EEG signals obtained by removing four selected EOG and muscle noise
components from the data. The eye movement artifact at 1.8 sec in the EEG data
(left) is isolated to ICA components 1 and 2 (left middle) . The scalp maps (right
middle) indicate that these two components account for the spread of EOG activity
to frontal sites. After eliminating these two components and projecting the remaining components onto the scalp channels , the 'corrected' EEG data (right) are free
of these artifacts .
Removing EOG activity from frontal channels reveals alpha activity near 8 Hz that
occurred during the eye movement but was obscured by the eye movement artifact
in the original EEG traces. Close inspection of the EEG records (Fig . 1b) confirms
its presence in the raw data. ICA also reveals the EEG 'contamination' appearing
in the EOG electrodes (right) . By contrast, the 'corrected ' EEG resulting from
multiple-lag regression on this data shows no sign of 8 Hz activity at Fp1 (Fig .
1b). Here, regression was performed only when the artifact was detected (1-sec
surrounding the EOG peak) , since otherwise a large amount of EEG activity would
also have been regressed out during periods without eye movements.
4.2
Muscle artifacts
Left and right temporal muscle activity in the data are concentrated in ICA components 14 and 15 (Fig. la, right middle). Removing them from the data (right)
reveals underlying EEG activity at temporal sites T3 and T4 that had been masked
by muscle activity in the raw data (left). The signal at T3 (Fig. 1c left) sums muscle
activity from component 14 (center) and underlying EEG activity. Spectral analysis of the two records (right) shows a large amount of overlap between their power
spectra, so bandpass filtering cannot separate them. ICA component 13 (Fig . la,
left middle) reveals the presence of small periodic muscle spiking (in right frontal
channels, map not shown) that is highly obscured in the original data (left).
[Figure 1 graphic: panels (a) original and corrected EEG with ICA component activations, (b) Fp1 trace corrected using ICA vs. regression, (c) power spectral density of remaining EEG and muscle activity at T3; see caption below.]
Figure 1: A 3-sec portion of an EEG time series (left), corresponding ICA components activations (left middle), scalp maps of four selected components (right
middle), and EEG signals corrected for artifacts according to: (a) ICA with the
four selected components removed (right), or (b) multiple-lag regression on the two
EOG channels. ICA cancels multiple artifacts in all the EEG and EOG channels simultaneously. (c) The EEG record at T3 (left) is the sum of EEG activity recorded
over the left temporal region and muscle activity occurring near the electrode (center). Below 20 Hz, the spectra of remaining EEG (dashed line) and muscle artifact
(dotted line) overlap strongly, whereas ICA separates them by spatial filtering.
4.3
Cardiac contamination and line noise
Figure 2 shows a 5-sec portion of a second EEG time series, five ICA components
that represent artifactual sources, and 'corrected' EEG signals obtained by removing these components. Eye blink artifacts at 0.5, 2.0 and 4.7 sec (left) are detected
and isolated to ICA component 1 (middle left), even though the training data contains no EOG reference channel. The scalp map of the component captures the
spread of EOG activity to frontal sites. Component 5 represents horizontal eye
movements, while component 2 reveals the presence of small periodic muscle spiking in left frontal channels which is hard to see in the raw data. Line noise has a
sub-Gaussian distribution and so could not be clearly isolated by earlier versions of
the algorithm [3, 12]. By contrast, the new algorithm effectively concentrates the
line noise present in nearly all the channels into ICA component 3. The widespread
cardiac contamination in the EEG data (left) is concentrated in ICA component
4. After eliminating these five artifactual components, the 'corrected' EEG data
(right) are largely free of these artifacts.
5
Discussion and Conclusions
ICA appears to be an effective and generally applicable method for removing known
artifacts from EEG records. There are several advantages of the method: (1) ICA
is computationally efficient. Although it requires more computation than the algorithm used in [15, 12], the extended ICA algorithm is effective even on large EEG
data sets. (2) ICA is generally applicable to removal of a wide variety of EEG artifacts. (3) A simple analysis simultaneously separates both the EEG and its artifacts
into independent components based on the statistics of the data, without relying
on the availability of 'clean' reference channels. This avoids the problem of mutual
contamination between regressing and regressed channels. (4) No arbitrary thresholds (variable across sessions) are needed to determine when regression should be
performed. (5) Once the training is complete, artifact-free EEG records can then
be derived by eliminating the contributions of the artifactual sources. However,
the results of ICA are meaningful only when the amount of data and number of
channels are large enough. Future work should determine the minimum data length
and number of channels needed to remove artifacts of various types.
Acknowledgement
This report was supported in part by grants from the Office of Naval Research. The
views expressed in this article are those of the authors and do not reflect the official
policy or position of the Department of the Navy, Department of Defense, or the
U.S. Government. Dr. McKeown is supported by a grant from the Heart & Stroke
Foundation of Ontario.
References
[1] J.F. Peters (1967). Surface electrical fields generated by eye movement and eye blink
potentials over the scalp, J. EEG Technol., 7:27-40.
[2] P.J. Oster & J.A. Stern (1980). Measurement of eye movement electrooculography,
In: Techniques in Psychophysiology, Wiley, Chichester, 275-309.
[3] A.J. Bell & T.J. Sejnowski (1995). An information-maximization approach to blind
separation and blind deconvolution, Neural Computation 7:1129-1159.
[4] T.W. Lee and T. Sejnowski (1997). Independent Component Analysis for SubGaussian and Super-Gaussian Mixtures, Proc. 4th Joint Symp. Neural Computation
7:132-9.
[5] P. Berg & M. Scherg (1991) Dipole models of eye movements and blinks, Electroencephalog. din. Neurophysiolog. 79:36-44.
[Figure 2 graphic: original EEG (left), selected ICA component activations (center), corrected EEG (right); see caption below.]
Figure 2: (left) A 5-sec portion of an EEG time series. (center) ICA components
accounting for eye movements, cardiac signals, and line noise sources. (right) The
same EEG signals ' corrected' for artifacts by removing the five selected components.
[6] S.A. Hillyard & R. Galambos (1970) . Eye-movement artifact in the CNV, Electroencephalog. clin. N europhysiolog. 28: 173-182.
[7] R. Verleger, T . Gasser & 1. Mocks (1982). Correction of EOG artifacts in event-related
potentials ofEEG: Aspects ofreliability and validity, Psychoph., 19(4):472-80 .
[8] 1.1. Whitton. F. Lue & H. Moldofsky (1978) . A spectral method for removing eyemovement artifacts from the EEG. Electroencephalog. din. Neurophysiolog.44:735-41.
[9] 1.C. Woestenburg, M.N. Verbaten & 1.L. Slangen (1983). The removal of the eyemovement artifact from the EEG by regression analysis in the frequency domain ,
Biological Psychology 16:127-47.
[10] T.C. Weerts & P.l. Lang (1973). The effects of eye fixation and stimulus and response
location on the contingent negative variation (CNV), Biological Psychology1(1):1-19.
[11] D.A. Overton & C . Shagass (1969). Distribution of eye movement and eye blink
potentials over the scalp, Electroencephalog. clin. Neurophysiolog. 27 :546.
[12] S. Makeig, A.l . Bell, T-P lung, T.l. Sejnowski (1996) Independent Component Analysis of Electroencephalographic Data, In: Advances in Neural Information Processing
Systems 8:145-51.
[13] S. Amari, A. Cichocki & H. Yang (1996) A new learning algorithm for blind signal
separation, In: Advances in Neural Information Processing Systems, 8:757-63.
[14] M Girolami & C Fyfe (1997) Generalized Independent Component Analysis through
Unsupervised Learning with Emergent Bussgang Properties. in Proc. IEEE International Conference on Neural Networks, 1788-91.
[15] A.l. Bell & T.l. Sejnowski (1995). Fast blind separation based on information theory,
in Proc. Intern. Symp. on Nonlinear Theory and Applications (NOLTA) 1:43-7.
[16] S. Makeig, T-P lung, D. Ghahremani & T.l. Sejnowski (1996). Independent Component Analysis of Simulated ERP Data, Tech . Rep. INC-9606, Institute for Neural
Computation, San Diego, CA.
[17] 1.L. Kenemans, P. Molenaar, M.N. Verbaten & 1.1. Slangen (1991). Removal of the
ocular artifact from the EEG: a comparison of time and frequency domain methods
with simulated and real data, Psychoph., 28(1):114-21.
| 1343 |@word middle:7 version:2 eliminating:5 inversion:1 confirms:1 simulation:1 decomposition:2 accounting:1 initial:1 series:5 contains:1 molenaar:1 current:1 comparing:1 activation:6 lang:1 numerical:1 wx:1 remove:8 lue:1 selected:5 inspection:1 record:10 filtered:1 provides:1 location:1 five:3 unacceptable:1 along:1 fixation:1 symp:2 introduce:1 ica:24 frequently:1 mock:1 brain:1 ol:1 relying:1 little:1 becomes:1 project:1 bounded:1 underlying:2 overcompensation:1 medium:1 minimizes:1 electroencephalog:4 impractical:2 temporal:4 questionable:1 makeig:6 medical:1 grant:2 negligible:1 cornea:1 tends:3 scherg:2 limited:2 locked:1 statistically:1 hughes:1 lost:1 practice:1 edgeworth:1 bell:4 thought:2 projection:1 cannot:2 undesirable:1 onto:2 close:1 applying:1 humphries:2 map:4 center:4 maximizing:2 go:2 dipole:4 bussgang:1 rule:4 variation:1 diego:4 play:1 origin:1 role:1 ft:1 electrical:1 capture:1 region:1 movement:20 contamination:5 decrease:2 removed:1 ui:7 depend:1 segment:2 joint:3 differently:1 emergent:1 various:2 surrounding:1 fast:1 effective:4 sejnowski:8 detected:2 fyfe:2 navy:1 lag:3 larger:1 otherwise:1 amari:1 statistic:2 advantage:1 rr:1 kurtosis:1 propose:1 subtracting:1 p4:4 relevant:2 rapidly:1 mixing:1 ontario:1 slangen:2 convergence:1 electrode:6 double:1 r1:1 produce:1 mckeown:5 derive:1 ij:1 involves:1 implies:1 indicate:1 girolami:2 differ:2 eog1:2 waveform:1 concentrate:1 f4:3 material:1 require:1 government:1 biological:3 cnv:2 correction:2 nonorthogonal:1 overcompensate:1 a2:2 f7:3 proc:3 applicable:4 tanh:4 iw:2 clearly:1 sensor:3 gaussian:6 super:6 office:1 derived:5 naval:2 electroencephalographic:3 mainly:1 tech:1 contrast:2 detect:1 ved:1 dependent:1 lj:1 iu:1 among:1 orientation:1 priori:1 spatial:6 mutual:2 equal:1 once:1 f3:3 having:2 field:1 sampling:1 represents:1 cancel:1 nearly:1 unsupervised:1 future:1 contaminated:1 report:1 stimulus:1 serious:1 inherent:1 employ:1 gamma:1 simultaneously:2 conductance:1 interest:1 highly:1 regressing:4 severe:1 weakness:1 chichester:1 mixture:3 overton:1 neurophysiolog:3 necessary:1 respective:1 taylor:1 isolating:1 isolated:3 obscured:2 column:1 earlier:1 maximization:1 vertex:1 uniform:1 masked:1 delay:1 too:1 conduction:1 periodic:2 density:4 international:2 peak:1 lee:5 terrence:1 infomax:1 regressor:1 invertible:1 reflect:2 satisfied:1 recorded:4 choose:1 slowly:1 dr:1 account:2 potential:3 alteration:1 sec:10 availability:1 inc:1 blind:5 performed:6 view:1 lab:1 portion:6 mastoid:1 lung:5 fp1:5 contribution:2 minimize:1 variance:2 largely:1 t3:7 identify:1 ofthe:1 blink:13 raw:4 rejecting:2 produced:1 accurately:1 lu:1 confirmed:1 processor:2 stroke:1 ping:1 simultaneous:1 whenever:1 complicate:1 frequency:8 ocular:2 hwnphries:2 galambos:1 ut:1 appears:1 bidirectional:1 higher:1 psychophysiology:1 response:1 done:1 box:2 strongly:1 though:1 horizontal:1 nonlinear:2 propagation:1 widespread:1 artifact:49 effect:3 validity:1 contain:1 requiring:1 normalized:1 hence:1 din:2 spatially:1 ll:2 during:3 excitation:1 won:1 generalized:1 distracting:1 pdf:1 complete:1 meaning:1 instantaneous:1 tzyy:1 sigmoid:1 spiking:2 volume:1 jl:1 interpretation:1 approximates:1 occurred:1 measurement:1 session:1 nonlinearity:2 eog2:2 had:1 hillyard:1 surface:1 jolla:1 rep:1 arbitrarily:1 success:1 kenemans:2 yi:2 muscle:17 preserving:1 minimum:1 contingent:1 determine:2 maximize:1 colin:2 ud:1 dashed:1 ii:1 period:1 multiple:7 signal:22 hebbian:2 clinical:1 regression:15 dipolar:1 patient:1 blindly:1 iteration:1 represent:1 
cz:2 achieved:1 dec:1 lea:10 whereas:2 source:21 unlike:1 recording:10 hz:6 elegant:1 effectiveness:1 subgaussian:1 near:3 presence:4 ideal:1 yang:1 enough:1 variety:4 independence:1 psychology:1 pca:1 defense:1 peter:1 cause:1 jj:1 matlab:1 generally:6 se:1 amount:4 band:1 gasser:1 concentrated:2 reduced:1 fz:2 dotted:1 sign:3 neuroscience:1 estimated:1 group:2 four:4 threshold:1 k4:1 erp:1 clean:3 wtw:2 f8:3 sum:2 inverse:1 reasonable:1 separation:5 p3:4 fl:2 activity:23 scalp:13 strength:1 occur:1 placement:1 regressed:2 aspect:1 speed:1 min:1 performing:2 martin:2 relatively:2 department:3 lca:2 according:2 across:1 cardiac:5 intuitively:1 gradually:1 projecting:1 heart:3 computationally:1 needed:3 know:1 available:1 gaussians:4 spectral:3 cancelled:1 appearing:1 batch:1 original:4 assumes:1 remaining:3 clin:2 unsuitable:1 giving:2 contact:1 gradient:1 distance:1 separate:6 separating:1 simulated:3 collected:1 length:1 ghahremani:1 verbaten:2 fp2:3 favorably:1 trace:1 acknow:1 negative:1 stern:1 policy:1 sejnowsld:1 neuron:1 howard:1 inevitably:1 anti:1 technol:1 extended:6 neurobiology:1 arbitrary:1 c3:3 c4:4 california:1 below:1 scott:2 iragui:3 terry:1 power:2 overlap:2 event:1 natural:1 rely:1 psychology1:1 representing:1 tewon:1 eye:27 temporally:4 cichocki:1 health:1 eog:21 oster:1 epoch:1 removal:8 contributing:1 relative:1 loss:1 topography:3 filtering:3 foundation:1 article:1 nolta:1 share:1 pi:3 squashing:1 row:3 course:5 jung:1 placed:1 supported:2 free:3 t6:3 institute:3 wide:4 characterizing:1 eyelid:1 distributed:1 cortical:1 avoids:2 cumulative:1 t5:3 author:1 commonly:2 san:4 approximate:1 alpha:1 rtl:1 reveals:5 spatio:1 spectrum:2 decomposes:1 channel:19 transfer:6 ca:4 eeg:77 expansion:1 domain:10 official:1 spread:4 noise:14 arise:2 site:5 referred:1 fig:5 salk:2 wiley:1 ny:1 sub:5 position:1 bandpass:1 removing:12 uut:1 pz:3 evidence:1 deconvolution:1 essential:2 effectively:3 te:1 occurring:1 t4:4 rejection:1 entropy:2 generalizing:1 artifactual:6 appearance:1 intern:1 expressed:1 contained:2 saccade:5 cdf:1 considerable:1 change:1 vicente:1 acausal:1 hard:1 corrected:10 preset:1 la:3 meaningful:1 berg:2 frontal:7 evaluate:1 phenomenon:2 ex:1 |
377 | 1,344 | Analytical study of the interplay between
architecture and predictability
Avner Priel, Ido Kanter, David A. Kessler
Minerva Center and Department of Physics, Bar Ilan University,
Ramat-Gan 52900, Israel.
e-mail: priel@mail.cc.biu.ac.il
(web-page: http://faculty.biu.ac.il/~priel)
Abstract
We study model feed forward networks as time series predictors
in the stationary limit. The focus is on complex, yet non-chaotic,
behavior. The main question we address is whether the asymptotic
behavior is governed by the architecture, regardless the details of
the weights . We find hierarchies among classes of architectures
with respect to the attract or dimension of the long term sequence
they are capable of generating; larger number of hidden units can
generate higher dimensional attractors. In the case of a perceptron,
we develop the stationary solution for general weights, and show
that the flow is typically one dimensional. The relaxation time
from an arbitrary initial condition to the stationary solution is
found to scale linearly with the size of the network. In multilayer
networks, the number of hidden units gives bounds on the number
and dimension of the possible attractors. We conclude that long
term prediction (in the non-chaotic regime) with such models is
governed by attractor dynamics related to the architecture.
Neural networks provide an important tool as model free estimators for the solution
of problems when the real model is unknown, or weakly known. In the last decade
there has been a growing interest in the application of such tools in the area of time
series prediction (see Weigand and Gershenfeld, 1994). In this paper we analyse a
typical class of architectures used in this field, i.e. a feed forward network governed
by the following dynamic rule:
S_1^{t+1} = S_{out};    S_j^{t+1} = S_{j-1}^{t},    j = 2, \ldots, N    (1)
where Sout is the network's output at time step t and Sj are the inputs at that time;
N is the size of the delayed input vector. The rationale behind using time delayed
vectors as inputs is the theory of state space reconstruction of a dynamic system
using delay coordinates (Takens 1981, Sauer Yorke and Casdagli 1991). This theory address the problem of reproducing a set of states associated with the dynamic
system using vectors obtained from the measured time series, and is widely used for
time series analysis. A similar architecture incorporating time delays is the TDNN
- time-delay neural network with a recurrent loop (Waibel et. al. 1989). This type
of networks is known to be appropriate for learning temporal sequences, e.g. speech
signal. In the context of time series, it is mostly used for short term predictions. Our
analysis focuses on the various long-time properties of the sequence generated by a
given architecture and the interplay between them. The aim of such an investigation is the understanding and characterization of the long term sequences generated
by such architectures, and the time scale to reach this asymptotic behavior. Such
knowledge is necessary to define adequate measures for the transition between a
locally dependent prediction and the long term behavior. Though some work has
been done on characterization of a dynamic system from its time series using neural networks, not much analytical results that connect architecture and long-time
prediction are available (see M. Mozer in Weigand and Gershenfeld, 1994). Nevertheless, practical considerations for choosing the architecture were investigated
extensively (Weigand and Gershenfeld, 1994 and references therein). It has been
shown that such networks are capable of generating chaotic like sequences. While
it is possible to reconstruct approximately the phase space of chaotic attractors (at
least in low dimension), it is clear that prediction of chaotic sequences is limited
by the very nature of such systems, namely the divergence of the distance between
nearby trajectories. Therefore one can only speak about short time predictions with
respect to such systems. Our focus is the ability to generate complex sequences,
and the relation between architecture and the dimension of such sequences.
1
Perceptron
We begin with a study of the simplest feed forward network, the perceptron. We
analyse a perceptron whose output Sout at time step t is given by:
S_{out} = \tanh\left[ \beta \left( \sum_{j=1}^{N} (W_j + W_0) S_j \right) \right]    (2)
where \beta is a gain parameter and N is the input size. The bias term W_0 plays the same
role as the common 'external field' used in the literature, while preserving the same
qualitative asymptotic solution. In a previous work (Eisenstein et. al. , 1995) it was
found that the stationary state (of a similar architecture but with a "sign" activation
function instead of the "tanh", equivalently \beta \to \infty) is influenced primarily by one
of the larger Fourier components in the power spectrum of the weights vector W
of the perceptron. This observation motivates the following representation of the
vector W. Let us start with the case of a vector that consists of a single biased
Fourier component of the form:
W_j = a \cos(2\pi K j / N),    j = 1, \ldots, N;    W_0 = b    (3)
where a, b are constants and K is a positive integer. This case is generalized later on,
however for clarity we treat first the simple case. Note that the vector W can always
be represented as a Fourier decomposition of its values. The stationary solution for
the sequence (S_l) produced by the output of the perceptron, when inserting this choice of the weights into equation (2), can be shown to be of the form:

S_l = \tanh\left[ A(\beta) \cos(2\pi K l / N) + B(\beta) \right]    (4)

There are two non-zero solutions possible for the variables (A, B):
A = \tfrac{1}{2} \beta N a \sum_{p=1}^{\infty} D(p) (A/2)^{2p-1} (p!)^{-2},    B = 0
B = \beta N b \sum_{p=1}^{\infty} D(p) B^{2p-1} ((2p)!)^{-1},    A = 0    (5)
where D(p) = 2^{2p} (2^{2p} - 1) B_{2p} and the B_{2p} are the Bernoulli numbers. Analysis
of equations (5) reveals the following behavior as a function of the parameter \beta. Each of the variables is the amplitude of an attractor. The attractor represented by (A \neq 0, B = 0) is a limit cycle while the attractor represented by (B \neq 0, A = 0) is a fixed point of the dynamics. The onset of each of the attractors A (B) is at \beta_{c1} = 2(aN)^{-1} (\beta_{c2} = (bN)^{-1}) respectively. One can identify three regimes: (1) \beta < \beta_{c1,c2} - the stable solution is S_l = 0. (2) \min(\beta_{c1}, \beta_{c2}) < \beta < \max(\beta_{c1}, \beta_{c2}) - the system flows for all initial conditions into the attractor whose \beta_c is smaller. (3) \beta > \beta_{c1,c2} - depending on the initial condition of the input vector, the system flows into one of the attractors, namely, the stationary state is either a fixed point or a periodic flow. \beta_{c1} is known as a Hopf bifurcation point. Naturally, the attractor whose \beta_c is smaller has a larger basin of attraction, hence it is more probable to attract the flow (in the third regime).
Figure 1: Embedding (S_{l+1}, S_l) of a sequence generated by a perceptron whose weights follow eqs. (3) and (6). Periodic sequence (outer curve): N = 128, K = 17, b = 0.3, \beta = 1/40; quasi-periodic (inner): K = 17, \phi = 0.123, \beta = 1/45, respectively.
Next we discuss the more general case where the weights of eq. (3) include an arbitrary phase shift of the form:

W_j = a \cos(2\pi K j / N - \pi \phi),    \phi \in (-1, 1)    (6)

The leading term of the stationary solution in the limit N \gg 1 is of the form:

S_l = \tanh\left[ A(\beta) \cos(2\pi (K - \phi) l / N) + B(\beta) \right]    (7)
where the higher harmonic corrections are of O( 1/ K). A note should be made here
that the phase shift in the weights is manifested as a frequency shift in the solution.
In addition, the attractor associated with A \neq 0 is now a quasi-periodic flow in the generic case when \phi is irrational. The onset value of the fixed point (\beta_{c2}) is the same as before, however the onset of the quasi-periodic orbit is \beta_{c1} = \frac{\pi\phi}{\sin(\pi\phi)} \cdot 2(aN)^{-1}.
The variables A, B follow similar equations to (5):
A = \tfrac{1}{2} \beta N a \, \frac{\sin(\pi\phi)}{\pi\phi} \sum_{p=1}^{\infty} D(p) (A/2)^{2p-1} (p!)^{-2},    B = 0;    A = 0.    (8)
The three regimes discussed above appear in this case as well. Figure 1 shows the
attractor associated with (A \neq 0, B = 0) for the two cases where the series generated by the output is embedded as a sequence of two dimensional vectors (S_{l+1}, S_l).
The general weights can be written as a combination of their Fourier components
with different K's and \phi's:

W_j = \sum_{i=1}^{m} a_i \cos(2\pi K_i j / N - \pi \phi_i),    \phi_i \in (-1, 1)    (9)

When the different K's are not integer divisors of each other, the general solution is similar to that described above:

S_l = \tanh\left[ \sum_{i=1}^{m} A_i(\beta) \cos(2\pi (K_i - \phi_i) l / N) + B(\beta) \right]    (10)
where m is the number of relevant Fourier components. As above, the variables
A_i, B are coupled via self-consistent equations. Nevertheless, the generic stationary flow is one of the possible attractors, depending on \beta and the initial condition; i.e. (A_q \neq 0, A_i = 0 \ \forall i \neq q, B = 0) or (B \neq 0, A_i = 0). By now we can conclude that the generic flow for the perceptron is one of three: a fixed point, a periodic cycle or a quasi-periodic flow. The first two have zero dimension while the last describes a one dimensional flow. We stress that more complex flows are possible even in our solution (eq. 10), however they require a special relation between the frequencies and a very high value of \beta, typically more than an order of magnitude greater than the bifurcation value.
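To make the closed-loop dynamics of eqs. (1)-(2) concrete, here is a short sketch (ours; the parameter values are illustrative only) that generates a long output sequence from single-component weights as in eq. (6); embedding consecutive outputs reproduces a Figure-1-style picture.

import numpy as np

def generate_sequence(N=128, K=17, a=1.0, b=0.0, phi=0.123, beta=1.0 / 45, steps=5000, seed=0):
    """Closed-loop perceptron: W_j = a cos(2 pi K j / N - pi phi), bias W_0 = b."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, N + 1)
    W = a * np.cos(2 * np.pi * K * j / N - np.pi * phi)
    S = rng.uniform(-1.0, 1.0, N)                     # arbitrary initial delayed-input vector
    out = np.empty(steps)
    for t in range(steps):
        s_out = np.tanh(beta * np.sum((W + b) * S))   # eq. (2)
        S[1:] = S[:-1]                                # eq. (1): shift the delay line
        S[0] = s_out
        out[t] = s_out
    return out   # plot (out[1:], out[:-1]) to inspect the attractor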
2
Relaxation time
At this stage the reader might wonder about the relation between the asymptotic
results presented above and the ability of such a model to predict. In fact, the
practical use of feed forward networks in time series prediction is divided into two
phases. In the first phase, the network is trained in an open loop using a given time
series. In the second phase, the network operates in a closed loop and the sequence it
generates is also used for the future predictions. Hence, it is clear from our analysis
that eventually the network will be driven to one of the attractors. The relevant
We
question is how long does it takes to arrive at such asymptotic behavior?
shall see that the characteristic time is governed by the gap between the largest and
the second largest eigenvalues of the linearized map. Let us start by reformulating
= (Sf, s~, ... ,Sj.,,)
eqs. (1,2) in a matrix form, i.e. we linearize the map. Denote
-t
-t
-t+1
and (S )' is the transposed vector. The map is then T(S)' = (S )' where
T = \begin{pmatrix} c_1 & c_2 & \cdots & c_{N-1} & c_N \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}    (11)
The first row of T gives the next output value S_1^{t+1}, while the rest of the matrix is
just the shift defined by eq. (1) . This matrix is known as the "companion matrix"
(e.g. Ralston and Rabinowitz, 1978). The characteristic function of T can be
written as follows:
\beta \sum_{n=1}^{N} c_n \lambda^{-n} = 1    (12)
from which it is possible to extract the eigenvalues. At \beta = \beta_c the largest eigenvalue of T is |\lambda_1| = 1. Denote the second largest eigenvalue \lambda_2 such that |\lambda_2| = 1 - \Delta.
Figure 2: Scaling of \Delta for a perceptron with two Fourier components (eq. 9), with a_i = 1, K_1 = 3, \phi_1 = 0.121, K_2 = 7, \phi_2 = 0, W_0 = 0.3. The dashed line is a linear fit of 0.1/N, N = 50, \ldots, 400.
Applying T \tau times to an initial state vector results in a vector whose second largest component is of order:

|\lambda_2|^{\tau} = (1 - \Delta)^{\tau} \approx e^{-\tau \Delta}    (13)

therefore we can define the characteristic relaxation time in the vicinity of an attractor to be \tau = \Delta^{-1}.¹
We have analysed eq. (12) numerically for various cases of c_i, e.g. W_i composed of one or two Fourier components. In all the cases \beta was chosen to be the minimal \beta_c to ensure that the linearized form is valid. We found that \Delta \sim 1/N. Figure 2 depicts one example of two Fourier components. Next, we have simulated the network and measured the average time (\tau_s) it takes to flow into an attractor starting from an arbitrary initial condition. The following simulations support the analytical result (\tau \sim N) for general (random) weights and high gain (\beta) value as well.
The threshold we apply for the decision whether the flow is already close enough
to the attractor is the ratio between the component with the largest power in the
spectrum and the total power spectrum of the current state \vec{S}^t, which should
exceed 0.95. The results presented in Figure 3 are an average over 100 samples
started from random initial condition. The weights are taken at random, however
we add a dominant Fourier component with no phase to control the bifurcation
point more easily. This component has an amplitude which is about twice the other
components to make sure that its bifurcation point is the smallest. We observe a
clear linear relation between this time and N (\tau_s \sim N). The slope depends on the
actual values of the weights, however the power law scaling does not change.
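The eigenvalue-gap argument can be checked directly from the companion matrix (11); the sketch below (ours) takes a vector of linearized coefficients c_n, which in practice would be \beta times the effective couplings at the attractor, and returns the estimated relaxation time \tau \approx \Delta^{-1}.

import numpy as np

def relaxation_time(c):
    """c: length-N array forming the first row of the companion matrix T of eq. (11)."""
    N = len(c)
    T = np.zeros((N, N))
    T[0, :] = c                      # linearized output coefficients
    T[1:, :-1] = np.eye(N - 1)       # sub-diagonal of ones: the shift of eq. (1)
    mags = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    delta = mags[0] - mags[1]        # gap between the two largest eigenvalue moduli
    return 1.0 / delta               # tau ~ Delta^{-1}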
On general principles, we expect the analytically derived scaling law for \Delta to be
valid even beyond the linear regime. Indeed the numerical simulations (Figure 3)
support this conjecture.
3
Multilayer networks
For simplicity, we restrict the present analysis to a multilayer network (MLN) with
N inputs, H hidden units and a single linear output, however this restriction can
be removed, e.g. nonlinear output and more hidden layers. The units in the hidden
layer are the perceptrons discussed above and the output is given by:
S_{out} = \sum_{m=1}^{H} \tanh\left[ \beta \sum_{j=1}^{N} (W_j^m + W_0^m) S_j \right]    (14)

¹Note that if one demands the l.h.s. of eq. (13) to be of O(\Delta), then \tau \sim \Delta^{-1} \log(\Delta^{-1}).
Figure 3: Scaling of \tau_s for random weights with a dominant component at K = 7, \phi = 0, a = 1; all other amplitudes are randomly taken between (0, 0.5) and the phases are random as well. \beta = 3.2/N. The dashed line is a linear fit of cN, c = 2.73 \pm 0.03. N = 16, \ldots, 256.
The dynamic rule is defined by eq. (1). First consider the case where the weights
of each hidden unit are of the form described by eq. (6), i.e. each hidden unit has only one (possibly biased) Fourier component:

W_j^m = a_m \cos(2\pi K_m j / N - \pi \phi_m),    m = 1, \ldots, H.    (15)
Following a similar treatment as for the perceptron, the stationary solution is a
combination of the perceptron-like solution:
S_l = \sum_{m=1}^{H} \tanh\left[ A_m(\beta) \cos(2\pi (K_m - \phi_m) l / N) + B_m(\beta) \right]    (16)
The variables Am, Bm are the solution of the self consistent coupled equations, however by contrast with the single perceptron, each hidden unit operates independently
and can potentially develop an attractor of the type described in section 1. The
number of attractors depends on \beta, with a maximum of H attractors. The number of non-zero A_m's defines the attractor's dimension in the generic case of irrational \phi's associated with them. If different units do not share Fourier components with
a common divisor or harmonics of one another, it is easy to define the quantitative
result, otherwise, one has to analyse the coupled equations more carefully to find
the exact value of the variables. Nevertheless, each hidden unit exhibits only a
single highly dominant component (A \neq 0 or B \neq 0).
Generalization of this result to more than a single biased Fourier component is
straightforward. Each vector is of the form described in eq. (9) plus an index for the
hidden unit. The solution is a combination of the general perceptron solution, eq.
(10). This solution is much more involved and the coupled equations are complicated
but careful study of them reveals the same conclusion, namely each hidden unit
possesses a single dominant Fourier component (possibly with several other much smaller ones due to the other components in the vector). As the gain parameter \beta becomes larger, more components become available and the number of possible
attractors increases. For a very large value it is possible that higher harmonics
from different hidden units might interfere and complicate considerably the solution.
Still, one can trace the origin of this behavior by close inspection of the fields in
each hidden unit.
We have also measured the relaxation time associated with MLN's in simulations.
The preliminary results are similar to the perceptron, i.e. \tau_s \sim N, but the constant
prefactor is larger when the weights consist of more Fourier components.
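For the multilayer case, the same closed-loop experiment extends directly; the sketch below (ours) assumes the output is the plain sum of the hidden tanh fields, which is consistent with the stationary solution (16) but is an assumption on our part regarding eq. (14).

import numpy as np

def generate_mln_sequence(N=128, K=(3, 17), a=(1.0, 1.0), phi=(0.0, 0.123),
                          beta=0.05, steps=5000, seed=0):
    """Closed-loop MLN: one Fourier component per hidden unit (eq. (15)), linear output."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, N + 1)
    W = np.stack([a[m] * np.cos(2 * np.pi * K[m] * j / N - np.pi * phi[m])
                  for m in range(len(K))])          # (H, N) hidden weights
    S = rng.uniform(-1.0, 1.0, N)
    out = np.empty(steps)
    for t in range(steps):
        s_out = np.sum(np.tanh(beta * (W @ S)))     # sum of the H hidden fields
        S[1:] = S[:-1]                              # delay-line shift, eq. (1)
        S[0] = s_out
        out[t] = s_out
    return out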
4 Discussion
Neural networks were proved to be universal approximators (e.g. Hornik, 1991),
hence they are capable of approximating the prediction function of the delay coordinate vector. The conclusion should be that prediction is indeed possible. This
observation holds only for short times in general. As we have shown, long time
predictions are governed by the attractor dynamics described above. The results
point to the conclusion that the asymptotic behavior of these networks is dictated
by the architecture and not by the details of the weights. Moreover, the attractor dimension of the asymptotic sequence is typically bounded by the number of
hidden units in the first layer (assuming the network does not contain internal delays). To prevent any misunderstanding we note again that this result refers to
the asymptotic behavior although the short term sequence can approximate a very
complicated attractor.
The main result can be interpreted as follows. Since the network is able to approximate the prediction function, the initial condition is followed by reasonable
predictions which are the mappings from the vicinity of the original manifold created by the network. As the trajectory evolves, it flows to one of the attractors
described above and the predictions are no longer valid. In other words, the initial
combination of solutions described in eq. (10) or its extension to MLN (with an
arbitrary number of non-zero variables, A's or B's) serves as the approximate mapping. Evolution of this approximation is manifested in the variables of the solution,
which eventually are attracted to a stable attractor (in the non-chaotic regime).
The time scale for the transition is given by the relaxation time developed above.
The formal study can be applied for practical purposes in two ways. First, one can take
this behavior into account by probing the generated sequence and looking for its
indications; one such indication is stationarity of the power spectrum. Second, one
can incorporate ideas from local linear models in the reconstructed space to restrict
the inputs in such a way that they always remain in the vicinity of the original
manifold (Sauer, in Weigand and Gershenfeld, 1994).
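As a rough illustration of the first diagnostic, the following sketch (a minimal example, not taken from the paper; the window length, tolerance and the synthetic test signal are arbitrary choices) compares the power spectrum of a generated sequence over successive windows and reports the point where it stops changing, which signals that the closed-loop dynamics have settled onto their attractor.

```python
import numpy as np

def spectrum(x):
    """Normalized power spectrum of a 1-D signal."""
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return p / (p.sum() + 1e-12)

def spectrum_settling_time(seq, window=256, tol=1e-2):
    """First sample index after which the power spectra of consecutive
    windows stop changing (attractor reached), or None if they never do."""
    n_windows = len(seq) // window
    prev = None
    for k in range(n_windows):
        cur = spectrum(seq[k * window:(k + 1) * window])
        if prev is not None and np.abs(cur - prev).sum() < tol:
            return k * window
        prev = cur
    return None

# Synthetic "generated" sequence: a noisy transient decaying onto a
# periodic attractor with a single dominant Fourier component (K = 7).
t = np.arange(8192)
seq = np.cos(2 * np.pi * 7 * t / 256) + np.exp(-t / 500) * np.random.randn(len(t))
print(spectrum_settling_time(seq))
```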
Acknowledgments
This research has been supported by the Israel Science Foundation.
References
Weigand, A. S. and Gershenfeld, N. A.; Time Series Prediction, Addison-Wesley, Reading, MA, 1994.
Eisenstein, E., Kanter, I., Kessler, D. A. and Kinzel, W.; Generation and prediction of time series by a neural network, Phys. Rev. Lett. 74(6), (1995).
Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K.; Phoneme recognition using TDNN, IEEE Trans. Acoust., Speech & Signal Proc. 37(3), (1989).
Takens, F.; Detecting strange attractors in turbulence, in Lecture Notes in Mathematics vol. 898, Springer-Verlag, 1981.
Sauer, T., Yorke, J. A. and Casdagli, M.; Embedology, J. Stat. Phys. 65(3), (1991).
Ralston, A. and Rabinowitz, P.; A First Course in Numerical Analysis, McGraw-Hill, 1978.
Hornik, K.; Approximation capabilities of multilayer feedforward networks, Neural Networks 4, (1991).
378 | 1,345 | Multi-modular Associative Memory
Nir Levy
David Horn
School of Physics and Astronomy
Tel-Aviv University Tel Aviv 69978, Israel
Eytan Ruppin
Departments of Computer Science & Physiology
Tel-Aviv University Tel Aviv 69978, Israel
Abstract
Motivated by the findings of modular structure in the association
cortex, we study a multi-modular model of associative memory that
can successfully store memory patterns with different levels of activity. We show that the segregation of synaptic conductances into
intra-modular linear and inter-modular nonlinear ones considerably
enhances the network's memory retrieval performance. Compared
with the conventional, single-module associative memory network,
the multi-modular network has two main advantages: It is less susceptible to damage to columnar input, and its response is consistent
with the cognitive data pertaining to category specific impairment.
1
Introduction
Cortical modules were observed in the somatosensory and visual cortices a few
decades ago. These modules differ in their structure and functioning but are likely to
be an elementary unit of processing in the mammalian cortex. Within each module
the neurons are interconnected. Input and output fibers from and to other cortical
modules and subcortical areas connect to these neurons. More recently, modules
were also found in the association cortex [1] where memory processes supposedly
take place. Ignoring the modular structure of the cortex, most theoretical models
of associative memory have treated single module networks. This paper develops
a novel multi-modular network that mimics the modular structure of the cortex.
In this framework we investigate the computational rational behind cortical multimodular organization, in the realm of memory processing.
Does multi-modular structure lead to computational advantages? Naturally one
may think that modules are necessary in order to accommodate memories of different coding levels. We show in the next section that this is not the case, since
one may accommodate such memories in a standard sparse coding network . In
fact, when trying to capture the same results in a modular network we run into
problems, as shown in the third section: If both inter and intra modular synapses
have linear characteristics, the network can sustain memory patterns with only a
limited range of activity levels. The solution proposed here is to distinguish between intra-modular and inter-modular couplings, endowing the inter-modular ones
with nonlinear characteristics. From a computational point of view, this leads to
a modular network that has a large capacity for memories with different coding
levels. The resulting network is particularly stable with regard to damage to modular inputs. From a cognitive perspective it is consistent with the data concerning
category specific impairment.
2 Homogeneous Network
We study an excitatory-inhibitory associative memory network [2], having N excitatory neurons. We assume that the network stores M_1 memory patterns η^μ of
sparse coding level p and M_2 patterns ξ^ν with coding level f such that p < f << 1.
The synaptic efficacy J_ij between the jth (presynaptic) neuron and the ith (postsynaptic) neuron is chosen in the Hebbian manner

J_{ij} = \frac{1}{Np} \sum_{\mu=1}^{M_1} \eta^\mu_i \eta^\mu_j + \frac{1}{Np} \sum_{\nu=1}^{M_2} \xi^\nu_i \xi^\nu_j ,    (1)
The updating rule for the activity state V_i of the ith binary neuron is given by

V_i(t+1) = \Theta\bigl( h_i(t) - \theta \bigr)    (2)

where Θ is the step function and θ is the threshold.

h_i(t) = \hat{h}_i(t) - \frac{\gamma}{p} Q(t)    (3)
is the local field, or membrane potential. It includes the excitatory Hebbian coupling
of all other excitatory neurons,

\hat{h}_i(t) = \sum_{j \neq i} J_{ij} V_j(t) ,    (4)
and global inhibition that is proportional to the total activity of the excitatory
neurons,

Q(t) = \frac{1}{N} \sum_{j} V_j(t) .    (5)
The overlap m(t) between the network activity and the memory patterns is defined
for the two memory populations as

m^\xi_\nu(t) = \frac{1}{Nf} \sum_{j} \xi^\nu_j V_j(t) ,    (6)

and analogously for the η population, with f replaced by p.
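A minimal NumPy sketch of this update rule and the overlap measure is given below. It is an illustration only, not code from the paper; the network size, coding levels, threshold and inhibition strength are arbitrary choices, and the stored patterns are drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M1, M2, p, f = 1000, 5, 5, 0.05, 0.15
theta, gamma = 0.8, 1.0

# Sparse binary memory patterns: eta with coding level p, xi with coding level f.
eta = (rng.random((M1, N)) < p).astype(float)
xi = (rng.random((M2, N)) < f).astype(float)

# Hebbian couplings, eq. (1), with no self-coupling.
J = (eta.T @ eta + xi.T @ xi) / (N * p)
np.fill_diagonal(J, 0.0)

def update(V):
    """One synchronous step of eqs. (2)-(5)."""
    Q = V.mean()                      # eq. (5)
    h = J @ V - gamma * Q / p         # eqs. (3)-(4)
    return (h > theta).astype(float)  # eq. (2)

def overlap(pattern, V, coding):
    """Overlap, eq. (6), with coding level p or f."""
    return pattern @ V / (N * coding)

# Retrieval from a corrupted version of the first eta pattern.
V = eta[0].copy()
flip = rng.random(N) < 0.02
V[flip] = 1 - V[flip]
for _ in range(20):
    V = update(V)
print("overlap with eta[0]:", overlap(eta[0], V, p))
```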
The storage capacity α = M/N of this network has two critical capacities: α_c^ξ,
above which the population of ξ^ν patterns is unstable, and α_c^η, above which the
population of η^μ patterns is unstable. We derived equations for the overlap and
total activity of the two populations using mean field analysis. Here we give the
fixed-point equations for the case of M_1 = M_2 = \frac{M}{2} and \gamma = M_1 f^2 + M_2 p^2. The
resulting equations are

m^\eta = \Phi(\,\cdot\,) + \Phi(\,\cdot\,)    (7)

and

Q = p\, m^\eta    (8)

where the arguments of Φ are given by (9) and

\Phi(x) = \int_x^{\infty} \exp\!\left(-\frac{z^2}{2}\right) \frac{dz}{\sqrt{2\pi}} .    (10)

Figure 1: (a) The critical capacity α_c^η vs. f and p for f ≥ p, θ = 0.8 and N = 1000.
(b) (α_c^η − α_c^ξ)/α_c^η versus f and p for the same parameters as in (a). The validity
of these analytical results was tested and verified in simulations.
Next, we look for the critical capacities, α_c^η and α_c^ξ, at which the fixed-point
equations become marginally stable. The results are shown in Figure 1. Figure 1(a)
shows α_c^η vs. the coding levels f and p (f ≥ p). Similar results were obtained for
α_c^ξ. As evident, the critical capacities of both populations are smaller than the one
observed in a homogeneous network in which f = p. One hence necessarily pays a
price for the ability to store patterns with different levels of activity.
Figure 1(b) plots the relative capacity difference (α_c^η − α_c^ξ)/α_c^η vs. f and p. The
function is non-negative, i.e., α_c^η ≥ α_c^ξ for all f and p. Thus, low activity memories
are more stable than high activity ones.
Assuming that high activity codes more features [3], these results seem to be at
odds with the view [3, 4] that memories that contain more semantic features, and
therefore correspond to larger Hebbian cell assemblies, are more stable, such as
concrete versus abstract words. The homogeneous network, in which the memories
with high activity are more susceptible to damage, cannot account for these observations. In the next section we show how a modular network can store memories
with different activity levels and account for this cognitive phenomenon.
3 Modular Network
We study a multi-modular excitatory-inhibitory associative memory network, storing M memory patterns in L modules of N neurons each. The memories are coded
such that in every memory a variable number n of 1 to L modules is active. This
number will be denoted as modular coding. The coding level inside the modules
is sparse and fixed, i.e., each modular Hebbian cell assembly consists of pN active
neurons with p << 1. The synaptic efficacy J_{ij}^{lk} between the jth (presynaptic)
neuron from the kth module and the ith (postsynaptic) neuron from the lth module
is chosen in a Hebbian manner

J_{ij}^{lk} = \frac{1}{Np} \sum_{\mu=1}^{M} \eta^\mu_{il} \eta^\mu_{jk} ,    (11)

where η^μ_{il} are the stored memory patterns. The updating rule for the activity state
V_{il} of the ith binary neuron in the lth module is given by

V_{il}(t+1) = S\bigl( h_{il}(t) - \theta_s \bigr)    (12)

where θ_s is the threshold, and S(x) is a stochastic sigmoid function, taking the
value 1 with probability (1 + e^{-x})^{-1} and 0 otherwise. The neuron's local field, or
membrane potential, has two components,

h_{il}(t) = h_{il}^{internal}(t) + h_{il}^{external}(t) .    (13)
The internal field, h_{il}^{internal}(t), includes the contributions from all other excitatory
neurons that are situated in the lth module, and inhibition that is proportional to
the total modular activity of the excitatory neurons, i.e.,

h_{il}^{internal}(t) = \sum_{j \neq i} J_{ij}^{ll} V_{jl}(t) - \gamma_s Q^l(t) ,    (14)

where

Q^l(t) = \frac{1}{Np} \sum_{j} V_{jl}(t) .    (15)
The external field component, h_{il}^{external}(t), includes the contributions from all
other excitatory neurons that are situated outside the lth module, and inhibition
that is proportional to the total network activity:

h_{il}^{external}(t) = g\!\left( \sum_{k \neq l} \sum_{j} J_{ij}^{lk} V_{jk}(t) - \gamma_d \sum_{k \neq l} Q^k(t) - \theta_d \right) .    (16)
We allow here for the freedom of using a more complicated behavior than the standard
g(x) = x. In fact, as we will see, the linear case is problematic, since only
memory storage with limited modular coding is possible.
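The following sketch (an illustration under assumed parameter values, not code from the paper; the values of γ_s and γ_d in particular are our own choices) shows one synchronous update step of the modular network, with the external field passed through a choice of g that can be either the identity or a sigmoid.

```python
import numpy as np

rng = np.random.default_rng(1)
L_mod, N, M, p = 5, 200, 20, 0.05
theta_s, theta_d, gamma_s, gamma_d = 0.6, 2.0, 0.7, 0.7

# Stored patterns eta[mu, l, i]: each active module holds p*N active neurons.
eta = np.zeros((M, L_mod, N))
for mu in range(M):
    active_modules = rng.choice(L_mod, size=rng.integers(1, L_mod + 1), replace=False)
    for l in active_modules:
        eta[mu, l, rng.choice(N, size=int(p * N), replace=False)] = 1.0

# Hebbian couplings J[l, k, i, j], eq. (11), no self-coupling inside a module.
J = np.einsum('mli,mkj->lkij', eta, eta) / (N * p)
for l in range(L_mod):
    np.fill_diagonal(J[l, l], 0.0)

def g_linear(x):
    return x

def g_sigmoid(x, lam=0.8):
    return lam / (1.0 + np.exp(-x))

def update(V, g):
    """One synchronous step of eqs. (12)-(16); V has shape (L_mod, N)."""
    Q = V.sum(axis=1) / (N * p)                                # eq. (15), per module
    V_new = np.empty_like(V)
    for l in range(L_mod):
        h_int = J[l, l] @ V[l] - gamma_s * Q[l]                # eq. (14)
        h_ext = sum(J[l, k] @ V[k] for k in range(L_mod) if k != l)
        h_ext = g(h_ext - gamma_d * (Q.sum() - Q[l]) - theta_d)  # eq. (16)
        h = h_int + h_ext                                      # eq. (13)
        prob = 1.0 / (1.0 + np.exp(-(h - theta_s)))            # stochastic rule, eq. (12)
        V_new[l] = (rng.random(N) < prob).astype(float)
    return V_new

V = eta[0].copy()
for _ in range(10):
    V = update(V, g_sigmoid)
```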
The retrieval quality at each trial is measured by the overlap function, defined by

m^\mu(t) = \frac{1}{Np\, n^\mu} \sum_{k=1}^{L} \sum_{i=1}^{N} \eta^\mu_{ik} V_{ik}(t) ,    (17)

where n^μ is the modular coding of η^μ.
In the simulations we constructed a network of L = 10 modules, where each module
contains N = 500 neurons. The network stores M = 50 memory patterns randomly
distributed over the modules. Five sets of ten memories each are defined. In each
set the modular coding is distributed homogeneously between one to ten active
modules. The sparse coding level within each module was set to be p = 0.05. Every
simulation experiment is composed of many trials. In each trial we use as initial
condition a corrupted version of a stored memory pattern with error rate of 5%,
and check the network's retrieval after it converges to a stable state.
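A sketch of one such trial, reusing the `update` function and pattern arrays from the previous snippet (again an illustration only; with the stochastic update an exact fixed point is rarely reached, so the simple convergence test below is our own shortcut):

```python
def corrupt(pattern, error_rate=0.05, rng=rng):
    """Flip each bit of a stored pattern with the given error rate."""
    noisy = pattern.copy()
    flip = rng.random(pattern.shape) < error_rate
    noisy[flip] = 1 - noisy[flip]
    return noisy

def run_trial(mu, g, max_steps=50):
    """Start from a corrupted version of pattern mu and report the overlap, eq. (17)."""
    V = corrupt(eta[mu])
    for _ in range(max_steps):
        V_next = update(V, g)
        if np.array_equal(V_next, V):               # crude stability check
            break
        V = V_next
    n_mu = (eta[mu].sum(axis=1) > 0).sum()          # modular coding of pattern mu
    return (eta[mu] * V).sum() / (N * p * n_mu)     # overlap m^mu, eq. (17)

print(run_trial(0, g_sigmoid))
```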
Figure 2: Quality of retrieval vs. memory modular coding. The dark shading represents the mean overlap achieved by a network with linear intra-modular and inter-modular synaptic couplings. The light shading represents the mean overlap of a
network with sigmoidal inter-modular connections, which is perfect for all memory
patterns. The simulation parameters were: L = 10, N = 500, M = 50, p = 0.05,
λ = 0.7, θ_d = 2 and θ_s = 0.6.
We start with the standard choice of g(x) = x, i.e. treating the intra-modular
and inter-modular synaptic couplings similarly. The performance of this network
is shown in Figure 2. As evident, the network can store only a relatively narrow
span of memories with high modular coding levels, and completely fails to retrieve
memories with low modular coding levels (see also [5]). If, however, g is chosen to be
a sigmoid function, a completely stable system is obtained, with all possible coding
levels allowed. A sigmoid function on the external connections is hence very effective
The segregation of the synaptic inputs to internal and external connections has been
motivated by observed patterns of cortical connectivity: Axons forming excitatory
intra-modular connections make synapses more proximal to the cell body than do
inter-modular connections [6]. Dendrites, having active conductances, embody a
rich repertoire of nonlinear electrical and chemical dynamics (see [7] for a review).
In our model, the setting of 9 to be a sigmoid function crudely mimics these active
conductance properties.
We may go on and envisage the use of a nested set of sigmoidal dendritic transmission functions. This turns out to be useful when we test the effects of pathologic
alterations on the retrieval of memories with different modular codings. The amazing result is that if the damage is done to modular inputs, the highly nonlinear
transmission functions are very resistible to it. An example is shown in Fig. 3.
Here we compare two nonlinear functions:
g_1 = \lambda\, \sigma\!\left[ \sum_{k \neq l} \sum_{j} J_{ij}^{lk} V_{j}^{k}(t) - \gamma_d \sum_{k \neq l} Q^k(t) - \theta_d \right] ,

g_2 = \lambda\, \sigma\!\left[ \sum_{k \neq l} \sigma\!\left( \sum_{j} J_{ij}^{lk} V_{j}^{k}(t) - \gamma_d Q^k(t) - \theta_k \right) - \theta_d \right] .
The second one is the nested sigmoidal function mentioned above. Two types of
input cues are compared: a correct cue η^μ_{il} to one of the modules and no input to the
rest, or partial input to all modules.
Figure 3: The performance of modular networks with different types of non-linear
inter-connections when partial input cues are given. The mean overlap is plotted
vs. the overlap of the input cue. The solid line represents the performance of
the network with g_2 and the dash-dot line represents g_1. The left curve of g_2
corresponds to the case when full input is presented to only one module (out of
the 5 that comprise a memory), while the right solid curve corresponds to partial
input to all modules. The two g_1 curves describe partial input to all modules, but
correspond to two different choices of the threshold parameter θ_d, 1.5 (left) and 2
(right). Parameters are L = 5, N = 1000, p = 0.05, λ = 0.8, n = 5, θ_s = 0.7 and
θ_k = 0.7.
As we can see, the nested nonlinearities enable retrieval even if only the input to
a single module survives. One may therefore conclude that, under such conditions,
patterns of high modular coding have a greater chance to be retrieved from an input
to a single module and thus are more resilient to afferent damage. Adopting the
assumption that different modules code for distinct semantic features, we now find
that a multi-modular network with nonlinear dendritic transmission can account
for the view of [3], that memories with more features are more robust.
4 Summary
We have studied the ability of homogeneous (single-module) and modular networks
to store memory patterns with variable activity levels. Although homogeneous networks can store such memory patterns, the critical capacity of low activity memories
was shown to be larger than that of high activity ones. This result seems to be inconsistent with the pertaining cognitive data concerning category specific semantic
impairment, which seem to imply that high activity memories should be the more
stable ones.
Motivated by the findings of modular structure in associative cortex, we developed a
multi-modular model of associative memory. Adding the assumption that dendritic
non-linear processing operates on the signals of inter-modular synaptic connections,
we obtained a network that has two important features: coexistence of memories
with different modular codings and retrieval of memories from cues presented to a
small fraction of all modules. The latter implies that memories encoded in many
modules should be more resilient to damage in afferent connections, hence it is
consistent with the conventional interpretation of the data on category specific
impairment.
References
[1] R. F. Hevner. More modules. TINS, 16(5):178,1993.
[2] M. V. Tsodyks. Associative memory in neural networks with the hebbian learning rule. Modern Physics Letters B, 3(7):555-560, 1989.
[3] G. E. Hinton and T. Shallice. Lesioning an attractor network: investigations of
acquired dyslexia. Psychological Review, 98(1):74-95, 1991.
[4] G. V. Jones. Deep dyslexia, imageability, and ease of predication. Brain and
Language, 24:1-19, 1985.
[5] R. Lauro Grotto, S. Reich, and M. A. Virasoro. The computational role of
conscious processing in a model of semantic memory. In Proceedings of the lIAS
Symposium on Cognition Computation and Consciousness, 1994.
[6] P. A. Hetherington and L. M. Shapiro. Simulating hebb cell assemblies: the
necessity for partitioned dendritic trees and a post-not-pre ltd rule. Network,
4:135-153,1993.
[7] R. Yuste and D. W. Tank. Dendritic integration in mammalian neurons a century after cajal. Neuron, 16:701-716, 1996.
379 | 1,346 | A Framework for Multiple-Instance Learning
Oded Maron
Tomas Lozano-Perez
NE43-836a
AI Lab, M .I.T.
Cambridge, MA 02139
tlp@ai.mit.edu
NE43-755
AI Lab, M.I. T.
Cambridge, MA 02139
oded@ai.mit.edu
Abstract
Multiple-instance learning is a variation on supervised learning, where the
task is to learn a concept given positive and negative bags of instances.
Each bag may contain many instances, but a bag is labeled positive even
if only one of the instances in it falls within the concept. A bag is labeled
negative only if all the instances in it are negative. We describe a new
general framework, called Diverse Density, for solving multiple-instance
learning problems. We apply this framework to learn a simple description
of a person from a series of images (bags) containing that person, to a stock
selection problem, and to the drug activity prediction problem.
1 Introduction
One of the drawbacks of applying the supervised learning model is that it is not always possible
for a teacher to provide labeled examples for training. Multiple-instance learning provides a
new way of modeling the teacher's weakness. Instead of receiving a set of instances which
are labeled positive or negative, the learner receives a set of bags that are labeled positive or
negative. Each bag contains many instances. A bag is labeled negative if all the instances in
it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it
which is positive. From a collection of labeled bags, the learner tries to induce a concept that
will label individual instances correctly. This problem is harder than even noisy supervised
learning since the ratio of negative to positive instances in a positively-labeled bag (the noise
ratio) can be arbitrarily high.
The first application of multiple-instance learning was to drug activity prediction. In the
activity prediction application, one objective is to predict whether a candidate drug molecule
will bind strongly to a target protein known to be involved in some disease state. Typically,
one has examples of molecules that bind well to the target protein and also of molecules that
do not bind well. Much as in a lock and key, shape is the most important factor in determining
whether a drug molecule and the target protein will bind. However, drug molecules are
flexible, so they can adopt a wide range of shapes. A positive example does not convey what
shape the molecule took in order to bind - only that one of the shapes that the molecule can
take was the right one. However, a negative example means that none of the shapes that the
molecule can achieve was the right key.
The multiple-instance learning model was only recently formalized by [Dietterich et al., 1997].
They assume a hypothesis class of axis-parallel rectangles, and develop algorithms for dealing
with the drug activity prediction problem described above. This work was followed by [Long
and Tan, 1996], where a high-degree polynomial PAC bound was given for the number of
examples needed to learn in the multiple-instance learning model. [Auer, 1997] gives a more
efficient algorithm, and [Blum and Kalai, 1998] shows that learning from multiple-instance
examples is reducible to PAC-learning with two sided noise and to the Statistical Query model.
Unfortunately, the last three papers make the restrictive assumption that all instances from all
bags are generated independently.
In this paper, we describe a framework called Diverse Density for solving multiple-instance
problems. Diverse Density is a measure of the intersection of the positive bags minus the union
of the negative bags. By maximizing Diverse Density we can find the point of intersection
(the desired concept), and also the set of feature weights that lead to the best intersection.
We show results of applying this algorithm to a difficult synthetic training set as well as the
"musk" data set from [Dietterich et ai., 1997]. We then use Diverse Density in two novel
applications: one is to learn a simple description of a person from a series of images that are
labeled positive if the person is somewhere in the image and negative otherwise. The other is
to deal with a high amount of noise in a stock selection problem.
2 Diverse Density
We motivate the idea of Diverse Density through a molecular example. Suppose that the
shape of a candidate molecule can be adequately described by a feature vector. One instance
of the molecule is therefore represented as a point in n-dimensional feature space. As the
molecule changes its shape (through both rigid and non-rigid transformations), it will trace out
a manifold through this n-dimensional space¹. Figure 1(a) shows the paths of four molecules
through a 2-dimensional feature space.
If a candidate molecule is labeled positive, we know that in at least one place along the
manifold, it took on the right shape for it to fit into the target protein. If the molecule is labeled
negative, we know that none of the conformations along its manifold will allow binding with
the target protein. If we assume that there is only one shape that will bind to the target protein,
what do the positive and negative manifolds tell us about the location of the correct shape
in feature space? The answer: it is where all positive feature-manifolds intersect without
intersecting any negative feature-manifolds. For example, in Figure 1(a) it is point A.
Unfortunately, a multiple-instance bag does not give us complete distribution information,
but only some arbitrary sample from that distribution. In fact, in applications other than
drug discovery, there is not even a notion of an underlying continuous manifold. Therefore,
Figure 1(a) becomes Figure 1(b). The problem of trying to find an intersection changes
¹In practice, one needs to restrict consideration to shapes of the molecule that have sufficiently low
potential energy. But, we ignore this restriction in this simple illustration.
Figure 1: A motivating example for Diverse Density. (a) The different shapes that a
molecule can take on are represented as a path; the intersection point of positive paths
is where they took on the same shape. (b) Samples taken along the paths; Section B
is a high density area, but point A is a high Diverse Density area.
to a problem of trying to find an area where there is both high density of positive points
and low density of negative points. The difficulty with using regular density is illustrated in
Figure 1(b), Section B. We are not just looking for high density, but high "Diverse Density".
We define Diverse Density at a point to be a measure of how many different positive bags have
instances near that point, and how far the negative instances are from that point.
2.1 Algorithms for multiple-instance learning
In this section, we derive a probabilistic measure of Diverse Density, and test it on a difficult
artificial data set. We denote positive bags as B_i^+, the jth point in that bag as B_{ij}^+, and the
value of the kth feature of that point as B_{ijk}^+. Likewise, B_{ij}^- represents a negative point.
Assuming for now that the true concept is a single point t, we can find it by maximizing
Pr(x = t | B_1^+, ..., B_n^+, B_1^-, ..., B_m^-) over all points x in feature space. If we use Bayes'
rule and an uninformative prior over the concept location, this is equivalent to maximizing
the likelihood Pr(B_1^+, ..., B_n^+, B_1^-, ..., B_m^- | x = t). By making the additional assumption
that the bags are conditionally independent given the target concept t, the best hypothesis is
argmax_x \prod_i Pr(B_i^+ | x = t) \prod_i Pr(B_i^- | x = t). Using Bayes' rule once more (and again
assuming a uniform prior over concept location), this is equivalent to

argmax_x \prod_i Pr(x = t | B_i^+) \prod_i Pr(x = t | B_i^-) .    (1)

This is a general definition of maximum Diverse Density, but we need to define the terms in the
products to instantiate it. One possibility is a noisy-or model: the probability that not all points
missed the target is Pr(x = t | B_i^+) = Pr(x = t | B_{i1}^+, B_{i2}^+, ...) = 1 - \prod_j (1 - Pr(x = t | B_{ij}^+)),
and likewise Pr(x = t | B_i^-) = \prod_j (1 - Pr(x = t | B_{ij}^-)). We model the causal probability of
an individual instance on a potential target as related to the distance between them. Namely,
Pr(x = t | B_{ij}) = exp(-||B_{ij} - x||^2). Intuitively, if one of the instances in a positive bag
is close to x, then Pr(x = t | B_i^+) is high. Likewise, if every positive bag has an instance
close to x and no negative bags are close to x, then x will have high Diverse Density. Diverse
Density at an intersection of n bags is exponentially higher than it is at an intersection of n - 1
bags, yet all it takes is one well placed negative instance to drive the Diverse Density down.
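A small NumPy sketch of this noisy-or Diverse Density computation is given below. It is an illustration only, not code from the paper; the toy bags, the log-space bookkeeping and the optional per-feature scaling vector s are our own choices, the last corresponding to the weighted distance discussed next.

```python
import numpy as np

def log_diverse_density(x, pos_bags, neg_bags, s=None):
    """Log Diverse Density at point x under the noisy-or model.

    pos_bags, neg_bags: lists of arrays of shape (n_instances, n_features).
    s: optional per-feature scaling vector (defaults to all ones).
    """
    x = np.asarray(x, dtype=float)
    s = np.ones_like(x) if s is None else np.asarray(s, dtype=float)
    log_dd = 0.0
    for bag in pos_bags:
        # Pr(x = t | B_i^+) = 1 - prod_j (1 - exp(-||s * (B_ij - x)||^2))
        p_inst = np.exp(-np.sum((s * (bag - x)) ** 2, axis=1))
        log_dd += np.log(1.0 - np.prod(1.0 - p_inst) + 1e-300)
    for bag in neg_bags:
        # Pr(x = t | B_i^-) = prod_j (1 - exp(-||s * (B_ij - x)||^2))
        p_inst = np.exp(-np.sum((s * (bag - x)) ** 2, axis=1))
        log_dd += np.sum(np.log(1.0 - p_inst + 1e-300))
    return log_dd

# Toy example: two positive bags that intersect near the origin, one negative bag.
pos = [np.array([[0.1, 0.0], [3.0, 3.0]]), np.array([[0.0, -0.1], [-4.0, 2.0]])]
neg = [np.array([[3.0, 3.1], [5.0, 5.0]])]
print(log_diverse_density([0.0, 0.0], pos, neg))
print(log_diverse_density([3.0, 3.0], pos, neg))
```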
A Frameworkfor Multiple-Instance Learning
573
Figure 2: Negative and positive bags drawn from the same distribution, but labeled according
to their intersection with the middle square. Negative instances are dots, positive are numbers.
The square contains at least one instance from every positive bag and no negatives.
The Euclidean distance metric used to measure "closeness" depends on the features that
describe the instances. It is likely that some of the features are irrelevant, or that some should
be weighted to be more important than others. Luckily, we can use the same framework to
find not only the best location in feature space, but also the best weighting of the features.
Once again, we find the best scaling of the individual features by finding the scalings that
maximize Diverse Density. The algorithm returns both a location x and a scaling vector s,
where ||B_{ij} - x||^2 = \sum_k s_k^2 (B_{ijk} - x_k)^2 .
Note that the assumption that all bags intersect at a single point is not necessary. We can
assume more complicated concepts, such as for example a disjunctive concept t_a ∨ t_b. In this
case, we maximize over a pair of locations x_a and x_b and define Pr(x_a = t_a ∨ x_b = t_b |
B_{ij}) = max_{x_a, x_b}( Pr(x_a = t_a | B_{ij}), Pr(x_b = t_b | B_{ij}) ).
To test the algorithm, we created an artificial data set: 5 positive and 5 negative bags, each with
50 instances. Each instance was chosen uniformly at random from a [0, 100] x [0, 100] domain
in R^2, and the concept was a 5 x 5 square in the middle of the domain. A bag was labeled
positive if at least one of its instances fell within the square, and negative if none did, as shown
in Figure 2. The square in the middle contains at least one instance from every positive bag
and no negative instances. This is a difficult data set because both positive and negative bags
are drawn from the same distribution. They only differ in a small area of the domain.
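A sketch of how such a data set can be generated (our own illustration; the rejection loops simply re-draw bags until the desired number of positive and negative bags is reached):

```python
import numpy as np

rng = np.random.default_rng(2)
LOW, HIGH, SIDE = 47.5, 52.5, 100.0   # 5 x 5 concept square in the middle

def in_concept(points):
    return np.all((points >= LOW) & (points <= HIGH), axis=1)

def make_bag(n_instances=50):
    return rng.uniform(0.0, SIDE, size=(n_instances, 2))

pos_bags, neg_bags = [], []
while len(pos_bags) < 5:
    bag = make_bag()
    if in_concept(bag).any():      # at least one instance inside the square
        pos_bags.append(bag)
while len(neg_bags) < 5:
    bag = make_bag()
    if not in_concept(bag).any():  # no instance inside the square
        neg_bags.append(bag)
```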
Using regular density (adding up the contribution of every positive bag and subtracting negative
bags; this is roughly what a supervised learning algorithm such as nearest neighbor performs),
we can plot the density surface across the domain. Figure 3(a) shows this surface for the
data set in Figure 2, and it is clear that finding the peak (a candidate hypothesis) is difficult.
However, when we plot the Diverse Density surface (using the noisy-or model) in Figure 3(b),
it is easy to pick out the global maximum, which is within the desired concept. The other
(a) Surface using regular density
(b) Surface using Diverse Density
Figure 3: Density surfaces over the example data of Figure 2
major peaks in Figure 3(b) are the result of a chance concentration of instances from different
bags. With a bit more bad luck, one of those peaks could have eclipsed the one in the middle.
However, the chance of this decreases as the number of bags (training examples) increases.
One remaining issue is how to find the maximum Diverse Density. In general, we are searching
an arbitrary density landscape and the number of local maxima and size of the search space
could prohibit any efficient exploration. In this paper, we use gradient ascent with multiple
starting points. This has worked succesfully in every test case because we know what starting
points to use. Th'e maximum Diverse Density peak is made of contributions from some set
of positive points. If we start an ascent from every positive point, one of them is likely to
be closest to the maximum, contribute the most to it and have a climb directly to it. While
this heuristic is sensible for maximizing with respect to location, maximizing with respect to
scaling of feature weights may still lead to local maxima.
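A sketch of this maximization strategy, reusing `log_diverse_density` and the synthetic bags from the earlier snippets (again our own illustration; we simply run an off-the-shelf quasi-Newton optimizer on the negative log Diverse Density from every positive instance and keep the best result):

```python
import numpy as np
from scipy.optimize import minimize

def max_diverse_density(pos_bags, neg_bags):
    """Gradient-based ascent of Diverse Density, restarted from each positive instance."""
    best_x, best_val = None, -np.inf
    starts = np.vstack(pos_bags)
    for x0 in starts:
        res = minimize(lambda x: -log_diverse_density(x, pos_bags, neg_bags),
                       x0, method="L-BFGS-B")
        if -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_x, best_val

x_hat, val = max_diverse_density(pos_bags, neg_bags)
print("max-DD point:", x_hat)
```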
3 Applications of Diverse Density
By way of benchmarking, we tested the Diverse Density approach on the "musk" data sets
from [Dietterich et al., 1997], which were also used in [Auer, 1997]. We also have begun
investigating two new applications of multiple-instance learning. We describe preliminary
results on all of these below. The musk data sets contain feature vectors describing the surfaces
of a variety of low-energy shapes from approximately 100 molecules. Each feature vector
has 166 dimensions. Approximately half of these molecules are known to smell "musky," the
remainder are very similar molecules that do not smell musky. There are two musk data sets;
the Musk-l data set is smaller, both in having fewer molecules and many fewer instances per
molecule. Many (72) of the molecules are shared between the two data sets, but the second
set includes more instances for the shared molecules.
We approached the problem as follows: for each run, we held out a randomly selected
1/10 of the data set as a test set. We computed the maximum Diverse Density on the
training set by multiple gradient ascents, starting at each positive instance. This produces a
maximum feature point as well as the best feature weights corresponding to that point. We
note that typically less than half of the 166 features receive non-zero weighting. We then
computed a distance threshold that optimized classification performance under leave-one-out
cross validation within the training set. We used the feature weights and distance threshold to
classify the examples of the test set; an example was deemed positive if the weighted distance
from the maximum density point to any of its instances was below the threshold.
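A sketch of this bag-level classification rule (our own illustration; `x_hat`, the feature weights `s` and the distance threshold are assumed to come from the training procedure just described):

```python
import numpy as np

def min_weighted_distance(bag, x_hat, s):
    """Smallest weighted distance from the max-DD point to any instance in the bag."""
    return np.min(np.sqrt(np.sum((s * (bag - x_hat)) ** 2, axis=1)))

def classify_bag(bag, x_hat, s, threshold):
    """A bag is labeled positive if some instance lies within the distance threshold."""
    return min_weighted_distance(bag, x_hat, s) < threshold
```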
The table below lists the average accuracy of twenty runs, compared with the performance
of the two principal algorithms reported in [Dietterich et al., 1997] (iterated-discrim
APR and GFS elim-kde APR), as well as the MULTINST algorithm from [Auer, 1997].
We note that the performance reported for iterated-discrim APR involves choosing
parameters to maximize test set performance and so probably represents an upper bound for
accuracy on this data set. The MULTINST algorithm assumes that all instances from all
bags are generated independently. The Diverse Density results, which required no tuning, are
comparable or better than those of GFS elim-kde APR and MULTINST.
Musk Data Set 1
algorithm                  accuracy
iterated-discrim APR       92.4
GFS elim-kde APR           91.3
Diverse Density            88.9
MULTINST                   76.7

Musk Data Set 2
algorithm                  accuracy
iterated-discrim APR       89.2
MULTINST                   84.0
Diverse Density            82.5
GFS elim-kde APR           80.4
We also investigated two new applications of multiple-instance learning. The first of these is
to learn a simple description of a person from a series of images that are labeled positive if
they contain the person and negative otherwise. For a positively labeled image we only know
that the person is somewhere in it, but we do not know where. We sample 54 subimages of
varying centers and sizes and declare them to be instances in one positive bag since one of
them contains the person. This is repeated for every positive and negative image.
We use a very simple representation for the instances. Each subimage is divided into three parts
which roughly correspond to where the head, torso and legs of the person would be. The three
dominant colors (one for each subsection) are used to represent the image. Figure 4 shows
a training set where every bag included two people, yet the algorithm learned a description
of the person who appears in all the images. This technique is expanded in [Maron and
LakshmiRatan, 1998] to learn descriptions of natural images and use the learned concept to
retrieve similar images from a large image database.
Another new application uses Diverse Density in the stock selection problem. Every month,
there are stocks that perform well for fundamental reasons and stocks that perform well because
of flukes; there are many more of the latter, but we are interested in the former. For every
month, we take the 100 stocks with the highest return and put them in a positive bag, hoping
that at least one of them did well for fundamental reasons. Negative bags are created from
the bottom 5 stocks in every month. A stock instance is described by 17 features such as
momentum, price to fair-value, etc. Grantham, Mayo, Van Otterloo & Co. kindly provided
us with data on the 600 largest US stocks since 1978. We tested the algorithm through five
runs of training for ten years, then testing on the next year. In each run, the algorithm returned
the stock description (location in feature space and a scaling of the features) that maximized
Diverse Density. The test stocks were then ranked and decilized by distance (in weighted
feature space) to the max-DD point. Figure 5 shows the average return of every decile. The
return in the top decile (stocks that are most like the "fundamental stock") is positive and
Figure 4: A training set in which every bag included two people; the algorithm learned a
description of the person in common.
Figure 5: Black bars show Diverse Density's average return on a decile, and the
white bars show GMO's predictor's return.
higher than the average return of a GMO predictor. Likewise, the return in the bottom decile
is negative and below that of a GMO predictor.
4 Conclusion
In this paper, we have shown that Diverse Density is a general tool with which to learn from
Multiple-Instance examples. In addition, we have shown that Multiple-Instance problems
occur in a wide variety of domains. We attempted to show the various ways in which
ambiguity can lead to the Multiple-Instance framework: through lack of knowledge in the
drug discovery .example, through ambiguity of representation in the vision example, and
through a high degree of noise in the stock example.
Acknowledgements
We thank Peter Dayan and Paul Viola at MIT and Tom Hancock and Chris Darnell at GMO
for helpful discussions and the AFOSR ASSERT program, Parent Grant#:F49620-93-1-0263
for their support of this research.
References
[Auer, 1997] P. Auer. On Learning from Multi-Instance Examples: Empirical Evaluation of a
theoretical Approach. NeuroCOLT Technical Report Series, NC-TR-97-025, March 1997.
[Blum and Kalai, 1998] A. Blum and A. Kalai. A Note on Learning from Multiple-Instance
Examples. To appear in Machine Learning, 1998.
[Dietterich et al., 1997] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the
Multiple-Instance Problem with Axis-Parallel Rectangles. Artificial Intelligence Journal,
89, 1997.
[Long and Tan, 1996] P. M. Long and L. Tan. PAC-learning axis aligned rectangles with
respect to product distributions from multiple-instance examples. In Proceedings of the
1996 Conference on Computational Learning Theory, 1996.
[Maron and LakshmiRatan, 1998] O. Maron and A. LakshmiRatan. Multiple-Instance Learning for Natural Scene Classification. In Submitted to CVPR-98, 1998.
380 | 1,347 | Incorporating Test Inputs into Learning
Zehra Cataltepe
Learning Systems Group
Department of Computer Science
California Institute of Technology
Pasadena, CA 91125
zehra@cs.caltech.edu
Malik Magdon-Ismail
Learning Systems Group
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125
magdon@cco.caltech.edu
Abstract
In many applications, such as credit default prediction and medical image recognition, test inputs are available in addition to the labeled training examples. We propose a method to incorporate the test inputs into
learning. Our method results in solutions having smaller test errors than
that of simple training solution, especially for noisy problems or small
training sets.
1 Introduction
We introduce an estimator of test error that takes into consideration the test inputs. The
new estimator, augmented error, is composed of the training error and an additional term
computed using the test inputs. In some applications, such as credit default prediction and
medical image recognition, we do have access to the test inputs. In our experiments, we
found that the augmented error (which is computed without looking at the test outputs but
only test inputs and training examples) can result in a smaller test error. In particular, it
tends to increase when the test error increases (overtraining) even if the simple training
error does not. (see figure (1)).
In this paper, we provide an analytic solution for incorporating test inputs into learning in
the case oflinear, noisy targets and linear hypothesis functions. We also show experimental
results for the nonlinear case.
Previous results on the use of unlabeled inputs include Castelli and Cover [2] who show that
the labeled examples are exponentially more valuable than unlabeled examples in reducing
the classification error. For mixture models, Shahshahani and Landgrebe [7] and Miller
and Uyar [6] investigate incorporating unlabeled examples into learning for classification
problems using the EM algorithm, and show that unlabeled examples are useful especially
when input dimensionality is high and the number of examples is small. In our work we
only concentrate on estimating the test error better using the test inputs and our method
[Plot: training error, test error and augmented error vs. log(pass).]
Figure 1: The augmented error, computed without looking at the test outputs at all, follows the
test error as overtraining occurs.
extends to the case of unlabeled inputs or input distribution information. Our method is
also applicable for regression or classification problems.
In figure 1, we show the training, test and augmented errors, while learning a nonlinear
noisy target function with a nonlinear hypothesis. As overtraining occurs, the augmented
error follows the test error. In section 2, we explain our method of incorporating test inputs
into learning and give the analytical solutions for linear target and hypothesis functions.
Section 3 includes theory about the existence and general form of the new solution. Section
4 discusses experimental results. Section 5 extends our solution to the case of knowing the
input distribution, or knowing extra inputs that are not necessarily test inputs.
2 Incorporating Test Inputs into Learning
In learning-from-examples, we assume we have a training set {(x_1, f_1), ..., (x_N, f_N)}
with inputs x_n and possibly noisy targets f_n. Our goal is to choose a hypothesis
g_v, among a class of hypotheses G, minimizing the test error on an unknown test set
{(y_1, h_1), ..., (y_M, h_M)}.
Using the sample mean square error as our error criterion, the training error of hypothesis
g_v is:

E_0(g_v) = \frac{1}{N} \sum_{n=1}^{N} \bigl( g_v(x_n) - f_n \bigr)^2 .

Similarly the test error of g_v is:

E(g_v) = \frac{1}{M} \sum_{m=1}^{M} \bigl( g_v(y_m) - h_m \bigr)^2 .

Expanding the test error:

E(g_v) = \frac{1}{M} \sum_{m=1}^{M} g_v^2(y_m) - \frac{2}{M} \sum_{m=1}^{M} g_v(y_m) h_m + \frac{1}{M} \sum_{m=1}^{M} h_m^2 .
The main observation is that, when we know the test inputs, we know the first term exactly.
Therefore we need only approximate the remaining terms using the training set:

\frac{1}{M} \sum_{m=1}^{M} g_v^2(y_m) - \frac{2}{N} \sum_{n=1}^{N} g_v(x_n) f_n + \frac{1}{N} \sum_{n=1}^{N} f_n^2 .    (1)
We scale the addition to the training error by an augmentation parameter α to obtain a
more general error function that we call the augmented error:

E_\alpha(g_v) = E_0(g_v) + \alpha \left( \frac{1}{M} \sum_{m=1}^{M} g_v^2(y_m) - \frac{1}{N} \sum_{n=1}^{N} g_v^2(x_n) \right)

where α = 0 corresponds to the training error E_0 and α = 1 corresponds to equation (1).
The best value of the augmentation parameter depends on a number of factors, including the
target function, the noise distribution and the hypothesis class. In the following sections
we investigate properties of the best augmentation parameter and give a method of finding
the best augmentation parameter when the hypothesis is linear.
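As a concrete illustration, a generic augmented-error computation might look as follows (our own sketch; `g` stands for any hypothesis function and the data arrays are placeholders):

```python
import numpy as np

def augmented_error(g, X_train, f_train, Y_test, alpha):
    """E_alpha(g) = E_0(g) + alpha * (mean of g^2 on test inputs - mean of g^2 on training inputs)."""
    g_train = g(X_train)
    g_test = g(Y_test)
    e0 = np.mean((g_train - f_train) ** 2)   # training error E_0
    return e0 + alpha * (np.mean(g_test ** 2) - np.mean(g_train ** 2))
```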
3 Augmented Solution for the Linear Hypothesis
In this section we assume hypothesis functions of the form g_v(x) = v^T x. From here
onwards we will denote the functions by the vector that multiplies the inputs. When the
hypothesis is linear we can find the minimum of the augmented error analytically.
Let X_{d×N} be the matrix of training inputs, Y_{d×M} be the matrix of test inputs and f_{N×1}
contain the training targets. The solution w_0 minimizing the training error E_0 is the least
squares solution [5]:

w_0 = \left( \frac{X X^T}{N} \right)^{-1} \frac{X f}{N} .

The augmented error E_\alpha(v) = E_0(v) + \alpha\, v^T \left( \frac{Y Y^T}{M} - \frac{X X^T}{N} \right) v is minimized at the augmented solution w_α:

w_\alpha = (I - \alpha R)^{-1} w_0    (2)

where R = I - \left( \frac{X X^T}{N} \right)^{-1} \frac{Y Y^T}{M}. When α = 0, the augmented solution w_α is equal to the
least mean squares solution w_0.
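A NumPy sketch of these closed-form quantities (an illustration only; the data here are synthetic and d, N, M and the noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d, N, M, sigma_e = 5, 40, 200, 0.5

w_star = rng.standard_normal(d)
X = rng.standard_normal((d, N))          # training inputs, d x N
Y = rng.standard_normal((d, M))          # test inputs, d x M
f = X.T @ w_star + sigma_e * rng.standard_normal(N)

Sx = X @ X.T / N                         # (X X^T)/N
Sy = Y @ Y.T / M                         # (Y Y^T)/M
w0 = np.linalg.solve(Sx, X @ f / N)      # least squares solution
R = np.eye(d) - np.linalg.solve(Sx, Sy)

def w_alpha(alpha):
    """Augmented solution, eq. (2)."""
    return np.linalg.solve(np.eye(d) - alpha * R, w0)

print(np.allclose(w_alpha(0.0), w0))     # alpha = 0 recovers least squares
```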
4 Properties of the Augmentation Parameter
Assume a linear target and possibly noisy training outputs: f = X^T w^* + e where ⟨e e^T⟩ = σ_e^2 I_{N×N}.
Since the specific realization of noise e is unknown, instead of minimizing the test error
directly, we focus on minimizing ⟨E(w_α)⟩_e, the expected value of the test error of the
augmented solution with respect to the noise distribution:

\langle E(w_\alpha) \rangle_e = w^{*T} \left( (I - \alpha R^T)^{-1} - I \right) \frac{Y Y^T}{M} \left( (I - \alpha R)^{-1} - I \right) w^*
  + \frac{\sigma_e^2}{N}\, tr\!\left( (I - \alpha R^T)^{-1} \frac{Y Y^T}{M} (I - \alpha R)^{-1} \left( \frac{X X^T}{N} \right)^{-1} \right)    (3)
where we have used $\langle e^T A e\rangle_e = \sigma_e^2\,\mathrm{tr}(A)$ and $\mathrm{tr}(A)$ denotes the trace of matrix $A$. When $\alpha = 0$, we have:
$$\langle E(w_0)\rangle_e = \frac{\sigma_e^2}{N}\,\mathrm{tr}\left(\frac{YY^T}{M}\left(\frac{XX^T}{N}\right)^{-1}\right) \qquad (4)$$
Now, we prove the existence of a nonzero augmentation parameter $\alpha$ when the outputs are noisy.
Theorem 1: If $\sigma_e^2 > 0$ and $\mathrm{tr}(R(I-R)) \neq 0$, then there is an $\alpha \neq 0$ that minimizes the expected test error $\langle E(w_\alpha)\rangle_e$.
Proof: Since $\frac{\partial B^{-1}(\alpha)}{\partial\alpha} = -B^{-1}(\alpha)\frac{\partial B(\alpha)}{\partial\alpha}B^{-1}(\alpha)$ for any matrix $B$ whose elements are scalar functions of $\alpha$ [3], the derivative of $\langle E(w_\alpha)\rangle_e$ with respect to $\alpha$ at $\alpha = 0$ is:
$$\left.\frac{d\langle E(w_\alpha)\rangle_e}{d\alpha}\right|_{\alpha=0} = 2\frac{\sigma_e^2}{N}\,\mathrm{tr}\left(R\left(\frac{XX^T}{N}\right)^{-1}\frac{YY^T}{M}\right) = 2\frac{\sigma_e^2}{N}\,\mathrm{tr}\left(R(I-R)\right)$$
If the derivative is $< 0$ ($> 0$ respectively), then $\langle E(w_\alpha)\rangle_e$ is minimized at some $\alpha > 0$ ($\alpha < 0$ respectively). $\square$
The following proposition gives an approximate formula for the best $\alpha$.
Theorem 2: If $N$ and $M$ are large, and the training and test inputs are drawn i.i.d. from an input distribution with covariance matrix $\langle xx^T\rangle = \sigma_x^2 I$, then the $\alpha^*$ minimizing $\langle E(w_\alpha)\rangle_{e,x,y}$, the expected test error of the augmented solution with respect to noise and inputs, is approximately given by equation (5).
Proof: is given in the appendix. $\square$
This formula determines the behavior of the best $\alpha$. The best $\alpha$:
- decreases as the signal-to-noise ratio increases.
- increases as $d/N$ increases, i.e. as we have fewer examples per input dimension.
4.1 $w_\alpha$ as an Estimator of $w^*$
The mean squared error (m.s.e.) of any estimator $w$ of $w^*$ can be written as [1]:
$$\mathrm{m.s.e.}(w) = \underbrace{\|w^* - \langle w\rangle_e\|^2}_{\mathrm{bias}^2(w)} + \underbrace{\left\langle\|w - \langle w\rangle_e\|^2\right\rangle_e}_{\mathrm{variance}(w)}$$
When $\alpha$ is independent of the specific realization $e$ of the noise:
$$\mathrm{m.s.e.}(w_\alpha) = w^{*T}\left(I - (I-\alpha R^T)^{-1}\right)\left(I - (I-\alpha R)^{-1}\right)w^* + \frac{\sigma_e^2}{N}\,\mathrm{tr}\left(\left(\frac{XX^T}{N}\right)^{-1}(I-\alpha R^T)^{-1}(I-\alpha R)^{-1}\right)$$
Hence the m.s.e. of the least squares estimator $w_0$ is:
$$\mathrm{m.s.e.}(w_0) = \frac{\sigma_e^2}{N}\,\mathrm{tr}\left(\left(\frac{XX^T}{N}\right)^{-1}\right)$$
$w_0$ is the minimum variance unbiased linear estimator of $w^*$. Although $w_\alpha$ is a biased estimator if $\alpha R \neq 0$, the following proposition shows that, when there is noise, there is an $\alpha \neq 0$ minimizing the m.s.e. of $w_\alpha$:
Theorem 3: If $\sigma_e^2 > 0$ and $\mathrm{tr}\left(\left(\frac{XX^T}{N}\right)^{-1}(R + R^T)\right) \neq 0$, then there is an $\alpha \neq 0$ that minimizes the m.s.e. of $w_\alpha$.
Proof: is similar to the proof of Theorem 1 and will be skipped. $\square$
As $N$ and $M$ get large, $R = I - \left(\frac{XX^T}{N}\right)^{-1}\frac{YY^T}{M} \rightarrow 0$ and $w_\alpha = (I - \alpha R)^{-1}w_0 \rightarrow w_0$. Hence, for large $N$ and $M$, the bias and variance of $w_\alpha$ approach 0, making $w_\alpha$ an unbiased and consistent estimator of $w^*$.
5 A Method to Find the Best Augmentation Parameter
[Figure 2 plots: test error versus the number of training examples N, for the liver data (left) and the bond data (d = 11, M = 50, right); the curves compare the least squares solution with the augmented error solution using the estimated alpha.]
Figure 2: Using the augmented error results in smaller test error especially when the number of training examples is small.
Given only the training and test inputs $X$ and $Y$, and the training outputs $f$, in this section we propose a method to find the best $\alpha$ minimizing the test error of $w_\alpha$. Equation (3) gives a formula for the expected test error which we want to minimize. However, we do not know the target $w^*$ and the noise variance $\sigma_e^2$. In equation (3), we replace $w^*$ by $w_\alpha$ and $\sigma_e^2$ by the residual error $\frac{1}{N}(X^T w_\alpha - f)^T(X^T w_\alpha - f)$, where $w_\alpha$ is given by equation (2). Then we find the $\alpha$ minimizing the resulting approximation to the expected test error.
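A possible implementation of this search, sketched below under our own assumptions (a simple grid over alpha and the residual-based noise estimate described above), plugs the substitutions into our reading of equation (3) and picks the minimizer; it is an illustration, not the authors' code.

```python
import numpy as np

def estimate_best_alpha(X, Y, f, alphas=np.linspace(-1.0, 1.0, 201)):
    """Grid search for the augmentation parameter minimizing the
    approximated expected test error of equation (3)."""
    d, N = X.shape
    _, M = Y.shape
    Sxx = X @ X.T / N
    Syy = Y @ Y.T / M
    Sxx_inv = np.linalg.inv(Sxx)
    R = np.eye(d) - Sxx_inv @ Syy
    w0 = Sxx_inv @ (X @ f / N)
    best_alpha, best_err = 0.0, np.inf
    for a in alphas:
        A = np.linalg.inv(np.eye(d) - a * R)
        w_a = A @ w0                                   # equation (2)
        sigma2 = np.mean((X.T @ w_a - f) ** 2)         # residual noise estimate
        bias_term = w_a @ ((A - np.eye(d)).T @ Syy @ (A - np.eye(d))) @ w_a
        var_term = sigma2 / N * np.trace(A.T @ Syy @ A @ Sxx_inv)
        err = bias_term + var_term                     # approximation to eq. (3)
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha

# Usage: alpha_star = estimate_best_alpha(X_train, Y_test, f_train)
```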
We experimented with this method of finding the best $\alpha$ on artificial and real data. The results of experiments for liver data¹ and bond data² are shown in Figure 2. In the liver
¹ftp://ftp.ics.uci.edu/pub/machine-learning-databases/liver-disorders/bupa.data
²We thank Dr. John Moody for providing the bond data.
database the inputs are different blood test results and the output is the number of drinks
per day. The bond data consists of financial ratios as inputs and rating of the bond from
AAA to B- or lower as the output.
We also compared our method to the least squares solution ($w_0$) and to early stopping using different validation set sizes, for linear and noisy problems. The table below shows the results.
SNR     E(w_alpha)/E(w_0)    E(w_early stop)/E(w_0), N_v = N/3    E(w_early stop)/E(w_0), N_v = N/6
0.01    0.650 ± 0.006        0.126 ± 0.003                         0.192 ± 0.004
1       0.830 ± 0.007        1.113 ± 0.021                         1.075 ± 0.020
100     1.001 ± 0.002        2.373 ± 0.040                         2.073 ± 0.042
Table 1: The augmented solution is consistently better than least squares, whereas early stopping gives worse results as the signal-to-noise ratio (SNR) increases. Even averaging early stopping solutions did not help when SNR = 100 (E(w_early stop)/E(w_0) = 1.245 ± 0.018 when N_v = N/3 and 1.307 ± 0.021 for N_v = N/6). For the results shown, d = 11 and N = 30 training examples were used; N_v is the number of validation examples.
6 Extensions
When the input probability distribution or the covariance matrix of the inputs, instead of the test inputs, is known, $\frac{YY^T}{M}$ can be replaced by $\langle xx^T\rangle = \Sigma$ and our methods are still applicable.
If the inputs available are not test inputs but just some extra inputs, they can still be incorporated into learning. Let us denote the extra $K$ inputs $\{z_1, \ldots, z_K\}$ by the matrix $Z_{d\times K}$. Then the augmented error becomes:
$$E_\alpha(v) = E_0(v) + \alpha\,\frac{K}{K+N}\, v^T\left(\frac{ZZ^T}{K} - \frac{XX^T}{N}\right)v$$
The augmented solution and its expected test error are the same as in equations (2) and (3), except we have $R_Z = I - \left(\frac{XX^T}{N}\right)^{-1}\frac{ZZ^T}{K}$ instead of $R$.
Note that for the linear hypothesis case, the augmented error is not necessarily a regularized version of the training error, because the matrix $\frac{YY^T}{M} - \frac{XX^T}{N}$ is not necessarily a positive definite matrix.
7 Conclusions and Future Work
We have demonstrated a method of incorporating inputs into learning when the target and
hypothesis functions are linear, and the target is noisy. We are currently working on extending our method to nonlinear target and hypothesis functions.
Appendix
Proof of Theorem 2: When the spectral radius of $\alpha R$ is less than 1 ($\alpha$ is small and/or $N$ and $M$ are large), we can approximate $(I - \alpha R)^{-1} \approx I + \alpha R$ [4], and similarly $(I - \alpha R^T)^{-1} \approx I + \alpha R^T$. Discarding any terms with powers of $\alpha$ greater than 1, and solving for $\alpha$ in
$$\frac{d\langle E(w_\alpha)\rangle_{e,x,y}}{d\alpha} = \left\langle\frac{d\langle E(w_\alpha)\rangle_e}{d\alpha}\right\rangle_{x,y} = 0$$
gives $\alpha^*$. The last step follows since we can write $\frac{XX^T}{N} = \sigma_x^2\left(I + \frac{V_X}{\sqrt{N}}\right)$ and $\frac{YY^T}{M} = \sigma_x^2\left(I + \frac{V_Y}{\sqrt{M}}\right)$, so that $\left(\frac{XX^T}{N}\right)^{-1} = \frac{1}{\sigma_x^2}\left(I - \frac{V_X}{\sqrt{N}} + \frac{V_X^2}{N}\right) + O(N^{-1.5})$, for matrices $V_X$ and $V_Y$ such that $\langle V_X\rangle_x = \langle V_Y\rangle_y = 0$ and $\langle V_X^2\rangle_x$ and $\langle V_Y^2\rangle_y$ are constant with respect to $N$ and $M$. For large $M$ we can approximate $\left\langle\frac{YY^T}{M}R^2\right\rangle_{x,y} = \sigma_x^2\langle R^2\rangle_{x,y}$. Ignoring terms of $O(N^{-1.5})$, $\langle R^2 - R\rangle_{x,y}$ and $\langle R^2\rangle_{x,y}$ can be expressed in terms of $\langle V_X^2\rangle_x$ and $\langle V_Y^2\rangle_y$. It can be shown that $\langle V_X^2\rangle_x = \lambda I$ for a constant $\lambda$ depending on the input distribution, and similarly $\langle V_Y^2\rangle_y = \lambda I$. Therefore $\alpha^*$ is as given in equation (5). $\square$
Acknowledgments
We would like to thank the Caltech Learning Systems Group: Prof. Yaser Abu-Mostafa, Dr.
Amir Atiya, Alexander Nicholson, Joseph Sill and Xubo Song for many useful discussions.
References
[1] Bishop, C. M. (1995) Neural Networks for Pattern Recognition, Clarendon Press, Oxford,
1995.
[2] Castelli, V. & Cover T. (1995) On the Exponential Value of Labeled Samples. Pattern
Recognition Letters, Vol. 16, Jan. 1995, pp. 105-111.
[3] Devijver, P. A. & Kittler, J. (1982) Pattern Recognition: A Statistical Approach, pp.
434. Prentice-Hall International, London.
[4] Golub, G. H. & Van Loan C. F. (1993) Matrix Computations, The Johns-Hopkins University Press, Baltimore, MD.
[5] Hocking, R. R. (1996) Methods and Applications of Linear Models. John Wiley &
Sons, NY.
[6] Miller, D. J. & Uyar, S. (1996), A Mixture of Experts Classifier with Learning Based
on Both Labeled and Unlabeled Data. In G. Tesauro, D. S. Touretzky and T.K. Leen (eds.),
Advances in Neural Information Processing Systems 9. Cambridge, MA: MIT Press.
[7] Shahshahani, B. M. & Landgrebe, D. A. (1994) The Effect of Unlabeled Samples in
Reducing Small Sample Size Problem and Mitigating the Hughes Phenomenon. IEEE
Transactions on Geoscience and Remote Sensing, Vol. 32, No. 5, Sept 1994, pp. 1087-1095.
381 | 1,348 | A Solution for Missing Data in Recurrent Neural
Networks With an Application to Blood Glucose
Prediction
Volker Tresp and Thomas Briegel *
Siemens AG
Corporate Technology
Otto-Hahn-Ring 6
81730 München, Germany
Abstract
We consider neural network models for stochastic nonlinear dynamical
systems where measurements of the variable of interest are only available at irregular intervals i.e. most realizations are missing. Difficulties
arise since the solutions for prediction and maximum likelihood learning with missing data lead to complex integrals, which even for simple
cases cannot be solved analytically. In this paper we propose a specific combination of a nonlinear recurrent neural predictive model and
a linear error model which leads to tractable prediction and maximum
likelihood adaptation rules. In particular, the recurrent neural network
can be trained using the real-time recurrent learning rule and the linear
error model can be trained by an EM adaptation rule, implemented using forward-backward Kalman filter equations. The model is applied to
predict the glucose/insulin metabolism of a diabetic patient where blood
glucose measurements are only available a few times a day at irregular
intervals. The new model shows considerable improvement with respect
to both recurrent neural networks trained with teacher forcing or in a free
running mode and various linear models.
1
INTRODUCTION
In many physiological dynamical systems measurements are acquired at irregular intervals.
Consider the case of blood glucose measurements of a diabetic who only measures blood
glucose levels a few times a day. At the same time physiological systems are typically
highly nonlinear and stochastic such that recurrent neural networks are suitable models.
Typically, such networks are either used purely free running in which the networks predictions are iterated, or in a teacher forcing mode in which actual measurements are substituted
* {volker.tresp, thomas.briegel}@mchp.siemens.de
if available. In Section 2 we show that both approaches are problematic for highly stochastic systems and if many realizations of the variable of interest are unknown. The traditional
solution is to use a stochastic model such as a nonlinear state space model. The problem
here is that prediction and training with missing data lead to integrals which are usually considered intractable (Lewis, 1986). Alternatively, state dependent linearizations are used for
prediction and training, the most popular example being the extended Kalman filter. In this
paper we introduce a combination of a nonlinear recurrent neural predictive model and a
linear error model which leads to tractable prediction and maximum likelihood adaptation
rules. The recurrent neural network can be used in all generality to model the nonlinear
dynamics of the system. The only limitation is that the error model is linear which is not
a major constraint in many applications. The first advantage of the proposed model is that
for single or multiple step prediction we obtain simple iteration rules which are a combination of the output of the iterated neural network and a linear Kalman filter which is used
for updating the linear error model. The second advantage is that for maximum likelihood
learning the recurrent neural network can be trained using the real-time recurrent learning
rule RTRL and the linear error model can be trained by an EM adaptation rule, implemented using forward-backward Kalman filter equations. We apply our model to develop a
model of the glucose/insulin metabolism of a diabetic patient in which blood glucose measurements are only available a few times a day at irregular intervals and compare results
from our proposed model to recurrent neural networks trained and used in the free running
mode or in the teacher forcing mode as well as to various linear models.
2 RECURRENT SYSTEMS WITH MISSING DATA
[Figure 1 illustration: predictions around a measurement at time t = 7, contrasting teacher forcing and free running (left) with the desired response after a measurement (right).]
Figure 1: A neural network predicts the next value of a time-series based on the latest two
previous measurements (left). As long as no measurements are available (t = 1 to t = 6),
the neural network is iterated (unfilled circles). In a free-running mode, the neural network
would ignore the measurement at time t = 7 to predict the time-series at time t = 8. In a
teacher forcing mode, it would substitute the measured value for one of the inputs and use
the iterated value for the other (unknown) input. This appears to be suboptimal since our
knowledge about the time-series at time t = 7 also provides us with information about the
time-series at time t = 6. For example the dotted circle might be a reasonable estimate. By
using the iterated value for the unknown input, the prediction of the teacher forced system
is not well defined and will in general lead to unsatisfactory results. A sensible response
is shown on the right where the first few predictions after the measurement are close to the
measurement. This can be achieved by including a proper error model (see text).
Consider a deterministic nonlinear dynamical model of the form
$$y_t = f_w(y_{t-1}, \ldots, y_{t-N}, u_t)$$
of order $N$, with input $u_t$ and where $f_w(\cdot)$ is a neural network model with parameter vector $w$. Such a recurrent model is either used in a free running mode in which network
predictions are used in the input of the neural network or in a teacher forcing mode where
measurements are substituted in the input of the neural network whenever these are available.
Figure 2: Left: The proposed architecture. Right: Linear impulse response.
Both can lead to undesirable results when many realizations are missing and when the system is highly stochastic. Figure 1 (left) shows that a free running model basically ignores
the measurement for prediction and that the teacher forced model substitutes the measured
value but leaves the unknown states at their predicted values which also might lead to undesirable responses. The traditional solution is to include a model of the error which leads
to nonlinear stochastic models, the simplest being
$$y_t = f_w(y_{t-1}, \ldots, y_{t-N}, u_t) + \epsilon_t$$
where $\epsilon_t$ is assumed to be additive uncorrelated zero-mean noise with probability density $P_\epsilon(\epsilon)$ and represents unmodeled system dynamics. For prediction and learning with missing values we have to integrate over the unknowns, which leads to complex integrals which, for nonlinear models, have to be approximated, for example, using Monte Carlo integration.¹ In general, those integrals are computationally too expensive to solve and, in
practice, one relies on locally linearized approximations of the nonlinearities typically in
form of the extended Kalman filter. The extended Kalman filter is suboptimal and summarizes past data by an estimate of the means and the covariances of the variables involved
(Lewis, 1986).
In this paper we pursue an alternative approach. Consider the model with state updates
$$y_t^* = f_w(y_{t-1}^*, \ldots, y_{t-N}^*, u_t) \qquad (1)$$
$$x_t = \sum_{i=1}^{K}\theta_i x_{t-i} + \epsilon_t \qquad (2)$$
$$y_t = y_t^* + x_t = f_w(y_{t-1}^*, \ldots, y_{t-N}^*, u_t) + \sum_{i=1}^{K}\theta_i x_{t-i} + \epsilon_t \qquad (3)$$
and with measurement equation
$$z_t = y_t + \delta_t. \qquad (4)$$
where $\epsilon_t$ and $\delta_t$ denote additive noise. The variable of interest $y_t$ is now the sum of the deterministic response of the recurrent neural network $y_t^*$ and a linear system error model $x_t$ (Figure 2). $z_t$ is a noisy measurement of $y_t$. In particular we are interested in the special cases that $y_t$ can be measured with certainty (variance of $\delta_t$ is zero) or that a measurement is missing (variance of $\delta_t$ is infinity). The nice feature is now that $y_t^*$ can be considered a deterministic input to the state space model consisting of the equations (2)-(3). This means that for optimal one-step or multiple-step prediction, we can use the linear Kalman filter for equations (2)-(3) and measurement equation (4) by treating $y_t^*$ as deterministic input. Similarly, to train the parameters in the linear part of the system (i.e. $\{\theta_i\}_{i=1}^K$) we can use an EM adaptation rule, implemented using forward-backward Kalman filter equations (see the Appendix). The deterministic recurrent neural network is adapted with the residual error which cannot be explained by the linear model, i.e. $\mathrm{target}_t^{rnn} = y_t^m - \hat{y}_t^{linear}$,
1 For maximum likelihood learning of linear models we obtain EM equations which can be solved
using forward-backward Kalman equations (see Appendix).
where $y_t^m$ is a measurement of $y_t$ at time $t$ and where $\hat{y}_t^{linear}$ is the estimate of the linear model. After the recurrent neural network is adapted, the linear model can be retrained using the residual error which cannot be explained by the neural network; then again the neural network is retrained, and so on until no further improvement can be achieved.
The advantage of this approach is that all of the nonlinear interactions are modeled by
a recurrent neural network which can be trained deterministically. The linear model is
responsible for the noise model which can be trained using powerful learning algorithms
for linear systems. The constraint is that the error model cannot be nonlinear which often
might not be a major limitation.
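To make the alternating scheme concrete, here is a schematic sketch of the training loop in Python/NumPy; the interface of the rnn object and the helper kalman_smooth_linear_error are placeholders we introduce for illustration, not the authors' implementation.

```python
import numpy as np

def fit_combined_model(y_meas, mask, u, rnn, n_rounds=10):
    """Alternate between EM re-estimation of the linear error model and RTRL
    adaptation of the recurrent network, as described in Section 2.

    y_meas : blood glucose values (entries at missing times are ignored)
    mask   : 1 where a measurement exists, 0 where it is missing
    u      : exogenous inputs (insulin, food and exercise response functions)
    rnn    : object with predict(u) and train_rtrl(u, target, mask) methods;
             this interface and kalman_smooth_linear_error are assumed helpers.
    """
    theta, q_var = np.zeros(2), 1.0                  # order-2 error model
    for _ in range(n_rounds):
        y_star = rnn.predict(u)                      # free-running network output
        # EM with forward-backward Kalman equations on the residuals y - y*
        theta, q_var, x_hat = kalman_smooth_linear_error(
            y_meas - y_star, mask, theta, q_var)
        # adapt the network on what the linear error model cannot explain:
        # target^rnn_t = y^m_t - x_hat_t, used only at measured times
        rnn.train_rtrl(u, y_meas - x_hat, mask)
    return rnn, theta, q_var
```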
3
BLOOD GLUCOSE PREDICTION OF A DIABETIC
The goal of this work is to develop a predictive model of the blood glucose of a person with
type 1 Diabetes mellitus. Such a model can have several useful applications in therapy:
it can be used to warn a person of dangerous metabolic states, it can be used to make
recommendations to optimize the person's therapy and, finally, it can be used in the design
of a stabilizing control system for blood glucose regulation, a so-called "artificial beta cell"
(Tresp, Moody and Delong, 1994). We want the model to be able to adapt using patient data
collected under normal every day conditions rather than the controlled conditions typical
of a clinic. In a non-clinical setting, only a few blood glucose measurements per day are
available.
Our data set consists of the protocol of a diabetic over a period of almost six months. During that time period, times and dosages of insulin injections (basal insulin $u_t^1$ and normal insulin $u_t^2$), the times and amounts of food intake (fast $u_t^3$, intermediate $u_t^4$ and slow $u_t^5$ carbohydrates), the times and durations of exercise (regular $u_t^6$ or intense $u_t^7$) and the blood glucose level $y_t$ (measured a few times a day) were recorded. The $u_t^j$, $j = 1, \ldots, 7$ are equal to zero except if there is an event, such as food intake, insulin injection or exercise. For our data set, inputs $u_t^j$ were recorded with 15 minute time resolution. We used the first
43 days for training the model (containing 312 measurements of the blood glucose) and the
following 21 days for testing (containing 151 measurements of the blood glucose). This
means that we have to deal with approximately 93% of missing data during training.
The effects of insulin, food and exercise on the blood glucose are delayed and are approximated by linear response functions. $v_t^j$ describes the effect of input $u_t^j$ on glucose. As an example, the response of normal insulin $u^2$ after injection is determined by the diffusion of the subcutaneously injected insulin into the blood stream and can be modeled by three first order compartments in series or, as we have done, by a response function of the form $v_t^2 = \sum_{\tau} g_2(t-\tau)\,u_\tau^2$ with $g_2(t) = a_2 t^2 e^{-b_2 t}$ (see Figure 2 for a typical impulse response). The functional mappings $g_j(\cdot)$ for the digestive tract and for exercise are less well known. In our experiments we followed other authors and used response functions of the above form.
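As a small illustration of these response functions, the sketch below convolves an event sequence with $g(t) = a t^2 e^{-bt}$; the parameter values are made up for the example and are not taken from the paper.

```python
import numpy as np

def response(u, a, b, horizon=96):
    """Convolve an event input u (one value per 15-minute step) with the
    impulse response g(t) = a * t^2 * exp(-b * t)."""
    t = np.arange(1, horizon + 1, dtype=float)
    g = a * t**2 * np.exp(-b * t)
    return np.convolve(u, g)[:len(u)]

# Example: a single insulin injection of 10 units at step 20 (assumed values).
u = np.zeros(200)
u[20] = 10.0
v = response(u, a=0.05, b=0.1)
```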
The response functions $g_j(\cdot)$ describe the delayed effect of the inputs on the blood glucose. We assume that the functional form of $g_j(\cdot)$ is sufficient to capture the various delays of the inputs and can be tuned to the physiology of the patient by varying the parameters $a_j, b_j$. To be able to capture the highly nonlinear physiological interactions between the response functions $v_t^j$ and the blood glucose level $y_t$, which is measured only a few times a day, we employ a neural network in combination with a linear error model as described in Section 2. In our experiments $f_w(\cdot)$ is a feedforward multi-layer perceptron with three hidden units. The five inputs to the network were insulin ($in_t^1 = v_t^1 + v_t^2$), food ($in_t^2 = v_t^3 + v_t^4 + v_t^5$), exercise ($in_t^3 = v_t^6 + v_t^7$) and the current and previous estimate of the blood glucose. To be specific, the second order nonlinear neural network model is
$$y_t^* = y_{t-1}^* + f_w(y_{t-1}^*, y_{t-2}^*, in_t^1, in_t^2, in_t^3) \qquad (5)$$
For the linear error model we also use a model of order 2:
$$x_t = \theta_1 x_{t-1} + \theta_2 x_{t-2} + \epsilon_t \qquad (6)$$
Table 1 shows the explained variance of the test set for different predictive models.²
In the first experiment (RNN-FR) we estimate the blood glucose at time $t$ as the output of the neural network, $y_t = y_t^*$. The neural network is used in the free running mode for training and prediction. We use RTRL to both adapt the weights in the neural network as well as all parameters in the response functions $g_j(\cdot)$. The RNN-FR model explains 14.1 percent of the variance. The RNN-TF model is identical to the previous experiment except that measurements are substituted whenever available. RNN-TF could explain more of the variance (18.8%). The reason for the better performance is, of course, that information about measurements of the blood glucose can be exploited.
The model RNN-LEM2 (error model with order 2) corresponds to the combination of the recurrent neural network and the linear error model as introduced in Section 2. Here, $y_t = x_t + y_t^*$ models the blood glucose and $z_t = y_t + \delta_t$ is the measurement equation, where we set the variance of $\delta_t$ to 0 for a measurement of the blood glucose at time $t$ and to $\infty$ for missing values. For $\epsilon_t$ we assume Gaussian independent noise. For prediction, equation (5) is iterated in the free running mode. The blood glucose at time $t$ is estimated using a linear Kalman filter, treating $y_t^*$ as deterministic input in the state space model $y_t = x_t + y_t^*$, $z_t = y_t + \delta_t$. We adapt the parameters in the linear error model (i.e. $\theta_1$, $\theta_2$, the variance of $\epsilon_t$) using an EM adaptation rule, implemented using forward-backward Kalman filter equations (see Appendix). The parameters in the neural network are adapted using RTRL exactly the same way as in the RNN-FR model, except that the target is now $\mathrm{target}_t^{rnn} = y_t^m - \hat{y}_t^{linear}$, where $y_t^m$ is a measurement of $y_t$ at time $t$ and $\hat{y}_t^{linear}$ is the estimate of the linear error model (based on the linear Kalman filter). The adaptation of the linear error model and the neural network are performed alternatingly until no significant further improvement in performance can be achieved.
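A simplified sketch of this prediction scheme for the order-2 error model is given below; it treats the network output y* as a known deterministic sequence and runs a small Kalman filter over the AR(2) error state, with made-up noise variances, as an illustration only.

```python
import numpy as np

def predict_glucose(y_star, y_meas, mask, theta, q_var, obs_var=1e-6):
    """y_star: deterministic RNN output; y_meas/mask: sparse measurements.
    State s_t = (x_t, x_{t-1}) follows the order-2 linear error model."""
    Theta = np.array([[theta[0], theta[1]],
                      [1.0,      0.0     ]])
    Q = np.array([[q_var, 0.0], [0.0, 0.0]])
    H = np.array([[1.0, 0.0]])                    # we observe y_t = y*_t + x_t
    s, P = np.zeros(2), np.eye(2)
    y_hat = np.zeros(len(y_star))
    for t in range(len(y_star)):
        s, P = Theta @ s, Theta @ P @ Theta.T + Q          # time update
        if mask[t]:                                        # measurement update
            K = P @ H.T / (H @ P @ H.T + obs_var)
            s = s + (K * (y_meas[t] - y_star[t] - H @ s)).ravel()
            P = P - K @ H @ P
        y_hat[t] = y_star[t] + s[0]
    return y_hat
```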
As indicated in Table 1, the RNN-LEM2 model achieves the best prediction performance
with an explained variance of 44.9% (first order error model RNN-LEM1: 43.7%). As
a comparison, we show the performance of just the linear error model LEM (this model
ignores all inputs), a linear model (LM-FR) without an error model trained with RTRL and
a linear model with an error model (LM-LEM). Interestingly, the linear error model which
does not see any of the inputs can explain more variance (12.9%) than the LM-FR model
(8.9%). The LM-LEM model, which can be considered a combination of both, can explain more than the sum of the individual explained variances (31.5%), which indicates that the combined training gives better performance than training both submodels individually. Note also that the nonlinear models (RNN-FR, RNN-TF, RNN-LEM) give considerably
better results than their linear counterparts, confirming that the system is highly nonlinear.
Figure 3 (left) shows an example of the responses of some of the models. We see that
the free running neural network (dotted line) has relatively small amplitudes and cannot
predict the three measurements very well. The RNN-TF model (dashed line) shows a
better response to the measurements than the free running network. The best prediction of
all measurements is indeed achieved by the RNN-LEM model (continuous line).
Based on the linear iterated Kalman filter we can calculate the variance of the prediction.
As shown in Figure 3 (right) the standard deviation is small right after a measurement is
available and then converges to a constant value. Based on the prediction and the estimated
variance, it will be possible to do a risk analysis for the diabetic (i.e. a warning of dangerous
metabolic states).
2MSPE(model) is the mean squared prediction error on the test set of the model and
MSPE( mean) is the mean squared prediction error of predicting the mean.
[Figure 3 plots: blood glucose responses (left) and the standard deviation of the prediction (right), both versus time in hours.]
Figure 3: Left: Responses of some models to three measurements. Note that the prediction of the first measurement is bad for all models, but that the RNN-LEM model (continuous line) predicts the following measurements much better than both the RNN-FR (dotted) and the RNN-TF (dashed) model. Right: Standard deviation of the prediction error of RNN-LEM.
Table 1: Explained variance on the test set [in percent]: 100 · (1 − MSPE(model)/MSPE(mean)).

MODEL      %        MODEL       %
mean       0        RNN-TF      18.8
LM         8.9      LM-LEM      31.4
LEM        12.9     RNN-LEM1    43.7
RNN-FR     14.1     RNN-LEM2    44.9

4 CONCLUSIONS
We introduced a combination of a nonlinear recurrent neural network and a linear error
model. Applied to blood glucose prediction it gave significantly better results than both
recurrent neural networks alone and various linear models. Further work might lead to a
predictive model which can be used by a diabetic on a daily basis. We believe that our results are very encouraging. We also expect that our specific model can find applications in other stochastic nonlinear systems in which measurements are only available at irregular intervals, such as in wastewater treatment, chemical process control and various physiological systems. Further work will include error models for the input measurements (for example, the number of food calories is typically estimated with great uncertainty).
Appendix: EM Adaptation Rules for Training the Linear Error Model
Model and observation equations of a general model are³
$$x_t = \Theta x_{t-1} + \epsilon_t, \qquad z_t = M_t x_t + \delta_t \qquad (7)$$
where $\Theta$ is the $K \times K$ transition matrix of the $K$-order linear error model. The $K \times 1$ noise terms $\epsilon_t$ are zero-mean uncorrelated normal vectors with common covariance matrix $Q$. $\delta_t$ is an $m$-dimensional⁴ zero-mean uncorrelated normal noise vector with covariance matrix $R_t$. Recall that we consider certain measurements and missing values as special cases of
3Note, that any linear system of order K can be transformed into a first order linear system of
dimension K.
4 m indicates the dimension of the output of the time-series.
noisy measurements. The initial state of the system is assumed to be a normal vector with
mean $\mu$ and covariance $\Sigma$.
We describe the EM equations for maximizing the likelihood of the model. Define the estimated parameters at the $(r+1)$st iterate of EM as the values $\mu, \Sigma, \Theta, Q$ which maximize
$$G(\mu, \Sigma, \Theta, Q) = E_r\left(\log L \mid z_1, \ldots, z_n\right) \qquad (8)$$
where $\log L$ is the log-likelihood of the complete data $x_0, x_1, \ldots, x_n, z_1, \ldots, z_n$ and $E_r$ denotes the conditional expectation relative to a density containing the $r$th iterate values $\mu(r), \Sigma(r), \Theta(r)$ and $Q(r)$. Recall that missing targets are modeled implicitly by the definition of $M_t$ and $R_t$.
For calculating the conditional expectation defined in (8) the following set of recursions is used (using standard Kalman filtering results, see (Jazwinski, 1970)). First, we use the forward recursion
$$x_t^{t-1} = \Theta x_{t-1}^{t-1}$$
$$P_t^{t-1} = \Theta P_{t-1}^{t-1}\Theta^T + Q$$
$$K_t = P_t^{t-1}M_t^T\left(M_t P_t^{t-1}M_t^T + R_t\right)^{-1} \qquad (9)$$
$$x_t^t = x_t^{t-1} + K_t\left(z_t - M_t x_t^{t-1}\right)$$
$$P_t^t = P_t^{t-1} - K_t M_t P_t^{t-1}$$
where we take $x_0^0 = \mu$ and $P_0^0 = \Sigma$. Next, we use the backward recursion
$$J_{t-1} = P_{t-1}^{t-1}\Theta^T\left(P_t^{t-1}\right)^{-1}$$
$$x_{t-1}^n = x_{t-1}^{t-1} + J_{t-1}\left(x_t^n - \Theta x_{t-1}^{t-1}\right) \qquad (10)$$
$$P_{t-1}^n = P_{t-1}^{t-1} + J_{t-1}\left(P_t^n - P_t^{t-1}\right)J_{t-1}^T$$
$$P_{t-1,t-2}^n = P_{t-1}^{t-1}J_{t-2}^T + J_{t-1}\left(P_{t,t-1}^n - \Theta P_{t-1}^{t-1}\right)J_{t-2}^T$$
with initialization $P_{n,n-1}^n = (I - K_n M_n)\Theta P_{n-1}^{n-1}$. One forward and one backward recursion completes the E-step of the EM algorithm.
To derive the M-step, first realize that the conditional expectations in (8) yield the following equation:
$$G = -\tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}\mathrm{tr}\left\{\Sigma^{-1}\left(P_0^n + (x_0^n - \mu)(x_0^n - \mu)^T\right)\right\} - \tfrac{n}{2}\log|Q| - \tfrac{1}{2}\mathrm{tr}\left\{Q^{-1}\left(C - B\Theta^T - \Theta B^T + \Theta A\Theta^T\right)\right\}$$
$$\qquad - \tfrac{1}{2}\sum_{t=1}^{n}\left[\log|R_t| + \mathrm{tr}\left\{R_t^{-1}\left[(z_t - M_t x_t^n)(z_t - M_t x_t^n)^T + M_t P_t^n M_t^T\right]\right\}\right] \qquad (11)$$
where $\mathrm{tr}\{\cdot\}$ denotes the trace, $A = \sum_{t=1}^{n}\left(P_{t-1}^n + x_{t-1}^n x_{t-1}^{n\,T}\right)$, $B = \sum_{t=1}^{n}\left(P_{t,t-1}^n + x_t^n x_{t-1}^{n\,T}\right)$ and $C = \sum_{t=1}^{n}\left(P_t^n + x_t^n x_t^{n\,T}\right)$.
$\Theta(r+1) = BA^{-1}$ and $Q(r+1) = n^{-1}\left(C - BA^{-1}B^T\right)$ maximize the log-likelihood equation (11). $\mu(r+1)$ is set to $x_0^n$ and $\Sigma$ may be fixed at some reasonable baseline level. The derivation of these equations can be found in (Shumway & Stoffer, 1981).
The E- (forward and backward Kalman filter equations) and M-steps are alternated repeatedly until convergence to obtain the EM solution.
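For concreteness, a compact sketch of one EM iteration (forward filter, backward smoother, and the Θ and Q updates) is given below; it follows our reading of equations (7)-(11), uses a time-invariant M and R, omits the lag-one covariance term of B for brevity, and is an illustration rather than the authors' implementation.

```python
import numpy as np

def em_step(z, M, R, Theta, Q, mu, Sigma):
    """One EM iteration for x_t = Theta x_{t-1} + eps_t, z_t = M x_t + delta_t.
    z: (n, m) observations; in practice missing values get a very large R_t."""
    n = len(z)
    K = Theta.shape[0]
    xp, Pp = np.zeros((n, K)), np.zeros((n, K, K))      # predicted (t | t-1)
    xf, Pf = np.zeros((n, K)), np.zeros((n, K, K))      # filtered (t | t)
    x_prev, P_prev = mu, Sigma
    for t in range(n):                                   # forward recursion, eq. (9)
        xp[t] = Theta @ x_prev
        Pp[t] = Theta @ P_prev @ Theta.T + Q
        Kt = Pp[t] @ M.T @ np.linalg.inv(M @ Pp[t] @ M.T + R)
        xf[t] = xp[t] + Kt @ (z[t] - M @ xp[t])
        Pf[t] = Pp[t] - Kt @ M @ Pp[t]
        x_prev, P_prev = xf[t], Pf[t]
    xs, Ps = xf.copy(), Pf.copy()                        # smoothed (t | n)
    for t in range(n - 2, -1, -1):                       # backward recursion, eq. (10)
        J = Pf[t] @ Theta.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - Theta @ xf[t])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    # M-step sufficient statistics; indexing differs slightly from the text
    # because we do not carry a separate x_0 state here.
    A = sum(Ps[t - 1] + np.outer(xs[t - 1], xs[t - 1]) for t in range(1, n))
    B = sum(np.outer(xs[t], xs[t - 1]) for t in range(1, n))
    C = sum(Ps[t] + np.outer(xs[t], xs[t]) for t in range(1, n))
    Theta_new = B @ np.linalg.inv(A)
    Q_new = (C - Theta_new @ B.T) / (n - 1)
    return Theta_new, Q_new, xs
```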
References
Jazwinski, A. H. (1970) Stochastic Processes and Filtering Theory, Academic Press, N.Y.
Lewis, F. L. (1986) Optimal Estimation, John Wiley, N.Y.
Shumway, R. H. and Stoffer, D. S. (1981) Time Series Smoothing and Forecasting Using the EM Algorithm, Technical Report No. 27, Division of Statistics, UC Davis.
Tresp, V., Moody, J. and Delong, W.-R. (1994) Neural Modeling of Physiological Processes, in Computational Learning Theory and Natural Learning Systems 2, S. Hanson et al., eds., MIT Press.
382 | 1,349 | Learning to Schedule Straight-Line Code
Eliot Moss, Paul Utgoff, John Cavazos
Doina Precup, Darko Stefanovic
Dept. of Comp. Sci., Univ. of Mass.
Amherst, MA 01003
Carla Brodley, David Scheeff
Sch. of Elec. and Comp. Eng.
Purdue University
W. Lafayette, IN 47907
Abstract
Program execution speed on modem computers is sensitive, by a factor of
two or more, to the order in which instructions are presented to the processor. To realize potential execution efficiency, an optimizing compiler must
employ a heuristic algorithm for instruction scheduling. Such algorithms
are painstakingly hand-crafted, which is expensive and time-consuming. We
show how to cast the instruction scheduling problem as a learning task, obtaining the heuristic scheduling algorithm automatically. Our focus is the
narrower problem of scheduling straight-line code (also called basic blocks
of instructions). Our empirical results show that just a few features are adequate for quite good performance at this task for a real modem processor,
and that any of several supervised learning methods perform nearly optimally with respect to the features used.
1 Introduction
Modem computer architectures provide semantics of execution equivalent to sequential execution of instructions one at a time. However, to achieve higher execution efficiency, they
employ a high degree of internal parallelism. Because individual instruction execution times
vary, depending on when an instruction's inputs are available, when its computing resources
are available, and when it is presented, overall execution time can vary widely. Based on just
the semantics of instructions, a sequence of instructions usually has many permutations that
are easily shown to have equivalent meaning-but they may have considerably different execution time. Compiler writers therefore include algorithms to schedule instructions to achieve
low execution time. Currently, such algorithms are hand-crafted for each compiler and target
processor. We apply learning so that the scheduling algorithm is constructed automatically.
Our focus is local instruction scheduling, i.e., ordering instructions within a basic block. A
basic block is a straight-line sequence of code, with a conditional or unconditional branch
instruction at the end. The scheduler should find optimal, or good, orderings of the instructions
prior to the branch. It is safe to assume that the compiler has produced a semantically correct
sequence of instructions for each basic block. We consider only reorderings of each sequence
(not more general rewritings), and only those reorderings that cannot affect the semantics.
The semantics of interest are captured by dependences of pairs of instructions. Specifically,
instruction Ij depends on (must follow) instruction Ii if: it follows Ii in the input block and has
one or more of the following dependences on Ii: (a) Ij uses a register used by Ii and at least one
of them writes the register (condition codes, if any, are treated as a register); (b) Ij accesses a
memory location that may be the same as one accessed by Ii, and at least one of them writes
the location. From the input total order of instructions, one can thus build a dependence DAG,
usually a partial (not a total) order, that represents all the semantics essential for scheduling the instructions of a basic block. Figure 1 gives a sample basic block and its DAG. The task of scheduling is to find a least-cost total order of each block's DAG.
(a) C code:
    X = V;
    Y = *P;
    P = P + 1;
(b) Instruction sequence to be scheduled:
    1: STQ  R1, X
    2: LDQ  R2, 0(R10)
    3: STQ  R2, Y
    4: ADDQ R10, R10, 8
(c) Dependence DAG of the instructions   (d) Partial schedule
Figure 1: Example basic block code, DAG, and partial schedule
2 Learning to Schedule
The learning task is to produce a scheduling procedure to use in the performance task of
scheduling instructions of basic blocks. One needs to transform the partial order of instructions into a total order that will execute as efficiently as possible, assuming that all memory
references "hit" in the caches. We consider the class of schedulers that repeatedly select the
apparent best of those instructions that could be scheduled next, proceeding from the beginning
of the block to the end; this greedy approach should be practical for everyday use.
Because the scheduler selects the apparent best from those instructions that could be selected
next, the learning task consists of learning to make this selection well. Hence, the notion
of 'apparent best instruction' needs to be acquired. The process of selecting the best of the
alternatives is like finding the maximum of a list of numbers. One keeps in hand the current
best, and proceeds with pairwise comparisons, always keeping the better of the two. One
can view this as learning a relation over triples (P,Ii,Ij), where P is the partial schedule (the
total order of what has been scheduled, and the partial order remaining), and I is the set of
instructions from which the selection is to be made. Those triples that belong to the relation
define pairwise preferences in which the first instruction is considered preferable to the second.
Each triple that does not belong to the relation represents a pair in which the first instruction is
not better than the second.
One must choose a representation in which to state the relation, create a process by which correct examples and counter-examples of the relation can be inferred, and modify the expression
of the relation as needed. Let us consider these steps in greater detail.
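The greedy selection loop described above can be sketched as follows; the code is our own illustration in Python, with a hypothetical prefer(partial, i, j) predicate standing in for the learned preference relation.

```python
def schedule_block(dag, prefer):
    """Greedy list scheduling: repeatedly pick the apparent best ready instruction.

    dag    : dict mapping each instruction to the set of instructions it depends on
    prefer : learned predicate; prefer(scheduled, i, j) is True when i is judged
             better than j given the current partial schedule (hypothetical API)
    """
    scheduled = []
    remaining = set(dag)
    while remaining:
        ready = [i for i in remaining
                 if all(dep not in remaining for dep in dag[i])]
        best = ready[0]
        for cand in ready[1:]:          # pairwise "max" over the ready instructions
            if prefer(scheduled, cand, best):
                best = cand
        scheduled.append(best)
        remaining.remove(best)
    return scheduled
```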
2.1 Representation of Scheduling Preference
The representation used here takes the form of a logical relation, in which known examples
and counter-examples of the relation are provided as triples. It is then a matter of constructing
or revising an expression that evaluates to TRUE if (P,Ii,!j) is a member of the relation, and
FALSE if it is not. If (P, Ii, Ij) is considered to be a member of the relation, then it is safe to
infer that (P,Ij,Ii) is not a member,
For any representation of preference, one needs to represent features of a candidate instruction
and of the partial schedule. There is some art in picking useful features for a state. The method
Learning to Schedule Straight-line Code
931
used here was to consider the features used in a scheduler (called DEC below) supplied by
the processor vendor, and to think carefully about those and other features that should indicate
predictive instruction characteristics or important aspects of the partial schedule.
2.2 Inferring Examples and Counter-Examples
One would like to produce a preference relation consistent with the examples and counterexamples that have been inferred, and that generalizes well to triples that have not been seen.
A variety of methods exist for learning and generalizing from examples, several of which are
tested in the experiments below. Of interest here is how to infer the examples and counterexamples needed to drive the generalization process.
The focus here is on supervised learning (reinforcement learning is mentioned later), in which
one provides a process that produces correctly labeled examples and counter-examples of the
preference relation. For the instruction-scheduling task, it is possible to search for an optimal
schedule for blocks of ten or fewer instructions. From an optimal schedule, one can infer
the correct preferences that would have been needed to produce that optimal schedule when
selecting the best instruction from a set of candidates, as described above. It may well be that
there is more than one optimal schedule, so it is important only to infer a preference for a pair
of instructions when the first can produce some schedule better than any the second can.
One should be concerned whether training on preference pairs from optimally scheduled small
blocks is effective, a question the experiments address. It is worth noting that for programs
studied below, 92% of the basic blocks are of this small size, and the average block size is
4.9 instructions. On the other hand, larger blocks are executed more often, and thus have
disproportionate impact on program execution time. One could learn from larger blocks by
using a high quality scheduler that is not necessarily optimal. However, the objective is to be
able to learn to schedule basic blocks well for new architectures, so a useful learning method
should not depend on any pre-existing solution. Of course there may be some utility in trying to
improve on an existing scheduler, but that is not the longer-term goal here. Instead, we would
like to be able to construct a scheduler with high confidence that it produces good schedules.
2.3 Updating the Preference Relation
A variety of learning algorithms can be brought to bear on the task of updating the expression
of the preference relation. We consider four methods here.
The first is the decision tree induction program ITI (Utgoff, Berkman & Clouse, in press). Each triple that is an example of the relation is translated into a vector of feature values, as described
in more detail below. Some of the features pertain to the current partial schedule, and others
pertain to the pair of candidate instructions. The vector is then labeled as an example of the
relation. For the same pair of instructions, a second triple is inferred, with the two instructions
reversed. The feature vector for the triple is constructed as before, and labeled as a counterexample of the relation. The decision tree induction program then constructs a tree that can be
used to predict whether a candidate triple is a member of the relation.
The second method is table lookup (TLU), using a table indexed by the feature values of a
triple. The table has one cell for every possible combination of feature values, with integer
valued features suitably discretized. Each cell records the number of positive and negative
instances from a training set that map to that cell. The table lookup function returns the most
frequently seen value associated with the corresponding cell. It is useful to know that the data
set used is large and generally covers all possible table cells with multiple instances. Thus,
table lookup is "unbiased" and one would expect it to give the best predictions possible for the
chosen features, assuming the statistics of the training and test sets are consistent.
The third method is the ELF function approximator (Utgoff & Precup, 1997), which constructs
additional features (much like a hidden unit) as necessary while it updates its representation of
the function that it is learning. The function is represented by two layers of mapping. The first
layer maps the features of the triple, which must be boolean for ELF, to a set of boolean feature
values. The second layer maps those features to a single scalar value by combining them
linearly with a vector of real-valued coefficients called weights. Though the second layer is
linear in the instruction features, the boolean features are nonlinear in the instruction features.
Finally, the fourth method considered is a feed-forward artificial neural network (NN) (Rumelhart, Hinton & Williams, 1986). Our particular network uses scaled conjugate gradient descent
in its back-propagation, which gives results comparable to back-propagation with momentum,
but converges much faster. Our configuration uses 10 hidden units.
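As a sketch of how the supervised variants can be trained from optimally scheduled small blocks, the snippet below builds labeled preference pairs and fits a scikit-learn decision tree; the use of scikit-learn and the helper featurize(partial, i, j) are our own assumptions for illustration, not the paper's setup.

```python
from sklearn.tree import DecisionTreeClassifier

def build_training_set(decision_points, featurize):
    """decision_points: iterable of (partial, better, worse) triples inferred from
    optimal schedules of blocks with at most ten instructions.
    featurize(partial, i, j): hypothetical mapping of a triple to a feature vector."""
    X, y = [], []
    for partial, better, worse in decision_points:
        X.append(featurize(partial, better, worse)); y.append(1)   # example
        X.append(featurize(partial, worse, better)); y.append(0)   # counter-example
    return X, y

def train_preference(decision_points, featurize):
    X, y = build_training_set(decision_points, featurize)
    model = DecisionTreeClassifier()
    model.fit(X, y)
    # The learned predicate used by the greedy scheduler:
    return lambda partial, i, j: model.predict([featurize(partial, i, j)])[0] == 1
```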
3 Empirical Results
We aimed to answer the following questions: Can we schedule as well as hand-crafted algorithms in production compilers? Can we schedule as well as the best hand-crafted algorithms?
How close can we come to optimal schedules? The first two questions we answer with comparisons of program execution times, as predicted from simulations of individual basic blocks
(multiplied by the number of executions of the blocks as measured in sample program runs).
This measure seems fair for local instruction scheduling, since it omits other execution time
factors being ignored. Ultimately one would deal with these factors, but they would cloud the
issues for the present enterprise. Answering the third question is harder, since it is infeasible
to generate optimal schedules for long blocks. We offer a partial answer by measuring the
number of optimal choices made within small blocks.
To proceed, we selected a computer architecture implementation and a standard suite of benchmark programs (SPEC95) compiled for that architecture. We extracted basic blocks from the
compiled programs and used them for training, testing, and evaluation as described below.
3.1 Architecture and Benchmarks
We chose the Digital Alpha (Sites, 1992) as our architecture for the instruction scheduling
problem. When introduced it was the fastest scalar processor available, and from a dependence
analysis and scheduling standpoint its instruction set is simple. The 21064 implementation of
the instruction set (DEC, 1992) is interestingly complex, having two dissimilar pipelines and
the ability to issue two instructions per cycle (also called dual issue) if a complicated collection
of conditions hold. Instructions take from one to many tens of cycles to execute.
SPEC95 is a standard benchmark commonly used to evaluate CPU execution time and the
impact of compiler optimizations. It consists of 18 programs, 10 written in FORTRAN and
tending to use floating point calculations heavily, and 8 written in C and focusing more on
integers, character strings, and pointer manipulations. These were compiled with the vendor's
compiler, set at the highest level of optimization offered, which includes compile- or linktime instruction scheduling. We call these the Orig schedules for the blocks. The resulting
collection has 447,127 basic blocks, composed of 2,205,466 instructions.
3.2 Simulator, Schedulers, and Features
Researchers at Digital made publicly available a simulator for basic blocks for the 21064,
which will indicate how many cycles a given block requires for execution, assuming all memory references hit in the caches and translation look-aside buffers, and no resources are busy
when the basic block starts execution. When presenting a basic block one can also request that
the simulator apply a heuristic greedy scheduling algorithm. We call this scheduler DEC.
By examining the DEC scheduler, applying intuition, and considering the results of various
preliminary experiments, we settled on using the features of Table 1 for learning. The mapping
from triples to feature vectors is: odd: a single boolean 0 or 1; wcp, e, and d: the sign (-, 0, or +) of the value for Ij minus the value for Ii; ic: both instructions' values, expressed as 1 of 20 categories. For ELF and NN the categorical values for ic, as well as the signs, are mapped to a 1-of-n vector of bits, n being the number of distinct values.
Table 1: Features for Instructions and Partial Schedule

Odd Partial (odd)
  Heuristic description: Is the current number of instructions scheduled odd or even?
  Intuition for use: If TRUE, we're interested in scheduling instructions that can dual-issue with the previous instruction.

Instruction Class (ic)
  Heuristic description: The Alpha's instructions can be divided into equivalence classes with respect to timing properties.
  Intuition for use: The instructions in each class can be executed only in certain execution pipelines, etc.

Weighted Critical Path (wcp)
  Heuristic description: The height of the instruction in the DAG (the length of the longest chain of instructions dependent on this one), with edges weighted by expected latency of the result produced by the instruction.
  Intuition for use: Instructions on longer critical paths should be scheduled first, since they affect the lower bound of the schedule cost.

Actual Dual (d)
  Heuristic description: Can the instruction dual-issue with the previous scheduled instruction?
  Intuition for use: If Odd Partial is TRUE, it is important that we find an instruction, if there is one, that can issue in the same cycle with the previous scheduled instruction.

Max Delay (e)
  Heuristic description: The earliest cycle when the instruction can begin to execute, relative to the current cycle; this takes into account any wait for inputs or functional units to become available.
  Intuition for use: We want to schedule instructions that will have their data and functional unit available earliest.
This mapping of triples to feature values loses information. This does not affect learning much
(as shown by preliminary experiments omitted here), but it reduces the size of the input space,
and tends to improve both speed and quality of learning for some learning algorithms.
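A rough sketch of how such a feature vector might be assembled for a pair of candidate instructions is shown below; the sign discretization and the simple DAG-based wcp computation follow the descriptions above, while the data structures and helper functions are our own assumptions.

```python
from functools import lru_cache

def make_featurizer(dag, latency, instr_class, max_delay, can_dual):
    """dag[i]: set of instructions that depend on i (successors in the DAG).
    latency[i], instr_class[i]: per-instruction properties.
    max_delay(partial, i), can_dual(partial, i): hypothetical helpers."""

    @lru_cache(maxsize=None)
    def wcp(i):  # weighted critical path: longest latency-weighted chain below i
        succs = dag[i]
        return latency[i] + (max(wcp(s) for s in succs) if succs else 0)

    def sign(x):
        return (x > 0) - (x < 0)

    def featurize(partial, i, j):
        return (
            len(partial) % 2,                                      # odd
            instr_class[i], instr_class[j],                        # ic (categorical)
            sign(wcp(j) - wcp(i)),                                 # wcp
            sign(max_delay(partial, j) - max_delay(partial, i)),   # e
            sign(can_dual(partial, j) - can_dual(partial, i)),     # d
        )

    return featurize
```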
3.3 Experimental Procedures
From the 18 SPEC95 programs we extracted all basic blocks, and also determined, for sample runs of each program, the number of times each basic block was executed. For blocks
having no more than ten instructions, we used exhaustive search of all possible schedules to
(a) find instruction decision points with pairs of choices where one choice is optimal and the
other is not, and (b) determine the best schedule cost attainable for either decision. Schedule
costs are always as judged by the DEC simulator. This procedure produced over 13,000,000
distinct choice pairs, resulting in over 26,000,000 triples (given that swapping Ii and Ij creates
a counter-example from an example and vice versa). We selected 1% of the choice pairs at
random (always insuring we had matched example/counter-example triples).
For each learning scheme we performed an 18-fold cross-validation, holding out one program's
blocks for independent testing. We evaluated both how often the trained scheduler made optimal decisions, and the simulated execution time of the resulting schedules. The execution time
was computed as the sum of simulated basic block costs, weighted by execution frequency as
observed in sample program runs, as described above.
To summarize the data, we use geometric means across the 18 runs of each scheduler. The
geometric mean $g(x_1, \ldots, x_n)$ of $x_1, \ldots, x_n$ is $(x_1 \cdot \ldots \cdot x_n)^{1/n}$. It has the nice property that $g(x_1/y_1, \ldots, x_n/y_n) = g(x_1, \ldots, x_n)/g(y_1, \ldots, y_n)$, which makes it particularly meaningful for comparing performance measures via ratios. It can also be written as the anti-logarithm of the mean of the logarithms of the $x_i$; we use that to calculate confidence intervals using traditional measures over the logarithms of the values. In any case, geometric means are preferred
for aggregating benchmark results across differing programs with varying execution times.
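A small helper illustrating this aggregation (geometric mean plus a log-space confidence interval) might look as follows; the 95% multiplier of 1.96 is a standard normal approximation we assume here, not a detail given in the paper.

```python
import math

def geometric_mean_ci(ratios, z=1.96):
    """Geometric mean of per-program ratios with a log-space confidence interval."""
    logs = [math.log(r) for r in ratios]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    half = z * sd / math.sqrt(n)
    return math.exp(mean), (math.exp(mean - half), math.exp(mean + half))

# Example: ratios of predicted execution time relative to Orig for the 18 programs.
```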
3.4 Results and Discussion
Our results appear in Table 2. For evaluations based on predicted program execution time, we
compare with Orig. For evaluations based directly on the learning task, i.e., optimal choices,
we compare with an optimal scheduler, but only over basic blocks no more than 10 instructions
long. Other experiments indicate that the DEC scheduler almost always produces optimal
schedules for such short blocks; we suspect it does well on longer blocks too.
Table 2: Experimental Results: Predicted Execution Time

                  Relevant Blocks Only                      All Blocks                         Small Blocks
Scheduler   cycles (x 10^9)  ratio to Orig           cycles (x 10^9)  ratio to Orig            % Optimal
                             (95% conf. int.)                         (95% conf. int.)          Choices
DEC             24.018       0.979 (0.969, 0.989)        28.385       0.983 (0.975, 0.992)         -
TLU             24.338       0.992 (0.983, 1.002)        28.710       0.995 (0.987, 1.003)       98.1
ITI             24.395       0.995 (0.984, 1.006)        28.758       0.996 (0.987, 1.006)       98.2
NN              24.410       0.995 (0.983, 1.007)        28.770       0.997 (0.986, 1.008)       98.1
ELF             24.465       0.998 (0.985, 1.010)        28.775       0.997 (0.988, 1.006)       98.1
Orig            24.525       1.000 (1.000, 1.000)        28.862       1.000 (1.000, 1.000)         -
Rand            31.292       1.276 (1.186, 1.373)        36.207       1.254 (1.160, 1.356)         -
The results show that all supervised learning techniques produce schedules predicted to be
better than the production compilers, but not as good as the DEC heuristic scheduler. This is a
striking success, given the small number of features. As expected, table lookup performs the
best of the learning techniques. Curiously, relative performance in terms of making optimal
decisions does not correlate with relative performance in terms of producing good schedules.
This appears to be because in each program a few blocks are executed very often, and thus
contribute much to execution time, and large blocks are executed disproportionately often.
Still, both measures of performance are quite good.
What about reinforcement learning? We ran experiments with temporal difference (TD) learning, some of which are described in (Scheeff, et al., 1997), and the results are not as good. This
problem appears to be tricky to cast in a form suitable for TD, because TD looks at candidate instructions in isolation, rather than in a preference setting. It is also hard to provide an
adequate reward function and features predictive for the task at hand.
4 Related Work, Conclusions, and Outlook
Instruction scheduling is well-known and others have proposed many techniques. Also, optimal instruction scheduling for today's complex processors is NP-complete. We found two
pieces of more closely related work. One is a patent (Tarsy & Woodard, 1994). From the
patent's claims it appears that the inventors trained a simple perceptron by adjusting weights of
some heuristics. They evaluate each weight setting by scheduling an entire benchmark suite,
running the resulting programs, and using the resulting times to drive weight adjustments. This
approach appears to us to be potentially very time-consuming. It has two advantages over our
technique: in the learning process it uses measured execution times rather than predicted or
simulated times, and it does not require a simulator. Being a patent, this work does not offer
experimental results. The other related item is the application of genetic algorithms to tuning
weights of heuristics used in a greedy scheduler (Beaty, S., Colcord, & Sweany, 1996). The
authors showed that different hardware targets resulted in different learned weights, but they
did not offer experimental evaluation of the quality of the resulting schedulers.
While the results here do not demonstrate it, it was not easy to cast this problem in a form
suitable for machine learning. However, once that form was accomplished, supervised learning produced quite good results on this practical problem, better than two vendor production
compilers, as shown on a standard benchmark suite used for evaluating such optimizations.
Thus the outlook for using machine learning in this application appears promising.
On the other hand, significant work remains. The current experiments are for a particular
processor; can they be generalized to other processors? After all, one of the goals is to improve
and speed processor design by enabling more rapid construction of optimizing compilers for
proposed architectures. While we obtained good performance predictions, we did not report
performance on a real processor. (More recently we obtained those results (Moss, et al., 1997);
ELF tied Orig for the best scheme.) This raises issues not only of faithfulness of the simulator
to reality, but also of global instruction scheduling, i.e., across basic blocks, and of somewhat
more general rewritings that allow more reorderings of instructions. From the perspective of
learning, the broader context may make supervised learning impossible, because the search
space will explode and preclude making judgments of optimal vs. suboptimal. Thus we will
have to find ways to make reinforcement learning work better for this problem. A related issue
is the difference between learning to make optimal decisions (on small blocks) and learning
to schedule (all) blocks well. Another relevant issue is the cost not of the schedules, but of
the schedulers: are these schedulers fast enough to use in production compilers? Again, this
demands further experimental work. We do conclude, though, that the approach is promising
enough to warrant these additional investigations.
Acknowledgments: We thank various people of Digital Equipment Corporation, for the DEC
scheduler and the ATOM program instrumentation tool (Srivastava & Eustace, 1994), essential
to this work. We also thank Sun Microsystems and Hewlett-Packard for their support.
References
Beaty, S., Colcord, S., & Sweany, P. (1996). Using genetic algorithms to fine-tune instruction-scheduling heuristics. In Proc. of the Int'l Conf. on Massively Parallel Computer Systems.
Digital Equipment Corporation (1992). DECchip 21064-AA Microprocessor Hardware Reference Manual, Maynard, MA, first edition, October 1992.
Haykin, S. (1994). Neural networks: A comprehensive foundation. New York, NY: Macmillan.
Moss, E., Cavazos, J., Stefanovic, D., Utgoff, P., Precup, D., Scheeff, D., & Brodley, C. (1997). Learning Policies for Local Instruction Scheduling. Submitted for publication.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In Rumelhart & McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press.
Scheeff, D., Brodley, C., Moss, E., Cavazos, J., & Stefanovic, D. (1997). Applying Reinforcement Learning to Instruction Scheduling within Basic Blocks. Technical report.
Sites, R. (1992). Alpha Architecture Reference Manual. Digital Equip. Corp., Maynard, MA.
Srivastava, A. & Eustace, A. (1994). ATOM: A system for building customized program analysis tools. In Proc. ACM SIGPLAN '94 Conf. on Prog. Lang. Design and Impl., 196-205.
Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3, 9-44.
Tarsy, G. & Woodard, M. (1994). Method and apparatus for optimizing cost-based heuristic instruction schedulers. US Patent #5,367,687. Filed 7/7/93, granted 11/22/94.
Utgoff, P. E., Berkman, N. C., & Clouse, J. A. (in press). Decision tree induction based on efficient tree restructuring. Machine Learning.
Utgoff, P. E., & Precup, D. (1997). Constructive function approximation (Technical Report 97-04), Amherst, MA: University of Massachusetts, Department of Computer Science.
A CONNECTIONIST EXPERT SYSTEM
THAT ACTUALLY WORKS
Gary Bradshaw
Psychology
Richard Fozzard
Computer Science
University of Colorado
Boulder, CO 80302
fozzard@boulder.colorado.edu
Louis Ceci
Computer Science
ABSTRACT
The Space Environment Laboratory in Boulder has collaborated
with the University of Colorado to construct a small expert
system for solar flare forecasting, called THEO. It performed as
well as a skilled human forecaster. We have constructed
TheoNet, a three-layer back-propagation connectionist network that learns to forecast flares as well as THEO does.
TheoNet's success suggests that a connectionist network can
perform the task of knowledge engineering automatically. A
study of the internal representations constructed by the network
may give insights to the "microstructure" of reasoning processes
in the human brain.
INTRODUCTION
Can neural network learning algorithms let us build "expert systems"
automatically, merely by presenting the network with data from the problem
domain? We tested this possibility in a domain where a traditional expert
system has been developed that is at least as good as the expert, to see if the
connectionist approach could stand up to tough competition.
Knowledge-based expert systems attempt to capture in a computer program the
knowledge of a human expert in a limited doma!n and make this knowledge
available to a user with less experience. Such systems could be valuable as an
assistant to a forecaster or as a training tool. In the past three years, the Space
Environment Laboratory (SEL) in Boulder has collaborated with the Computer
Science and Psychology Departments at the University of Colorado to construct
a small expert system emulating a methodology for solar flare forecasting
developed by Pat McIntosh, senior solar physicist at SEL. The project
convincingly demonstrated the possibilities of this type of computer assistance,
which also proved to be a useful tool for formally expressing a methodology,
verifying its performance, and instructing novice forecasters. The system,
named THEO (an OPS-83 production system with about 700 rules), performed as
well as a skilled human forecaster using the same methods, and scored well
compared with actual forecasts in the period covered by the test data [Lewis
and Dennett 1986].
In recent years connectionist (sometimes called "non-symbolic" or "neural")
network approaches have been used with varying degrees of success to simulate
human behavior in such areas as vision and speech learning and recognition
[Hinton 1987, Lehky and Sejnowski 1988, Sejnowski and Rosenberg 1986, Elman
and Zipser 1987]. Logic (or "symbolic") approaches have been used to simulate
human (especially expert) reasoning [see Newell 1980 and Davis 1982]. There
has developed in the artificial intelligence and cognitive psychology
communities quite a schism between the two areas of research and the same
problem has rarely been attacked by both approaches. It is hardly our intent to
debate the relative merits of the two paradigms. The intent of this project is to
directly apply a connectionist learning technique (multi-layer backpropagation) to the same problem, even the very same database used in an
existing successful rule-based expert system. At this time we know of no current
work attempting to do this.
Forecasting, as described by those who practice it, is a unique combination of
informal reasoning within very soft constraints supplied by often incomplete
and inaccurate data. The type of reasoning involved makes it a natural
application for traditional rule-based approaches. Solar and flare occurrence
data are often inconsistent and noisy. The nature of the data, therefore, calls
for careful handling of rule strengths and certainty factors. Yet dealing with
this sort of data is exactly one of the strengths claimed for connectionist
networks. It may also be that some of the reasoning involves pattern matching
of the different categories of data. This is what led us to hope that a
connectionist network might be able to learn the necessary internal
representations to cope with this task.
TECHNICAL APPROACH
The TheoNet network model has three layers of simple, neuron-like processing
elements called "units". The lowest layer is the input layer and is clamped to a
pattern that is a distributed representation of the solar data for a given day.
For the middle ("hidden") and upper ("output") layers, each unit's output
(called "activation") is the weighted sum of all inputs from the units in the
layer below:
y_j = \frac{1}{1 + e^{-x_j}}, \qquad \text{where } x_j = \sum_i y_i\, w_{ji} - \theta_j \qquad (1)
where y_i is the activation of the ith unit in the layer below, w_{ji} is the weight
on the connection from the ith to the jth unit, and \theta_j is the threshold of the jth
unit. The weights are initially set to random values between -1.0 and +1.0, but
are allowed to vary beyond that range. A least mean square error learning
procedure called back-propagation is used to modify the weights incrementally
for each input data pattern presented to the network. This compares the output
unit activations with the "correct" (what actually happened) solar flare
activity for that day. This gives the weight update rule:

\Delta w_{ji}(t) = -\epsilon\, \nabla E(t) + \alpha\, \Delta w_{ji}(t-1) \qquad (2)

where \nabla E(t) is the partial derivative of the least mean square error with respect to the
weight, \epsilon is a parameter called the learning rate that affects how quickly the network
attempts to converge on the appropriate weights (if possible), and \alpha is the momentum,
which affects the amount of damping in the procedure. This is
as in [Hinton 1987], except that no weight decay was used. Weights were
updated after each presentation of an input/output pattern.
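A minimal sketch of the computations just described is given below, assuming NumPy and a
hypothetical hidden-layer size; it shows the logistic activation of eq. (1) and a momentum-style
weight change in the spirit of eq. (2). This is an illustration, not the original TheoNet code.

import numpy as np

def activation(y_below, W, theta):
    # y_j = 1 / (1 + exp(-x_j)), with x_j = sum_i y_i * w_ji - theta_j  (eq. 1).
    x = W @ y_below - theta
    return 1.0 / (1.0 + np.exp(-x))

def momentum_step(grad_E, prev_delta, eps=0.2, alpha=0.9):
    # delta_w(t) = -eps * grad E(t) + alpha * delta_w(t-1), no weight decay (eq. 2).
    return -eps * grad_E + alpha * prev_delta

rng = np.random.default_rng(0)
y_in = rng.random(17)                    # 17 input units, as stated in the text
W = rng.uniform(-1.0, 1.0, (5, 17))      # weights start in [-1, 1]; 5 hidden units assumed
theta = np.zeros(5)
delta_prev = np.zeros_like(W)
print(activation(y_in, W, theta))
print(momentum_step(np.ones_like(W), delta_prev).shape)   # (5, 17), shape demo only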
The network was constructed as shown in Figure 1. The top three output units
are intended to code for each of the three classes of solar flares to be forecasted.
The individual activations are currently intended to correspond to the relative
likelihood of a flare of that class within the next 24 hours (see the analysis of
the results below). The 17 input units provide a distributed coding of the ten
categories of input data that are currently fed into the "default" mode of the
expert system THEO. That is, three binary (on/off) units code for the seven
classes of sunspots, two for spot distribution, and so on. The hidden units
mediate the transfer of activation from the input to the output units and
provide the network with the potential of forming internal representations.
Each layer is fully interconnected with the layer above and/or below it, but
there are no connections within layers.
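The distributed input coding can be sketched as follows; the particular bit assignment is a
hypothetical illustration (the text only states how many units each category uses), so the
function below should not be read as TheoNet's exact encoding.

def encode_category(value, categories, n_units):
    # Pack a categorical value into a few binary units (little-endian binary code).
    index = categories.index(value)
    return [(index >> b) & 1 for b in range(n_units)]

zurich = list("ABCDEFH")                 # 7 modified Zurich classes -> 3 units
spot_distribution = list("XOIC")         # 4 spot-distribution values -> 2 units
bits = encode_category("D", zurich, 3) + encode_category("I", spot_distribution, 2)
print(bits)   # 5 of the 17 input units for one day's data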
RESULTS
The P3 connectionist network simulator from David Zipser of University of
California at San Diego's parallel distributed processing (PDP) group was used
to implement and test TheoNet on a Symbolics 3653 workstation. This
simulator allowed the use of Lisp code to compile statistics and provided an
interactive environment for working with the network simulation.
The network was trained and tested using two sets of data of about 500
input/ output pairs (solar data/flare occurrence) each from the THEO database.
Many of these resulted in the same input pattern (there were only about 250
different input patterns total), and in many cases the same input would result in
different flare results in the following 24 hours. The data was from a low flare
frequency period (about 70-80 flares total). These sorts of inconsistencies in the
data make the job of prediction difficult to systematize.

[Figure 1. Architecture of TheoNet. Output layer (top): flare probability by class, with three
output units, one per flare class. Hidden layer (middle). Input layer (bottom): 17 units giving
a distributed coding of the input solar data. The ten categories of input solar data are:
1. Modified Zurich class (7 possible values: A/B/C/D/E/F/H)
2. Largest spot size (6 values: X/R/S/A/H/K)
3. Spot distribution (4 values: X/O/I/C)
4. Activity (reduced / unchanged)
5. Evolution (decay / no growth / growth)
6. Previous flare activity (less than M1 / M1 / more than M1)
7. Historically complex (yes/no)
8. Recently became complex on this pass (yes/no)
9. Area (small/large)
10. Area of the largest spot (up to 5 / above 5)]

The network would be
trained on one data set and then tested on the other (it did not matter which
one was used for which).
Two ways of measuring performance were used. An earlier simulation tracked a
simple measure called overall-prediction-error. This was the average
difference over one complete epoch of input patterns between the activation of
an output unit and the "correct" value it was supposed to have. This is directly
related to the sum-squared error used by the back-propagation method.
While the overall-prediction-error would drop quickly for all flare classes
after a dozen epochs or so (about 5 minutes on the Symbolics), individual
weights would take much longer to stabilize. Oscillations were seen in weight
values if a large learning rate was used. When this was reduced to 0.2 or lower
(with a momentum of 0.9), the weights would converge more smoothly to their
final values.
Overall-prediction-error however, is not a good measure of performance since
this could be reduced simply by reducing average activation (a "Just-Say-No"
network). Analyzing performance of an expert system is best done using
measures from the problem domain. Forecasting problems are essentially
probabilistic, requiring the detection of signal from noisy data. Thus
forecasting techniques and systems are often analyzed using signal detection
theory [Spoehr and Lehmkuhle 1982].
The system was modified to calculate P(H), the probability of a hit, and
P(FA), the probability of a false alarm, over each epoch. These parameters
depend on the response bias, which determines the activation level used as a
threshold for a yes/no response.* A graph of P(H) versus P(FA) gives the
receiver operating characteristic or ROC curve. The amount that this curve is
bowed away from a 1:1 slope is the degree to which a signal is being detected
against background. This was the method used for measuring the performance
of THEO [Lewis and Dennett 1986].
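The ROC computation described above can be sketched as follows, using synthetic activations
and flare labels; the threshold sweep and the trapezoidal area estimate are standard choices
assumed here for illustration, not necessarily the exact procedure used for THEO or TheoNet.

import numpy as np

def roc_points(activations, flare_occurred, thresholds):
    # For each response bias (threshold), compute P(hit) and P(false alarm).
    pts = []
    for th in thresholds:
        predicted = activations >= th
        p_hit = float(np.mean(predicted[flare_occurred]))
        p_fa = float(np.mean(predicted[~flare_occurred]))
        pts.append((p_fa, p_hit))
    return sorted(pts)

def a_prime(points):
    # Area under the ROC curve, estimated with the trapezoidal rule.
    pts = [(0.0, 0.0)] + points + [(1.0, 1.0)]
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

rng = np.random.default_rng(1)
occurred = rng.random(500) < 0.15                            # synthetic flare days
act = np.clip(0.3 * occurred + 0.2 * rng.random(500), 0.0, 1.0)
pts = roc_points(act, occurred, np.linspace(0.0, 1.0, 21))
print(round(a_prime(pts), 2))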
As in the earlier simulation, the network was exposed to the test data before
and after training. After training, the probability of hits was consistently
higher than that of false alarms in all flare classes (Figure 2). Given the
limited data and very low activations for X-class flares, it may or may not be
reasonable to draw conclusions about the network's ability to detect these - in
the test data set there were only four X-flares in the entire data set. The
degree to which the hits exceed false alarms is given by a', the area under the
curve. The performance of TheoNet was at least as good as the THEO expert
system.
* Even though both THEO and TheoNet have a continuous output (probability
of flare and activation), varying the response bias gives a continuous
evaluation of performance at any output level.
[Figure 2. ROC performance measures of TheoNet and THEO: the probability of hits P(H) is
plotted against the probability of false alarms P(FA) for the C-, M-, and X-class flares, and
the area a' under each curve (values roughly in the range .68 to .90) indicates the degree to
which signal is detected against background.]
CONCLUSIONS
Two particularly intriguing prospects are raised by these results. The first is
that if a connectionist network can perform the same task as a rule-based
system, then a study of the internal representations constructed by the network
may give insights to the "microstructure" of how reasoning processes occur in the
human brain. These are the same reasoning processes delineated at a higher
level of description by the rules in an expert system. How this sort of work
might affect the schism between the symbolic and non-symbolic camps
(mentioned in the introduction) is anyone's guess. Our hope is that the two
paradigms may eventually come to complement and support each other in
cognitive science research.
The second prospect is more of an engineering nature. Though connectionist
networks do offer some amount of biological plausibility (and hence their
trendy status right now), it is difficult to imagine a neural mechanism for the
back-propagation algorithm. However, what do engineers care? As a lot, they
are more interested in implementing a solution than explaining the nature of
human thought. Witness the current explosion of expert system technology in
the marketplace today. Yet for all its glamor, expert systems have usually
proved time consuming and expensive to implement. The "knowledge-engineering" step of interviewing experts and transferring their knowledge to
rules that work successfully together has been the most difficult and expensive
part, even with advanced knowledge representation languages and expert
system shells. TheoNet has shown that at least in this instance, a standard
back-propagation network can quickly learn those necessary representations
and interactions (rules?) needed to do the same sort of reasoning. Development
of THEO (originally presented as one of the quickest developments of a usable
expert system) required more than a man-year of work and 700 rules, while
TheoNet was developed in less than a week using a simple simulator. In
addition, THEO requires about five minutes to process a single prediction while
the network requires only a few milliseconds, thus promising better performance
under real-time conditions.
Many questions remain to be answered. TheoNet has only been tested on a small
segment of the 11-year solar cycle. It has yet to be determined how many
hidden units are needed for generalization of performance (is a simple pattern
associator sufficient?). We would like to examine the internal representations
formed and see if there is any relationship to the rules in THEO. Without
those interpretations, connectionist networks cannot easily offer the help and
explanation facilities of traditional expert systems that are a fallout of the
rule-writing process.
Since the categories of data used were what is input to THEO, and therefore
known to be significant, we need to ask if the network can eliminate redundant
or unnecessary categories. We also would like to attempt to implement other
well-known expert systems to determine the generality of this approach.
Acknowledgements
The authors would like to acknowledge the encouragement and advice of Paul
Smolensky (University of Colorado) on this project and the desktop publishing
equipment of Fischer Imaging Corporation in Denver.
REFERENCES
Randall Davis "Expert Systems: Where Are We? And Where Do We Go From
Here?", The AI Magazine, Spring 1982
J.L. Elman and David Zipser Discovering the Hidden Structure of Speech ICS
Technical Report 8701, University of California, San Diego
Geoffrey Hinton "Learning Translation Invariant Recognition in a Massively
Parallel Network", in Proc. Conf. Parallel Architectures and Languages
Europe, Eindhoven, The Netherlands, June 1987
A Connectionist Expert System that Actually Works
Sidney Lehky and Terrence Sejnowski, "Neural Network Model for the Cortical
Representation of Surface Curvature from Images of Shaded Surfaces",
in Sensory Processing, J.S. Lund, ed., Oxford 1988
Clayton Lewis and Joann Dennett "Joint CU/NOAA Study Predicts Events on
the Sun with Artificial Intelligence Technology", CU Engineering, 1986
Allen Newell ''Physical Symbol Systems", Cognitive Science 4:135-183
David Rumelhart, Jay McClelland, and the PDP research group, Parallel
Distributed Processing. Volume 1. Cambridge, MA, Bradford books,
1986
Terrence Sejnowski and C.R. Rosenberg, NETtalk: A parallel network that
learns to read aloud Technical Report 86-01, Dept. of Electrical
Engineering and Computer Science, Johns Hopkins University, Baltimore, MD
K.T. Spoehr and S.W. Lehmkuhle "Signal Detection Theory" in Visual
Information Processing, Freeman 1982
Analysis of Drifting Dynamics with
Neural Network Hidden Markov Models
J. Kohlmorgen
GMD FIRST
Rudower Chaussee 5
12489 Berlin, Germany
K.-R. Müller
GMD FIRST
Rudower Chaussee 5
12489 Berlin, Germany
K. Pawelzik
MPI f. Strömungsforschung
Bunsenstr. 10
37073 Göttingen, Germany
Abstract
We present a method for the analysis of nonstationary time series with multiple operating modes. In particular, it is possible to
detect and to model both a switching of the dynamics and a less
abrupt, time consuming drift from one mode to another. This is
achieved in two steps. First, an unsupervised training method provides prediction experts for the inherent dynamical modes. Then,
the trained experts are used in a hidden Markov model that allows
to model drifts. An application to physiological wake/sleep data
demonstrates that analysis and modeling of real-world time series
can be improved when the drift paradigm is taken into account.
1 Introduction
Modeling dynamical systems through a measured time series is commonly done by
reconstructing the state space with time-delay coordinates [10]. The prediction of
the time series can then be accomplished by training neural networks [11]. If, however, a system operates in multiple modes and the dynamics is drifting or switching,
standard approaches like multi-layer perceptrons are likely to fail to represent the
underlying input-output relations. Moreover, they do not reveal the dynamical
structure of the system. Time series from alternating dynamics of this type can
originate from many kinds of systems in physics, biology and engineering.
In [2, 6, 8], we have described a framework for time series from switching dynamics,
in which an ensemble of neural network predictors specializes on the respective
operating modes. We now extend the ability to describe a mode change not only
as a switching but - if appropriate - also as a drift from one predictor to another.
Our results indicate that physiological signals contain drifting dynamics, which
underlines the potential relevance of our method in time series analysis.
2 Detection of Drifts
The detection and analysis of drifts is performed in two steps. First, an unsupervised
(hard-)segmentation method is applied. In this approach, an ensemble of competing
prediction experts f_i, i = 1, ..., N, is trained on a given time series. The optimal
choice of function approximators f_i depends on the specific application. In general,
however, neural networks are a good choice for the prediction of time series [11]. In
this paper, we use radial basis function (RBF) networks of the Moody-Darken type
[5] as predictors, because they offer a fast and robust learning method.
Under a Gaussian assumption, the probability that a particular predictor i would
have produced the observed data y is given by

p(y \mid i) = K e^{-\beta (y - f_i)^2}, \qquad (1)

where K is the normalization term for the Gaussian distribution. If we assume that
the experts are mutually exclusive and exhaustive, we have p(y) = \sum_i p(y \mid i)\, p(i).
We further assume that the experts are, a priori, equally probable,

p(i) = 1/N. \qquad (2)
In order to train the experts, we want to maximize the likelihood that the ensemble
would have generated the time series. This can be done by a gradient method. For
the derivative of the log-likelihood \log L = \log p(y) with respect to the output of
an expert, we get

\frac{\partial \log L}{\partial f_i} = 2\beta \left[\frac{p(y \mid i)\, p(i)}{\sum_j p(y \mid j)\, p(j)}\right] (y - f_i). \qquad (3)
This learning rule can be interpreted as a weighting of the learning rate of each
expert by the expert's relative prediction performance. It is a special case of the
Mixtures of Experts [1] learning rule, with the gating network being omitted. Note
that according to Bayes' rule the term in brackets is the posterior probability that
expert i is the correct choice for the given data y, i.e. p(i I y). Therefore, we can
simply write
alogL
.
(4)
ali ex: p(z I y)(y - Ii)?
Furthermore, we imposed a low-pass filter on the prediction errors e_i = (y - f_i)^2
and used deterministic annealing of \beta in the training process (see [2, 8] for details).
We found that these modifications can be essential for a successful segmentation
and prediction of time series from switching dynamics.
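The following Python sketch condenses eqs. (1)-(4) into a posterior-weighted update for scalar
expert outputs; the learning-rate value and the reduction of each expert to a single number are
assumptions made for brevity, not the RBF-network implementation used in the paper.

import numpy as np

def expert_posteriors(y, predictions, beta):
    # p(i | y) with equal priors: softmax of -beta * squared prediction errors.
    errors = (y - predictions) ** 2
    w = np.exp(-beta * errors)
    return w / w.sum()

def update_predictions(y, predictions, beta, lr=0.1):
    # Each expert is nudged toward y in proportion to its posterior (eq. 4).
    post = expert_posteriors(y, predictions, beta)
    return predictions + lr * post * (y - predictions)

preds = np.array([0.2, 0.9, 0.5])    # current outputs of three competing experts
print(update_predictions(y=0.95, predictions=preds, beta=20.0))

In this toy example the expert whose prediction is closest to the observation receives nearly
all of the posterior mass and therefore nearly all of the learning-rate weighting.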
As a prerequisite of this method, mode changes should occur infrequently, i.e. between two mode changes the dynamics should remain stationary in one mode for a
certain number of time steps. Applying this method to a time series yields a (hard)
segmentation of the series into different operating modes together with prediction
experts for each mode. In case of a drift between two modes, the respective segment
tends to be subdivided into several parts, because a single predictor is not able to
handle the nonstationarity.
The second step takes the drift into account. A segmentation algorithm is applied
that makes it possible to model drifts between two stationary modes by combining the two
respective predictors, f_i and f_j. The drift is modeled by a weighted superposition

\hat{y}_t = a(t)\, f_i(x_t) + (1 - a(t))\, f_j(x_t), \qquad (5)

where a(t) is a mixing coefficient and x_t = (x_t, x_{t-\tau}, \ldots, x_{t-(m-1)\tau})^T is the vector
of time-delay coordinates of a (scalar) time series \{x_t\}. Furthermore, m is the
embedding dimension and \tau is the delay parameter of the embedding. Note that
the use of multivariate time series is straightforward.
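A minimal sketch of building the time-delay coordinate vectors x_t is shown below; the
sine-wave input and the ordering of the columns (newest value first) are illustrative
assumptions rather than choices taken from the paper.

import numpy as np

def delay_embed(series, m, tau):
    # One row per usable time step: (x_t, x_{t-tau}, ..., x_{t-(m-1)tau}).
    start = (m - 1) * tau
    return np.array([[series[t - k * tau] for k in range(m)]
                     for t in range(start, len(series))])

x = np.sin(0.3 * np.arange(30))
X = delay_embed(x, m=6, tau=1)       # embedding dimension 6, delay 1
print(X.shape)                       # (25, 6)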
3 A Hidden Markov Model for Drift Segmentation
In the following, we will set up a hidden Markov model (HMM) that allows us
to use the Viterbi algorithm for the analysis of drifting dynamics. For a detailed
description of HMMs, see [9] and the references therein. An HMM consists of (1)
a set S of states, (2) a matrix A = \{p_{s's}\} of state transition probabilities, (3) an
observation probability distribution p(y \mid s) for each state s, which is a continuous
density in our case, and (4) the initial state distribution \pi = \{\pi_s\}.
Let us first consider the construction of S, the set of states, which is the crucial
point of this approach. Consider a set P of 'pure' states (dynamical modes). Each
state s ∈ P represents one of the neural network predictors f_k(·) trained in the first
step. The predictor of each state performs the predictions autonomously. Next,
consider a set M of mixture states, where each state s ∈ M represents a linear
mixture of two nets f_i(·) and f_j(·). Then, given a state s ∈ S, S = P ∪ M, the
prediction of the overall system is performed by

\hat{y}_t = \begin{cases} f_{i(s)}(x_t) & \text{if } s \in P \\ a(s)\, f_{i(s)}(x_t) + b(s)\, f_{j(s)}(x_t) & \text{if } s \in M \end{cases} \qquad (6)
For each mixture state s ∈ M, the coefficients a(s) and b(s) have to be set together
with the respective network indices i(s) and j(s). For computational feasibility, the
number of mixture states has to be restricted. Our intention is to allow for drifts
between any two network outputs of the previously trained ensemble. We choose
a(s) and b(s) such that 0 < a(s) < 1 and b(s) = 1 - a(s). Moreover, a discrete set
of a( s) values has to be defined. For simplicity, we use equally distant steps,
a_r = \frac{r}{R + 1}, \qquad r = 1, \ldots, R. \qquad (7)
R is the number of intermediate mixture levels. A given resolution R between any
two out of N nets yields a total number of mixed states |M| = R \cdot N \cdot (N - 1)/2.
If, for example, the resolution R = 32 is used and we assume N = 8, then there are
|M| = 896 mixture states, plus |P| = N = 8 pure states.
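The construction of the state set S = P ∪ M can be sketched as follows; the tuple
representation (i, j, a) of a state is an assumption made for illustration only.

def build_states(n_experts, R):
    # Pure states: a single net with mixing coefficient 1.
    pure = [(i, i, 1.0) for i in range(n_experts)]
    # Mixture states: R intermediate levels a_r = r / (R + 1) for every pair i < j (eq. 7).
    mixed = [(i, j, r / (R + 1))
             for i in range(n_experts) for j in range(i + 1, n_experts)
             for r in range(1, R + 1)]
    return pure + mixed

states = build_states(n_experts=8, R=32)
print(len(states))   # 8 pure + 896 mixture states = 904, as in the text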
Next, the transition matrix A = \{p_{s's}\} has to be chosen. It determines the transition probability for each pair of states. In principle, this matrix can be found
using a training procedure, e.g. the Baum-Welch method [9]. However, this is
hardly feasible in this case, because of the immense size of the matrix. In the above
example, the matrix A has (896 + 8)^2 = 817,216 elements that would have to be
estimated. Such an excessive number of free parameters is prohibitive for any adaptive method. Therefore, we use a fixed matrix. In this way, prior knowledge about
1. Kohlmorgen, K-R. Maller and K. Pawelzik
738
the dynamical system can be incorporated. In our applications either switches or
smooth drifts between two nets are allowed, in such a way that a (monotonous) drift
from one net to another is a priori as likely as a switch. All the other transitions are
disabled by setting p_{s's} = 0. Defining p(y \mid s) and \pi is straightforward. Following
eq. (1) and eq. (2), we assume Gaussian noise

p(y \mid s) = K e^{-\beta (y - g_s)^2}, \qquad (8)

and equally probable initial states, \pi_s = |S|^{-1}.
The Viterbi algorithm [9] can then be applied to the above stated HMM, without
any further training of the HMM parameters. It yields the drift segmentation of a
given time series, i.e. the most likely state sequence (the sequence of predictors or
linear mixtures of two predictors) that could have generated the time series, in our
case with the assumption that mode changes occur either as (smooth) drifts or as
infrequent switches.
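For concreteness, a generic log-domain Viterbi pass over such a state set is sketched below;
the emission and transition scores are synthetic placeholders, and encoding the drift/switch
structure into the transition matrix is left to the caller, so this is a sketch rather than the
authors' implementation.

import numpy as np

def viterbi(log_pi, log_A, log_B):
    # log_pi: (S,) initial log probabilities; log_A: (S, S) log transitions;
    # log_B: (T, S) log emission score of each state at each time step.
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Tiny example with 3 states and 5 time steps of synthetic scores.
rng = np.random.default_rng(0)
log_B = np.log(rng.random((5, 3)))
log_A = np.log(np.full((3, 3), 1.0 / 3.0))
print(viterbi(np.log(np.ones(3) / 3), log_A, log_B))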
4 Drifting Mackey-Glass Dynamics
As an example, consider a high-dimensional chaotic system generated by the
Mackey-Glass delay differential equation
\frac{dx(t)}{dt} = -0.1\, x(t) + \frac{0.2\, x(t - t_d)}{1 + x(t - t_d)^{10}} \qquad (9)
It was originally introduced as a model of blood cell regulation [4]. Two stationary
operating modes, A and B, are established by using different delays, td = 17 and
23, respectively. After operating 100 time steps in mode A (with respect to a
subsampling step size \tau = 6), the dynamics is drifting to mode B. The drift takes
another 100 time steps. It is performed by mixing the equations for td = 17 and
23 during the integration of eq.(9). The mixture is generated according to eq.(5),
using an exponential drift
a(t) = \exp\!\left(\frac{-4t}{100}\right), \qquad t = 1, \ldots, 100. \qquad (10)
Then, the system runs stationary in mode B for the following 100 time steps, whereupon it switches back to mode A at t = 300, and the loop starts again (Fig. 1(a)).
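A minimal sketch of generating such a drifting Mackey-Glass series is given below; the Euler
step size, the constant initial history, and the particular a(t) schedule in the example call are
assumptions for illustration, not the authors' exact integration scheme.

import numpy as np

def mg_rhs(x_now, x_delayed):
    # Right-hand side of eq. (9): -0.1 x(t) + 0.2 x(t - t_d) / (1 + x(t - t_d)^10).
    return -0.1 * x_now + 0.2 * x_delayed / (1.0 + x_delayed ** 10)

def drifting_mackey_glass(n_steps, a_of_t, dt=0.1, td_a=17.0, td_b=23.0):
    hist = int(max(td_a, td_b) / dt)
    x = [1.2] * (hist + 1)                      # constant initial history (assumed)
    for t in range(n_steps):
        a = a_of_t(t)
        xa = x[-1 - int(td_a / dt)]             # delayed value for t_d = 17
        xb = x[-1 - int(td_b / dt)]             # delayed value for t_d = 23
        dx = a * mg_rhs(x[-1], xa) + (1.0 - a) * mg_rhs(x[-1], xb)
        x.append(x[-1] + dt * dx)               # explicit Euler step
    return np.array(x[hist + 1:])

# Example: drift from mode A (a = 1) toward mode B with an exponential schedule.
series = drifting_mackey_glass(3000, lambda t: float(np.exp(-4.0 * t / 3000.0)))
print(series.shape, series[:3])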
The competing experts algorithm is applied to the first 1500 data points of the generated time series, using an ensemble of 6 predictors f_i(x_t), i = 1, ..., 6. The input
to each predictor is a vector x_t of time-delay coordinates of the scalar time series
\{x_t\}. The embedding dimension is m = 6 and the delay parameter is \tau = 1 on the
subsampled data. The RBF predictors consist of 40 basis functions each.
After training, nets 2 and 3 have specialized on mode A, nets 5 and 6 on mode B.
This is depicted in the drift segmentation in Fig. 1(b). Moreover, the removal of
four nets does not increase the root mean squared error (RMSE) of the prediction
significantly (Fig. 1(c)), which correctly indicates that two predictors completely
describe the dynamical system. The sequence of nets to be removed is obtained by
repeatedly computing the RMSE of all n subsets with n - 1 nets each, and then
selecting the subset with the lowest RMSE of the respective drift segmentation.
The segmentation of the remaining nets, 2 and 5, nicely reproduces the evolution
of the dynamics, as seen in Fig. 1(d).
Figure 1: (a) One 'loop' of the drifting Mackey-Glass time series (see text). (b)
The resulting drift segmentation invokes four nets. The dotted line indicates the
evolution of the mixing coefficient a(t) of the respective nets. For example, between
t = 100 and 200 it denotes a drift from net 3 to net 5, which appears to be exponential. (c) Increase of the prediction error when predictors are successively removed.
(d) The two remaining predictors model the dynamics of the time series properly.
5 Wake/Sleep EEG
In [7] , we analyzed physiological data recorded from the wake/sleep transition of a
human. The objective was to provide an unsupervised method to detect the sleep
onset and to give a detailed approximation of the signal dynamics with a high time
resolution, ultimately to be used in diagnosis and treatment of sleep disorders. The
application of the drift segmentation algorithm now yields a more detailed modeling
of the dynamical system.
As an example, Fig. 2 shows a comparison of the drift segmentation (R = 32)
with a manual segmentation by a medical expert. The experimental data was measured during an afternoon nap of a healthy human. The computer-based analysis
is performed on a single-channel EEG recording (occipital-1), whereas the manual
segmentation was worked out using several physiological recordings (EEG, EOG,
ECG, heart rate, blood pressure, respiration) .
The two-step drift segmentation method was applied using 8 RBF networks. However, as shown in Fig. 2, three nets (4, 6, and 8) are finally found by the Viterbi
algorithm to be sufficient to represent the most likely state sequence. Before the
sleep onset, at t ~ 3500 (350s) in the manual analysis, a mixture of two wake-state
Figure 2: Comparison of the drift segmentation obtained by the algorithm (upper
plot) and a manual segmentation by a medical expert (middle). Only a single-channel EEG recording (occipital-1, time resolution 0.1 s) of an afternoon nap is
given for the algorithmic approach, while the manual segmentation is based on all
available measurements. In the manual analysis, W1 and W2 indicate two wake states (eyes open/closed), and S1 and S2 indicate sleep stages I and II, respectively.
(n.a.: no assessment, art.: artifacts)
nets, 6 and 8, performs the best reconstruction of the EEG dynamics. Then, at
t = 3000 (300 s), a drift to net 4 starts, which apparently represents the dynamics of sleep stage II (S2). Interestingly, sleep stage I (S1) is not represented by
a separate net but by a linear mixture of net 4 and net 6, with much more weight
on net 4. Thus, the process of falling asleep is represented as a drift from the state
of being awake directly to sleep stage II.
During sleep there are several wake-up spikes indicated in the manual segmentation.
At least the last four are also clearly indicated in the drift segmentation, as drifts
back to net 6. Furthermore, the detection of the final arousal after t = 12000 (1200 s)
is in good accordance with the manual segmentation: there is a fast drift back to
net 6 at that point.
Considering the fact that our method is based only on the recording of a single
EEG channel and does not use any medical expert knowledge, the drift algorithm
is in remarkable accordance with the assessment of the medical expert. Moreover,
it resolves the dynamical structure of the signal to more detail. For a more comprehensive analysis of wake/sleep data, we refer to our forthcoming publication
[3] .
6 Summary and Discussion
We presented a method for the unsupervised segmentation and identification of
nonstationary drifting dynamics. It applies to time series where the dynamics is
drifting or switching between different operating modes. An application to physiological wake/sleep data (EEG) demonstrates that drift can be found in natural
systems. It is therefore important to consider this aspect of data description.
In the case of wake/sleep data, where the physiological state transitions are far from
being understood, we can extract the shape of the dynamical drift from wake to
sleep in an unsupervised manner. By applying this new data analysis method, we
hope to gain more insights into the underlying physiological processes. Our future
work is therefore dedicated to a comprehensive analysis of large sets of physiological
wake/sleep recordings. We expect, however, that our method will be also applicable
in many other fields.
Acknowledgements: We acknowledge support of the DFG (grant Ja379/51) and
we would like to thank J. Rittweger for the EEG data and for fruitful discussions.
References
[1] Jacobs, R.A., Jordan, M.A. , Nowlan, S.J., Hinton, G.E. (1991). Adaptive Mixtures of Local Experts, Neural Computation 3, 79-87.
[2] Kohlmorgen, J., Müller, K.-R., Pawelzik, K. (1995). Improving short-term prediction with competing experts. ICANN'95, EC2 & Cie, Paris, 2:215-220.
[3] Kohlmorgen, J., Müller, K.-R., Rittweger, J., Pawelzik, K., in preparation.
[4] Mackey, M., Glass, L. (1977). Oscillation and Chaos in a Physiological Control
System, Science 197,287.
[5] Moody, J., Darken, C. (1989). Fast Learning in Networks of Locally-Tuned
Processing Units. Neural Computation 1, 281-294.
[6] Müller, K.-R., Kohlmorgen, J., Pawelzik, K. (1995). Analysis of Switching
Dynamics with Competing Neural Networks, IEICE Trans. on Fundamentals
of Electronics, Communications and Computer Sc., E78-A, No. 10, 1306-1315.
[7] Müller, K.-R., Kohlmorgen, J., Rittweger, J., Pawelzik, K. (1995). Analysing
Physiological Data from the Wake-Sleep State Transition with Competing Predictors, NOLTA'95: Symposium on Nonlinear Theory and its Appl., 223-226.
[8] Pawelzik, K., Kohlmorgen, J., Müller, K.-R. (1996). Annealed Competition of
Experts for a Segmentation and Classification of Switching Dynamics, Neural
Computation, 8:2, 342-358.
[9] Rabiner, L.R. (1988). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Readings in Speech Recognition, ed. A.
Waibel, K. Lee, 267-296. San Mateo: Morgan Kaufmann, 1990.
[10] Takens, F. (1981). Detecting Strange Attractors in Turbulence. In: Rand, D.,
Young, L.-S., (Eds.), Dynamical Systems and Turbulence, Springer Lecture
Notes in Mathematics, 898, 366.
[11] Weigend, A.S., Gershenfeld, N.A. (Eds.) (1994). Time Series Prediction: Forecasting the Future and Understanding the Past, Addison-Wesley.
385 | 1,351 | 2D Observers for Human 3D Object Recognition?
Zili Liu
NEC Research Institute
Daniel Kersten
University of Minnesota
Abstract
Converging evidence has shown that human object recognition
depends on familiarity with the images of an object. Further,
the greater the similarity between objects, the stronger is the
dependence on object appearance, and the more important two-dimensional (2D) image information becomes. These findings, however, do not rule out the use of 3D structural information in recognition, and the degree to which 3D information is used in visual
memory is an important issue. Liu, Knill, & Kersten (1995) showed
that any model that is restricted to rotations in the image plane
of independent 2D templates could not account for human performance in discriminating novel object views. We now present results
from models of generalized radial basis functions (GRBF), 2D nearest neighbor matching that allows 2D affine transformations, and
a Bayesian statistical estimator that integrates over all possible 2D
affine transformations. The performance of the human observers
relative to each of the models is better for the novel views than
for the familiar template views, suggesting that humans generalize
better to novel views from template views. The Bayesian estimator yields the optimal performance with 2D affine transformations
and independent 2D templates. Therefore, models of 2D affine
matching operations with independent 2D templates are unlikely
to account for human recognition performance.
1 Introduction
Object recognition is one of the most important functions in human vision. To
understand human object recognition, it is essential to understand how objects are
represented in human visual memory. A central component in object recognition
is the matching of the stored object representation with that derived from the image input. But the nature of the object representation has to be inferred from
recognition performance, by taking into account the contribution from the image
information. When evaluating human performance, how can one separate the contributions to performance of the image information from the representation? Ideal
observer analysis provides a precise computational tool to answer this question. An
ideal observer's recognition performance is restricted only by the available image
information and is otherwise optimal, in the sense of statistical decision theory,
irrespective of how the model is implemented. A comparison of human to ideal
performance (often in terms of efficiency) serves to normalize performance with respect to the image information for the task. We consider the problem of viewpoint
dependence in human recognition.
A recent debate in human object recognition has focused on the dependence of recognition performance on viewpoint [1 , 6]. Depending on the experimental conditions,
an observer's ability to recognize a familiar object from novel viewpoints is impaired
to varying degrees. A central assumption in the debate is the equivalence in viewpoint dependence and recognition performance. In other words, the assumption is
that viewpoint dependent performance implies a viewpoint dependent representation, and that viewpoint independent performance implies a viewpoint independent
representation. However, given that any recognition performance depends on the
input image information, which is necessarily viewpoint dependent, the viewpoint
dependence of the performance is neither necessary nor sufficient for the viewpoint
dependence of the representation. Image information has to be factored out first,
and the ideal observer provides the means to do this.
The second aspect of an ideal observer is that it is implementation free. Consider the GRBF model [5], as compared with human object recognition (see below). The model stores a number of 2D templates $\{T_i\}$ of a 3D object O, and recognizes or rejects a stimulus image S by the following similarity measure: $\sum_i c_i \exp\left(-\|T_i - S\|^2/2\sigma^2\right)$, where $c_i$ and $\sigma$ are constants. The model's performance
as a function of viewpoint parallels that of human observers. This observation has
led to the conclusion that the human visual system may indeed, as does the model,
use 2D stored views with GRBF interpolation to recognize 3D objects [2]. Such a
conclusion, however, overlooks implementational constraints in the model, because
the model's performance also depends on its implementations. Conceivably, a model
with some 3D information of the objects can also mimic human performance, so
long as it is appropriately implemented. There are typically too many possible
models that can produce the same pattern of results.
In contrast, an ideal observer computes the optimal performance that is only limited
by the stimulus information and the task. We can define constrained ideals that are
also limited by explicitly specified assumptions (e.g., a class of matching operations).
Such a model observer therefore yields the best possible performance among the
class of models with the same stimulus input and assumptions. In this paper,
we are particularly interested in constrained ideal observers that are restricted in
functionally significant aspects (e.g., a 2D ideal observer that stores independent 2D templates and has access only to 2D affine transformations). The key idea is
that a constrained ideal observer is the best in its class. So if humans outperform
this ideal observer, they must have used more than what is available to the ideal.
The conclusion that follows is strong: not only does the constrained ideal fail to
account for human performance, but the whole class of its implementations are also
falsified.
A crucial question in object recognition is the extent to which human observers
model the geometric variation in images due to the projection of a 3D object onto a
2D image. At one extreme, we have shown that any model that compares the image
to independent views (even if we allow for 2D rigid transformations of the input
image) is insufficient to account for human performance. At the other extreme, it
is unlikely that variation is modeled in terms of rigid transformation of a 3D object
template in memory. A possible intermediate solution is to match the input image
to stored views, subject to 2D affine deformations. This is reasonable because 2D
affine transformations approximate 3D variation over a limited range of viewpoint
change.
In this study, we test whether any model limited to the independent comparison
of 2D views, but with 2D affine flexibility, is sufficient to account for viewpoint
dependence in human recognition. In the following section, we first define our experimental task, in which the computational models yield the provably best possible
performance under their specified conditions. We then review the 2D ideal observer
and GRBF model derived in [4], and the 2D affine nearest neighbor model in [8].
Our principal theoretical result is a closed-form solution of a Bayesian 2D affine ideal
observer. We then compare human performance with the 2D affine ideal model, as
well as the other three models. In particular, if humans can classify novel views of
an object better than the 2D affine ideal, then our human observers must have used
more information than that embodied by that ideal.
2 The observers
Let us first define the task. An observer looks at the 2D images of a 3D wire
frame object from a number of viewpoints. These images will be called templates
{Td. Then two distorted copies of the original 3D object are displayed. They
are obtained by adding 3D Gaussian positional noise (i.i.d.) to the vertices of the
original object. One distorted object is called the target, whose Gaussian noise has
a constant variance. The other is the distract or , whose noise has a larger variance
that can be adjusted to achieve a criterion level of performance. The two objects
are displayed from the same viewpoint in parallel projection, which is either from
one of the template views, or a novel view due to 3D rotation. The task is to choose
the one that is more similar to the original object. The observer's performance is
measured by the variance (threshold) that gives rise to 75% correct performance.
The optimal strategy is to choose the stimulus S with a larger probability p(O|S). From Bayes' rule, this is to choose the larger of p(S|O).
Assume that the models are restricted to 2D transformations of the image, and
cannot reconstruct the 3D structure of the object from its independent templates
$\{T_i\}$. Assume also that the prior probability $p(T_i)$ is constant. Let us represent S and $T_i$ by their (x, y) vertex coordinates: $(X\ Y)^T$, where $X = (x^1, x^2, \ldots, x^n)$, $Y = (y^1, y^2, \ldots, y^n)$. We assume that the correspondence between S and $T_i$ is solved up to a reflection ambiguity, which is equivalent to an additional template: $T_i^r = (X^r\ Y^r)^T$, where $X^r = (x^n, \ldots, x^2, x^1)$, $Y^r = (y^n, \ldots, y^2, y^1)$. We still denote the template set as $\{T_i\}$. Therefore,
$$p(S|O) \propto \sum_i p(S|T_i)\, p(T_i). \qquad (1)$$
In what follows, we will compute $p(S|T_i)\,p(T_i)$, with the assumption that $S = \mathcal{F}(T_i) + N(0, \sigma^2 I_{2n})$, where N is the Gaussian distribution, $I_{2n}$ the $2n \times 2n$ identity matrix, and $\mathcal{F}$ a 2D transformation. For the 2D ideal observer, $\mathcal{F}$ is a rigid 2D rotation. For the GRBF model, $\mathcal{F}$ assigns a linear coefficient to each template $T_i$, in addition to a 2D rotation. For the 2D affine nearest neighbor model, $\mathcal{F}$ represents the 2D affine transformation that minimizes $\|S - T_i\|^2$, after S and $T_i$ are normalized in size. For the 2D affine ideal observer, $\mathcal{F}$ represents all possible 2D affine transformations applicable to $T_i$.
2.1 The 2D ideal observer
The templates are the original 2D images, their mirror reflections, and 2D rotations
(in angle $\phi$) in the image plane. Assume that the stimulus S is generated by adding Gaussian noise to a template; the probability p(S|O) is then an integration over all
templates and their reflections and rotations. The detailed derivation for the 2D
ideal and the GRBF model can be found in [4].
$$\sum_i p(S|T_i)\, p(T_i) \propto \sum_i \int d\phi\, \exp\!\left(-\|S - T_i(\phi)\|^2 / 2\sigma^2\right). \qquad (2)$$
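For illustration only (this numerical sketch is ours, not part of the original model specification), the rotation integral in equation (2) can be approximated by a sum over a discrete grid of in-plane rotations. The 2 x n coordinate arrays, the 360-angle grid, and all variable names below are assumptions made for the example.

```python
import numpy as np

def rotate(template, phi):
    """Rotate a 2 x n array of (x, y) vertex coordinates by angle phi (radians)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]]) @ template

def ideal_2d_likelihood(stimulus, templates, sigma, n_angles=360):
    """Approximate sum_i integral dphi exp(-||S - T_i(phi)||^2 / (2 sigma^2))
    of equation (2) by a discrete sum over rotation angles."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    total = 0.0
    for T in templates:                      # templates: list of 2 x n arrays
        for phi in phis:
            diff = stimulus - rotate(T, phi)
            total += np.exp(-np.sum(diff ** 2) / (2.0 * sigma ** 2))
    return total
```

The decision rule is then to choose whichever of the two stimuli (target or distractor) yields the larger value of this quantity.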
2.2 The GRBF model
The model has the same template set as the 2D ideal observer does. Its training requires that $\sum_i \int_0^{2\pi} d\phi\, c_i(\phi)\, N\!\left(\|T_j - T_i(\phi)\|, \sigma\right) = 1$, $j = 1, 2, \ldots$, with which $\{c_i\}$ can be obtained optimally using singular value decomposition. When a pair of new stimuli $\{S\}$ is presented, the optimal decision is to choose the one that is closer to the learned prototype, in other words, the one with a smaller value of
$$\left\| 1 - \sum_i \int_0^{2\pi} d\phi\, c_i(\phi)\, \exp\!\left(-\frac{\|S - T_i(\phi)\|^2}{2\sigma^2}\right) \right\|. \qquad (3)$$
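A minimal sketch of this training step, under the assumption that the rotation integral is discretized into a precomputed set of rotated template copies (rotated_basis) and that the Gaussian kernel is used unnormalized; the function and parameter names are ours.

```python
import numpy as np

def grbf_train(templates, rotated_basis, sigma):
    """Solve A c = 1 for the GRBF coefficients c (one per rotated template copy),
    where A[j, k] = exp(-||T_j - B_k||^2 / (2 sigma^2)).  The pseudo-inverse
    corresponds to the SVD-based solution mentioned in the text."""
    A = np.array([[np.exp(-np.sum((Tj - Bk) ** 2) / (2 * sigma ** 2))
                   for Bk in rotated_basis] for Tj in templates])
    return np.linalg.pinv(A) @ np.ones(len(templates))

def grbf_decision_value(stimulus, coeffs, rotated_basis, sigma):
    """Equation (3): the stimulus with the smaller |1 - sum_k c_k exp(...)| is chosen."""
    act = sum(c * np.exp(-np.sum((stimulus - Bk) ** 2) / (2 * sigma ** 2))
              for c, Bk in zip(coeffs, rotated_basis))
    return abs(1.0 - act)
```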
2.3 The 2D affine nearest neighbor model
It has been proved in [8] that the smallest Euclidean distance D(S, T) between S and T, when T is allowed a 2D affine transformation and both are size-normalized as $S \leftarrow S/\|S\|$, $T \leftarrow T/\|T\|$, is
$$D^2(S, T) = 1 - \operatorname{tr}\!\left(S^{+}S \cdot T^{T}T\right)/\|T\|^2, \qquad (4)$$
where tr stands for trace and $S^{+} = S^{T}(SS^{T})^{-1}$. The optimal strategy, therefore, is to choose the S that gives rise to the larger of $\sum_i \exp\!\left(-D^2(S, T_i)/2\sigma^2\right)$, or the smaller of $\sum_i D^2(S, T_i)$. (Since no probability is defined in this model, both measures will be used and the results from the better one will be reported.)
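Equation (4) has a direct numerical translation; the sketch below is our own illustration (not the authors' code) and assumes S and T are 2 x n arrays with n at least 2 and S of full rank.

```python
import numpy as np

def affine_nn_distance_sq(S, T):
    """Equation (4): smallest squared distance between S and an affinely
    transformed T, with both matrices size-normalized (Frobenius norm)."""
    S = S / np.linalg.norm(S)
    T = T / np.linalg.norm(T)
    S_pinv = S.T @ np.linalg.inv(S @ S.T)          # S+ = S^T (S S^T)^-1
    return 1.0 - np.trace(S_pinv @ S @ T.T @ T) / np.linalg.norm(T) ** 2
```

The stimulus chosen is the one with the smaller summed distance to the template set (or the larger summed exponential, as stated above).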
2.4 The 2D affine ideal observer
We now calculate the Bayesian probability by assuming that the prior probability distribution of the 2D affine transformation, which is applied to the template $T_i$,
$$A T_i + T_r = \begin{pmatrix} a & b \\ c & d \end{pmatrix} T_i + \begin{pmatrix} t_x & \cdots & t_x \\ t_y & \cdots & t_y \end{pmatrix},$$
with $X = (a, b, c, d, t_x, t_y)$, obeys a Gaussian distribution $N(X_0, \gamma I_6)$, where $X_0$ is the identity transformation $(1, 0, 0, 1, 0, 0)$. We have
$$\sum_i p(S|T_i) = \sum_i \int_{-\infty}^{\infty} dX\, p(X)\, \exp\!\left(-\|A T_i + T_r - S\|^2 / 2\sigma^2\right) \qquad (5)$$
$$= \sum_i C(n, \sigma, \gamma)\, \det{}^{-1}(Q_i')\, \exp\!\left(\operatorname{tr}\!\left(K_i^T (Q_i')^{-1} K_i\right) / 2\sigma^2\right), \qquad (6)$$
where $C(n, \sigma, \gamma)$ is a function of $n$, $\sigma$, $\gamma$; $Q' = Q + \gamma^{-2} I_2$, and
$$Q = \begin{pmatrix} X_T \cdot X_T & X_T \cdot Y_T \\ Y_T \cdot X_T & Y_T \cdot Y_T \end{pmatrix}, \qquad K = \begin{pmatrix} X_T \cdot X_S & X_T \cdot Y_S \\ Y_T \cdot X_S & Y_T \cdot Y_S \end{pmatrix}. \qquad (7)$$
The free parameters are $\gamma$ and the number of 2D rotated copies for each $T_i$ (since a 2D affine transformation implicitly includes 2D rotations, and since a specific prior probability distribution $N(X_0, \gamma I)$ is assumed, both free parameters should be explored together to search for the optimal results).
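The following sketch (ours, for illustration only) evaluates the per-template quantities of equations (6)-(7) numerically; the dropped constant $C(n, \sigma, \gamma)$ is an assumption justified by it being identical for both stimuli in a trial, and in practice one would sum the resulting likelihoods over all templates and their rotated copies.

```python
import numpy as np

def affine_ideal_log_likelihood(S, T, sigma, gamma):
    """Un-normalized per-template log-likelihood implied by equation (6):
    -log det(Q') + tr(K^T Q'^-1 K) / (2 sigma^2), with Q, K as in equation (7).
    S and T are 2 x n arrays of vertex coordinates."""
    X_T, Y_T = T[0], T[1]
    X_S, Y_S = S[0], S[1]
    Q = np.array([[X_T @ X_T, X_T @ Y_T],
                  [Y_T @ X_T, Y_T @ Y_T]])
    K = np.array([[X_T @ X_S, X_T @ Y_S],
                  [Y_T @ X_S, Y_T @ Y_S]])
    Q_prime = Q + gamma ** -2 * np.eye(2)
    return (-np.log(np.linalg.det(Q_prime))
            + np.trace(K.T @ np.linalg.inv(Q_prime) @ K) / (2.0 * sigma ** 2))
```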
Figure 1: Stimulus classes with increasing structural regularity: Balls, Irregular,
Symmetric, and V-Shaped. There were three objects in each class in the experiment.
2.5 The human observers
Three naive subjects were tested with four classes of objects: Balls, Irregular, Symmetric, and V-Shaped (Fig. 1). There were three objects in each class. For each
object, 11 template views were learned by rotating the object 60°/step, around
the X- and Y-axis, respectively. The 2D images were generated by orthographic
projection, and viewed monocularly. The viewing distance was 1.5 m. During the
test, the standard deviation of the Gaussian noise added to the target object was
(J"t = 0.254 cm. No feedback was provided.
Because the image information available to the humans was more than what was
available to the models (shading and occlusion in addition to the (x, y) positions of
the vertices), both learned and novel views were tested in a randomly interleaved
fashion. Therefore, the strategy that humans used in the task for the learned and
novel views should be the same. The number of self-occlusions, which in principle provided relative depth information, was counted and was about equal in both
learned and novel view conditions. The shading information was also likely to be
equal for the learned and novel views. Therefore, this additional information was
about equal for the learned and novel views, and should not affect the comparison
of the performance (humans relative to a model) between learned and novel views.
We predict that if the humans used a 2D affine strategy, then their performance
relative to the 2D affine ideal observer should not be higher for the novel views than
for the learned views. One reason to use the four classes of objects with increasing
structural regularity is that structural regularity is a 3D property (e.g., 3D Symmetric vs. Irregular), which the 2D models cannot capture. The exception is the
planar V-Shaped objects, for which the 2D affine models completely capture 3D rotations, and are therefore the "correct" models. The V-Shaped objects were used in
the 2D affine case as a benchmark. If human performance increases with increasing
structural regularity of the objects, this would lend support to the hypothesis that
humans have used 3D information in the task.
2.6 Measuring performance
A stair-case procedure [7] was used to track the observers' performance at 75%
correct level for the learned and novel views, respectively. There were 120 trials
for the humans, and 2000 trials for each of the models. For the GRBF model,
the standard deviation of the Gaussian function was also sampled to search for
the best result for the novel views for each of the 12 objects, and the result for
the learned views was obtained accordingly. This resulted in a conservative test
of the hypothesis of a GRBF model for human vision for the following reasons:
(1) Since no feedback was provided in the human experiment and the learned and
novel views were randomly intermixed, it is not straightforward for the model to
find the best standard deviation for the novel views, particularly because the best
standard deviation for the novel views was not the same as that for the learned
ones. The performance for the novel views is therefore the upper limit of the
model's performance. (2) The subjects' performance relative to the model will be
defined as statistical efficiency (see below). The above method will yield the lowest
possible efficiency for the novel views, and a higher efficiency for the learned views,
since the best standard deviation for the novel views is different from that for the
learned views. Because our hypothesis depends on a higher statistical efficiency for
the novel views than for the learned views, this method will make such a putative
difference even smaller. Likewise, for the 2D affine ideal, the number of 2D rotated
copies of each template $T_i$ and the value $\gamma$ were both extensively sampled, and the
best performance for the novel views was selected accordingly. The result for the
learned views corresponding to the same parameters was selected. This choice also
makes it a conservative hypothesis test.
3 Results
[Figure 2: two bar charts (Learned Views and Novel Views) plotting the threshold noise standard deviation against object type for the Human, 2D Ideal, GRBF, 2D Affine Nearest Neighbor, and 2D Affine Ideal observers.]
Figure 2: The threshold standard deviation of the Gaussian noise, added to the
distractor in the test pair, that keeps an observer's performance at the 75% correct
level, for the learned and novel views, respectively. The dotted line is the standard
deviation of the Gaussian noise added to the target in the test pair.
Fig. 2 shows the threshold performance. We use statistical efficiency E to compare human to model performance. E is defined as the information used by humans relative to the ideal observer [3]: $E = (d'_{\mathrm{human}}/d'_{\mathrm{ideal}})^2$, where $d'$ is the discrimination index. We have shown in [4] that, in our task, $E = \left((\sigma^{\mathrm{ideal}}_{\mathrm{distractor}})^2 - (\sigma_{\mathrm{target}})^2\right) / \left((\sigma^{\mathrm{human}}_{\mathrm{distractor}})^2 - (\sigma_{\mathrm{target}})^2\right)$, where $\sigma$ is the threshold. Fig. 3 shows the statistical efficiency of the human observers relative to each of the four models.
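As a worked illustration (the threshold values used here are invented for the example; only the target noise of 0.254 cm is taken from the text), the efficiency formula above can be computed directly from the two distractor thresholds:

```python
def statistical_efficiency(sigma_distractor_ideal, sigma_distractor_human, sigma_target):
    """E = (sigma_ideal^2 - sigma_target^2) / (sigma_human^2 - sigma_target^2),
    with the ideal-observer threshold in the numerator as in the formula above."""
    return ((sigma_distractor_ideal ** 2 - sigma_target ** 2)
            / (sigma_distractor_human ** 2 - sigma_target ** 2))

# Hypothetical example: an ideal threshold of 0.60 cm and a human threshold of
# 0.80 cm, with sigma_target = 0.254 cm, give E of roughly 0.51, i.e. about 51%.
print(statistical_efficiency(0.60, 0.80, 0.254))
```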
We note in Fig. 3 that the efficiency for the novel views is higher than those for the
learned views (several of them even exceeded 100%), except for the planar V-Shaped
objects. We are particularly interested in the Irregular and Symmetric objects in
the 2D affine ideal case, in which the pairwise comparison between the learned
and novel views across the six objects and three observers yielded a significant
difference (binomial, p < 0.05). This suggests that the 2D affine ideal observer
cannot account for the human performance, because if the humans used a 2D affine
template matching strategy, their relative performance for the novel views cannot
be better than for the learned views. We suggest therefore that 3D information was
used by the human observers (e.g., 3D symmetry). This is supported in addition
by the increasing efficiencies as the structural regularity increased from the Balls,
Irregular, to Symmetric objects (except for the V-Shaped objects with 2D affine
models).
[Figure 3: four bar charts (2D Ideal, GRBF Model, 2D Affine Nearest Neighbor, 2D Affine Ideal) plotting statistical efficiency (%) against object type for learned and novel views.]
Figure 3: Statistical efficiencies of human observers relative to the 2D ideal observer,
the GRBF model, the 2D affine nearest neighbor model, and the 2D affine ideal
observer.
4 Conclusions
Computational models of visual cognition are subject to information theoretic as
well as implementational constraints. When a model's performance mimics that of
human observers, it is difficult to interpret which aspects of the model characterize
the human visual system. For example, human object recognition could be simulated by both a GRBF model and a model with partial 3D information of the object.
The approach we advocate here is that, instead of trying to mimic human performance by a computational model, one designs an implementation-free model for a
specific recognition task that yields the best possible performance under explicitly
specified computational constraints. This model provides a well-defined benchmark
for performance, and if human observers outperform it, we can conclude firmly that
the humans must have used better computational strategies than the model. We
showed that models of independent 2D templates with 2D linear operations cannot
account for human performance. This suggests that our human observers may have
used the templates to reconstruct a representation of the object with some (possibly
crude) 3D structural information.
References
[1] Biederman I and Gerhardstein P C. Viewpoint dependent mechanisms in visual
object recognition: a critical analysis. J. Exp. Psych.: HPP, 21: 1506-1514, 1995.
[2] Bülthoff H H and Edelman S. Psychophysical support for a 2D view interpolation
theory of object recognition. Proc. Natl. Acad. Sci., 89:60-64, 1992.
[3] Fisher R A. Statistical Methods for Research Workers. Oliver and Boyd, Edinburgh, 1925.
[4] Liu Z, Knill D C, and Kersten D. Object classification for human and ideal
observers. Vision Research, 35:549-568, 1995.
[5] Poggio T and Edelman S. A network that learns to recognize three-dimensional
objects. Nature, 343:263-266, 1990.
[6] Tarr M J and Bülthoff H H. Is human object recognition better described
by geon-structural-descriptions or by multiple-views? J. Exp. Psych.: HPP,
21:1494-1505,1995.
[7] Watson A B and Pelli D G. QUEST: A Bayesian adaptive psychometric method.
Perception and Psychophysics, 33:113-120, 1983.
[8] Werman M and Weinshall D. Similarity and affine invariant distances between
2D point sets. IEEE PAMI, 17:810-814,1995.
386 | 1,352 | A Simple and Fast Neural Network
Approach to Stereovision
Rolf D. Henkel
Institute of Theoretical Physics
University of Bremen
P.O. Box 330 440, D-28334 Bremen
http://axon.physik.uni-bremen.de/~rdh
Abstract
A neural network approach to stereovision is presented based on
aliasing effects of simple disparity estimators and a fast coherence-detection scheme. Within a single network structure, a dense disparity map with an associated validation map and, additionally,
the fused cyclopean view of the scene are available. The network
operations are based on simple, biologically plausible circuitry; the
algorithm is fully parallel and non-iterative.
1 Introduction
Humans experience the three-dimensional world not as it is seen by either their left
or right eye, but from a position of a virtual cyclopean eye, located in the middle
between the two real eye positions. The different perspectives between the left and
right eyes cause slight relative displacements of objects in the two retinal images
(disparities), which make a simple superposition of both images without diplopia
impossible. Proper fusion of the retinal images into the cyclopean view requires the
registration of both images to a common coordinate system, which in turn requires
calculation of disparities for all image areas which are to be fused.
1.1 The Problems with Classical Approaches
The estimation of disparities turns out to be a difficult task, since various random
and systematic image variations complicate this task. Several different techniques
have been proposed over time, which can be loosely grouped into feature-, area-
and phase-based approaches. All these algorithms have a number of computational
problems directly linked to the very assumptions inherent in these approaches.
In feature-based stereo, intensity data is first converted to a set of features assumed
to be a more stable image property than the raw image intensities. Matching
primitives used include zerocrossings, edges and corner points (Frisby, 1991), or
higher order primitives like topological fingerprints (see for example: Fleck, 1991) .
Generally, the set of feature-classes is discrete, causing the two primary problems
of feature-based stereo algorithms: the famous "false-matches"-problem and the
problem of missing disparity estimates.
False matches are caused by the fact that a single feature in the left image can
potentially be matched with every feature of the same class in the right image.
This problem is basic to all feature-based stereo algorithms and can only be solved
by the introduction of additional constraints to the solution. In conjunction with
the extracted features these constraints define a complicated error measure which
can be minimized by cooperative processes (Marr, 1979) or by direct (Ohta, 1985)
or stochastic search techniques (Yuille, 1991). While cooperative processes and
stochastic search techniques can be realized easily on a neural basis, it is not immediately clear how to implement the more complicated algorithmic structures of
direct search techniques neuronally. Cooperative processes and stochastic search
techniques turn out to be slow, needing many iterations to converge to a local
minimum of the error measure.
The requirement of features to be a stable image property causes the second problem
of feature-based stereo: stable features can only be detected in a fraction of the
whole image area, leading to missing disparity estimates for most of the image area.
For those image parts, disparity estimates can only be guessed.
Dense disparity maps can be obtained with area-based approaches, where a suitably chosen correlation measure is maximized between small image patches of the left and right view. However, a neuronally plausible implementation of this seems not to be readily available. Furthermore, the maximization turns out to be a computationally
expensive process, since extensive search is required in configuration space.
Hierarchical processing schemes can be utilized for speed-up, by using information
obtained at coarse spatial scales to restrict searching at finer scales. But, for general
image data, it is not guaranteed that the disparity information obtained at some
coarse scale is valid. The disparity data might be wrong, might have a different value
than at finer scales , or might not be present at all. Furthermore, by processing data
from coarse to fine spatial scales, hierarchical processing schemes are intrinsically
sequential. This creates additional algorithmic overhead which is again difficult to
realize with neuronal structures.
The same comments apply to phase-based approaches, where a locally extracted
Fourier-phase value is used for matching. Phase values are only defined modulo
211", and this wrap-around makes the use of hierarchical processing essential for
these types of algorithms. Moreover, since data is analyzed in different spatial
frequency channels, it is nearly certain that some phase values will be undefined
at intermediate scales, due to missing signal energy in this frequency band (Fleet,
1993) . Thus, in addition to hierarchical processing, some kind of exception handling
is needed with these approaches.
2 Stereovision by Coherence Detection
In summary, classical approaches to stereovision seem to have difficulties with the
fast calculation of dense disparity-maps, at least with plausible neural circuitry.
In the following, a neural network implementation will be described which solves
this task by using simple disparity estimators based on motion-energy mechanisms
(Adelson, 1985; Qian, 1997), closely resembling responses of complex cells in visual
cortex (DeAngelis, 1991). Disparity units of this type belong to a class of disparity
estimators which can be derived from optical flow methods (Barron, 1994). Clearly,
disparity calculations and optical flow estimation share many similarities. The two
stereo views of a (static) scene can be considered as two time-slices cut out of
the space-time intensity pattern which would be recorded by an imaginary camera
moving from the position of the left to the position of the right eye. However,
compared to optical flow, disparity estimation is complicated by the fact that only
two discrete "time"-samples are available, namely the images of the left and right
view positions.
[Figure 1: three space-time flow diagrams (Left, Right 1, Right 2) illustrating disparity calculations between two time-slices, with the outcomes marked correct, wrong, and correct.]
Figure 1: The velocity of an image patch manifests itself as principal texture direction in the space-time flow field traced out by the intensity pattern in time (left).
Sampling such flow patterns at discrete times can create aliasing-effects which lead
to wrong estimates. If one is using optical flow estimation techniques for disparity
calculations, this problem is always present.
For an explanation consider Fig. 1. A surface patch shifting over time traces out
a certain flow pattern. The principal texture direction of this flow indicates the
relative velocity of the image patch (Fig. 1, left). Sampling the flow pattern only
at discrete time points, the shift between two "time-samples" can be estimated
without ambiguity provided the shift is not too large (Fig. 1, middle). However, if a
certain limit is exceeded, it becomes impossible to estimate the shift correctly, given
the data (Fig. 1, right). This is a simple aliasing-effect in the "time"-direction; an
everyday example can be seen as motion reversal in movies.
In the case of stereovision, aliasing-effects of this type are always present, and they
limit the range of disparities a simple disparity unit can estimate. Sampling theory
gives a relation between the maximal spatial wavevector $k_{\max}$ (or, equivalently, the minimum spatial wavelength $\lambda_{\min}$) present in the data and the largest disparity which can be estimated reliably (Henkel, 1997):
$$\|d\| < \frac{\pi}{k_{\max}} = \frac{1}{2}\lambda_{\min}. \qquad (1)$$
A well-known example of the size-disparity scaling expressed in equation (1) is
found in the context of the spatial frequency channels assumed to exist in the
visual cortex. Cortical cells respond to spatial wavelengths down to about half
their peak wavelength $\lambda_{\mathrm{opt}}$; therefore, they can reliably estimate only disparities less than $\frac{1}{4}\lambda_{\mathrm{opt}}$. This is known as Marr's quarter-cycle limit (Blake, 1991).
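A small numerical reading of equation (1), added here for illustration (the pixel figures are hypothetical):

```python
def max_reliable_disparity(lambda_min):
    """Equation (1): largest disparity estimable without aliasing, given the
    minimum spatial wavelength present in the data."""
    return 0.5 * lambda_min

# If the finest structure in an image patch has a wavelength of 8 pixels,
# disparities can only be estimated reliably up to about 4 pixels; low-pass
# filtering the patch (raising lambda_min) extends this range, at the cost of
# spatial resolution of the resulting disparity map.
print(max_reliable_disparity(8.0))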
Equation (1) immediately suggests a way to extend the limited working range of
disparity estimators: a spatial smoothing of the image data before or during disparity calculation reduces $k_{\max}$, and in turn increases the disparity range. However, spatial smoothing also reduces the spatial resolution of the resulting disparity map.
Another way of modifying the usable range of disparity estimators is the application of a fixed preshift to the input data before disparity calculation. This would
require prior knowledge of the correct preshift to be applied, which is a nontrivial
problem. One could resort to hierarchical coarse-to-fine schemes, but the difficulties
with hierarchical schemes have already been elaborated.
The aliasing effects discussed are a general feature of sampling visual space with
only two eyes; instead of counteracting them, one can exploit them in a simple coherence-detection scheme, where the multi-unit activity in stacks of disparity detectors tuned
to a common view direction is analyzed.
Assuming that all disparity units i in a stack have random preshifts or presmoothing
applied to their input data, these units will have different, but slightly overlapping
working ranges $D_i = [d_{\min}^i, d_{\max}^i]$ for valid disparity estimates. An object with true disparity d, seen in the common view direction of such a stack, will therefore split the stack into two disjunct classes: the class $\mathcal{C}$ of estimators with $d \in D_i$ for all $i \in \mathcal{C}$, and the rest of the stack, $\bar{\mathcal{C}}$, with $d \notin D_i$. All disparity estimators in $\mathcal{C}$ will code more or less the true disparity, $d_i \approx d$, but the estimates of units belonging to $\bar{\mathcal{C}}$ will be subject to the random aliasing effects discussed, depending in a complicated way on image content and disparity range $D_i$ of the unit.
We will thus have $d_i \approx d \approx d_j$ whenever units i and j belong to $\mathcal{C}$, and random relationships otherwise. A simple coherence detection within each stack, i.e. searching for all units with $d_i \approx d_j$ and extracting the largest cluster found, will be sufficient to single out $\mathcal{C}$. The true disparity d in the view direction of the stack can then be simply estimated as an average over all coherently coding units, $d \approx \frac{1}{|\mathcal{C}|} \sum_{i \in \mathcal{C}} d_i$.
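One simple way to realize this coherence-detection step in software (our own toy version, not the author's circuit; the tolerance parameter and the clustering heuristic are assumptions):

```python
import numpy as np

def coherent_disparity(estimates, tolerance):
    """Find the largest cluster of mutually similar disparity estimates in one
    stack and return its mean, together with a validation value equal to the
    fraction of units in the coherent cluster."""
    estimates = np.asarray(estimates, dtype=float)
    best_members = np.zeros(len(estimates), dtype=bool)
    for d in estimates:                                  # candidate cluster centers
        members = np.abs(estimates - d) <= tolerance
        if members.sum() > best_members.sum():
            best_members = members
    disparity = estimates[best_members].mean()
    validation = best_members.sum() / len(estimates)
    return disparity, validation

# Example: one stack in which only four of seven units code coherently.
print(coherent_disparity([2.1, 1.9, 2.0, 2.2, -3.5, 0.7, 5.1], tolerance=0.3))
```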
3 Neural Network Implementation
Repeating this coherence detection scheme in every view direction results in a fully
parallel network structure for disparity calculation. Neighboring disparity stacks
responding to different view directions estimate disparity values independently from
each other, and within each stack, disparity units operate independently from each
other. Since coherence detection is an opportunistic scheme, extensions of the basic
algorithm to multiple spatial scales and combinations of different types of disparity
estimators are trivial. Additional units are simply included in the appropriate
coherence stacks. The coherence scheme will combine only the information from
the coherently coding units and ignore the rest of the data. For this reason, the
scheme also turns out to be extremely robust against single-unit failures.
[Figure 2: network diagram for one scan-line, with data lines labeled Left eye, Right eye, disparity data, and Cyclopean eye.]
Figure 2: The network structure for a single horizontal scan-line (left). The view
directions of the disparity stacks split the angle between the left and right lines
of sight in the network and 3D-space in half, therefore analyzing space along the
cyclopean view directions (right).
In the current implementation (Fig. 2), disparity units at a single spatial scale
are arranged into horizontal disparity layers. Left and right image data is fed
into this network along diagonally running data lines. This causes every disparity
layer to receive the stereo data with a certain fixed preshift applied, leading to the
required, slightly different working-ranges of neighboring layers. Disparity units
stacked vertically above each other are collected into a single disparity stack which
is then analyzed for coherent activity.
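The layer/preshift arrangement can be sketched in a few lines; this is an editorial illustration only, and the absolute-difference response used below is merely a placeholder for the motion-energy-type disparity units described above.

```python
import numpy as np

def disparity_layers(left_row, right_row, preshifts):
    """Feed one scan-line into a set of disparity layers.  Layer k sees the
    right image shifted by preshifts[k] pixels relative to the left image, so
    each layer covers a slightly different disparity range; the columns of the
    returned array correspond to the vertical disparity stacks."""
    left = np.asarray(left_row, dtype=float)
    right = np.asarray(right_row, dtype=float)
    responses = []
    for shift in preshifts:
        shifted = np.roll(right, shift)
        responses.append(-np.abs(left - shifted))   # higher value = better local match
    return np.stack(responses)                       # shape: (n_layers, n_pixels)
```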
4 Results
The new stereo network performs comparably on several standard test image sets
(Fig. 3). The calculated disparity maps are similar to maps obtained by classical
area-based approaches, but they display subpixel-precision. Since no smoothing or
regularization is performed by the coherence-based stereo algorithm, sharp disparity
edges can be observed at object borders.
Within the network, a simple validation map is available locally. A measure of local
Figure 3: Disparity maps for some standard test images (small insets), calculated
by the coherence-based stereo algorithm.
Figure 4: The performance of coherence-based stereo on a difficult scene with specular highlights, transparency and repetitive structures (left). The disparity map
(middle) is dense and correct, except for a few structure-less image regions. These
regions, as well as most object borders, are indicated in the validation map (right)
with a low [dark] validation count.
coherence can be obtained by calculating the relative number of coherently acting
disparity units in each stack, i.e. by calculating the ratio $N(\mathcal{C})/N(\mathcal{C} \cup \bar{\mathcal{C}})$, where $N(\mathcal{C})$ is the number of units in class $\mathcal{C}$. In most cases, this validation map clearly marks
image areas where the disparity calculations failed (for various reasons, notably at
occlusions caused by object borders, or in large structure-less image regions, where
no reliable matching can be obtained - compare Fig 4).
Close inspection of disparity and validation maps reveals that these image maps
are not aligned with the left or the right view of the scene. Instead, both maps are
registered with the cyclopean view. This is caused by the structural arrangement of
data lines and disparity stacks in the network. Reprojecting data lines and stacks
back into 3D-space shows that the stacks analyze three-dimensional space along
lines splitting the angle between the left and right view directions in half. This is
the cyclopean view direction as defined by (Hering, 1879).
It is easy to obtain the cyclopean view of the scene itself. With $I_i^l$ and $I_i^r$ denoting the left and right input data at the position of disparity-unit i, a summation over all coherently coding disparity units in a stack, i.e.,
Figure 5: A simple superposition of the left and right stereo images results in
diplopia (left). By using a vergence system, the two stereo images can be aligned
better (middle), but diplopia is still prominent in most areas of the visual field.
The fused cyclopean view of the scene (left) was calculated by the coherence-based
stereo network.
$$I^C \approx \frac{1}{|\mathcal{C}|} \sum_{i \in \mathcal{C}} \tfrac{1}{2}\left(I_i^l + I_i^r\right),$$
gives the image intensity $I^C$ in the cyclopean view-direction of this stack. Collecting $I^C$ from all disparity stacks gives the complete cyclopean view as the third co-registered map of the network (Fig. 5).
Acknowledgements
Thanks to Helmut Schwegler and Robert P. O'Shea for interesting discussions. Image data courtesy of G. Medioni, USC Institute for Robotics & Intelligent Systems, B. Bolles, AIC, SRI International, and G. Sommer, Kiel Cognitive Systems Group, Christian-Albrechts-Universität Kiel. An internet-based implementation of the algorithm presented in this paper is available at
http://axon.physik.uni-bremen.de/-rdh/online~alc/stereo/.
References
Adelson, E.H. & Bergen, J.R. (1985): Spatiotemporal Energy Models for the Perception of Motion. J. Opt. Soc. Am. A2: 284-299.
Barron, J.L., Fleet, D.J. & Beauchemin, S.S. (1994): Performance of Optical Flow
Techniques. Int. J. Comp. Vis. 12: 43-77.
Blake, R. & Wilson, H.R. (1991): Neural Models of Stereoscopic Vision. TINS 14:
445-452.
DeAngelis, G.C., Ohzawa, I. & Freeman, R.D. (1991): Depth is Encoded in the
Visual Cortex by a Specialized Field Structure. Nature 11: 156-159.
Fleck, M.M. (1991): A Topological Stereo Matcher. Int. J. of Comp. Vis. 6: 197-226.
Fleet, D.J. & Jepson, A.D. (1993): Stability of Phase Information. IEEE PAMI 2:
333-340.
Frisby, J.P. & and S. B. Pollard, S.B. (1991): Computational Issues in Solving the
Stereo Correspondence Problem. eds. M.S. Landy and J. A. Movshon, Computational Models of Visual Processing, pp. 331, MIT Press, Cambridge 1991.
Henkel, R.D. (1997): Fast Stereovision by Coherence Detection, in Proc. of CAIP'97, Kiel, eds. G. Sommer, K. Daniilidis and J. Pauli, pp. 297, LNCS 1296, Springer, Heidelberg 1997.
E. Hering (1879): Der Raumsinn und die Bewegung des Auges, in Handbuch der
Psychologie, ed. L. Hermann, Band 3, Teil 1, Vogel, Leipzig 1879.
Marr, D. & Poggio, T. (1979): A Computational Theory of Human Stereo Vision.
Proc. R. Soc. Lond. B 204: 301-328.
Ohta, Y, & Kanade, T. (1985): Stereo by Intra- and Inter-scanline Search using
dynamic programming. IEEE PAMI 7: 139-154.
Qian, N. & Zhu, Y. (1997): Physiological Computation of Binocular Disparity, to
appear in Vision Research.
Yuille, A.L., Geiger, D. & Biilthoff, H.H. (1991): Stereo Integration, Mean Field
Theory and Psychophysics. Network 2: 423-442.
387 | 1,353 | Stacked Density Estimation
Padhraic Smyth *
Information and Computer Science
University of California, Irvine
CA 92697-3425
smyth@ics.uci.edu
David Wolpert
NASA Ames Research Center
Caelum Research
MS 269-2, Mountain View, CA 94035
dhw@ptolemy.arc.nasa.gov
Abstract
In this paper, the technique of stacking, previously only used for
supervised learning, is applied to unsupervised learning. Specifically, it is used for non-parametric multivariate density estimation,
to combine finite mixture model and kernel density estimators. Experimental results on both simulated data and real world data sets
clearly demonstrate that stacked density estimation outperforms
other strategies such as choosing the single best model based on
cross-validation, combining with uniform weights, and even the single best model chosen by "cheating" by looking at the data used
for independent testing.
1
Introduction
Multivariate probability density estimation is a fundamental problem in exploratory
data analysis, statistical pattern recognition and machine learning. One frequently
estimates density functions for which there is little prior knowledge on the shape
of the density and for which one wants a flexible and robust estimator (allowing
multimodality if it exists). In this context, the methods of choice tend to be finite
mixture models and kernel density estimation methods. For mixture modeling,
mixtures of Gaussian components are frequently assumed and model choice reduces
to the problem of choosing the number k of Gaussian components in the model
(Titterington, Smith and Makov, 1986) . For kernel density estimation, kernel
shapes are typically chosen from a selection of simple unimodal densities such as
Gaussian, triangular, or Cauchy densities, and kernel bandwidths are selected in a
data-driven manner (Silverman 1986; Scott 1994).
* Also with the Jet Propulsion Laboratory 525-3660, California Institute of Technology, Pasadena, CA 91109.
As argued by Draper (1996), model uncertainty can contribute significantly to predictive error in estimation. While usually considered in the context of supervised
learning, model uncertainty is also important in unsupervised learning applications
such as density estimation. Even when the model class under consideration contains
the true density, if we are only given a finite data set, then there is always a chance
of selecting the wrong model. Moreover , even if the correct model is selected, there
will typically be estimation error in the parameters of that model. These difficulties
are summarized by writing
$$P(f \mid D) = \sum_M \int d\theta_M\, P(\theta_M \mid D, M) \times P(M \mid D) \times f_{M,\theta_M}, \qquad (1)$$
where f is a density, D is the data set, M is a model, and $\theta_M$ is a set of values for the parameters of model M. The posterior probability $P(M \mid D)$ reflects model uncertainty, and the posterior $P(\theta_M \mid D, M)$ reflects uncertainty in setting the parameters even once one knows the model. Note that if one is privy to $P(M, \theta_M)$, then Bayes' theorem allows us to write out both of our posteriors explicitly, so that we explicitly have $P(f \mid D)$ (and therefore the Bayes-optimal density) given by a weighted average of the $f_{M,\theta_M}$. (See also Escobar and West (1995).) However, even when we know $P(M, \theta_M)$, calculating the combining weights can be difficult. Thus, various approximations and sampling techniques are often used, a process that necessarily introduces extra error (Chickering and Heckerman 1997). More generally, consider the case of mis-specified models where the model class does not include the true model, so our presumption for $P(M, \theta_M)$ is erroneous. In this case
often one should again average.
Thus, a natural approach to improving density estimators is to consider empirically-driven combinations of multiple density models. There are several ways to do this, especially if one exploits previous combining work in supervised learning. For example, Ormoneit and Tresp (1996) have shown that "bagging" (uniformly weighting different parametrizations of the same model trained on different bootstrap samples), originally introduced for supervised learning (Breiman 1996a), can improve accuracy for mixtures of Gaussians with a fixed number of components. Another supervised learning technique for combining different types of models is "stacking" (Wolpert 1992), which has been found to be very effective for both regression and classification (e.g., Breiman (1996b)). This paper applies stacking to density estimation, in particular to combinations involving kernel density estimators together with finite mixture model estimators.
2 Stacked Density Estimation
2.1 Background on Density Estimation with Mixtures and Kernels
Consider a set of d real-valued random variables $X = \{X^1, \ldots, X^d\}$. Upper-case symbols denote variable names (such as $X^i$) and lower-case symbols a particular value of a variable (such as $x^i$). $\underline{x}$ is a realization of the vector variable X. $f(\underline{x})$ is shorthand for $f(X = \underline{x})$ and represents the joint probability distribution of X. $D = \{\underline{x}_1, \ldots, \underline{x}_N\}$ is a training data set where each sample $\underline{x}_i$, $1 \le i \le N$, is an independently drawn sample from the underlying density function $f(\underline{x})$.
A commonly used model for density estimation is the finite mixture model with k
components, defined as:
$$f_k(\underline{x}) = \sum_{i=1}^{k} \alpha_i g_i(\underline{x}), \qquad (2)$$
where $\sum_{i=1}^{k} \alpha_i = 1$. The component $g_i$'s are usually relatively simple unimodal
densities such as Gaussians. Density estimation with mixtures involves finding the
locations, shapes, and weights of the component densities from the data (using
for example the Expectation-Maximization (EM) procedure). Kernel density estimation can be viewed as a special case of mixture modeling where a component
is centered at each data point, given a weight of 1/ N, and a common covariance
structure (kernel shape) is estimated from the data.
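A direct numerical reading of equation (2) with Gaussian components, added here purely as an editorial illustration (the parameter layout is our own choice):

```python
import numpy as np

def gaussian_mixture_density(x, weights, means, covariances):
    """Equation (2): f_k(x) = sum_i alpha_i g_i(x), with Gaussian components.
    x: (d,) point; weights: (k,); means: (k, d); covariances: (k, d, d)."""
    x = np.asarray(x, dtype=float)
    density = 0.0
    for alpha, mu, cov in zip(weights, means, covariances):
        d = len(mu)
        diff = x - mu
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        density += alpha * norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
    return density
```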
The quality of a particular probabilistic model can be evaluated by an appropriate
scoring rule on independent out-of-sample data, such as the test set log-likelihood
(also referred to as the log-scoring rule in the Bayesian literature). Given a test
data set $D^{\mathrm{test}}$, the test log-likelihood is defined as
$$\log f\!\left(D^{\mathrm{test}} \mid f_k(\underline{x})\right) = \sum_{\underline{x}_i \in D^{\mathrm{test}}} \log f_k(\underline{x}_i). \qquad (3)$$
This quantity can play the role played by classification error in classification or
squared error in regression. For example, cross-validated estimates of it can be
used to find the best number of clusters to fit to a given data set (Smyth, 1996) .
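For completeness, the scoring rule of equation (3) is trivial to compute for any density estimator; the sketch below (ours) assumes only that the estimator is available as a callable returning $f_k(\underline{x})$ for a single point.

```python
import numpy as np

def test_log_likelihood(test_points, density):
    """Equation (3): sum of log f_k(x_i) over the test set; 'density' is any
    callable that returns the estimated density at a single point x."""
    return float(sum(np.log(density(x)) for x in test_points))
```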
2.2 Background on Stacking
Stacking can be used either to combine models or to improve a single model. In
the former guise it proceeds as follows. First, subsamples of the training set are formed. Next the models are all trained on one subsample and the resultant joint predictive behavior on another subsample is observed, together with information concerning the optimal predictions on the elements in that other subsample. This is repeated for other pairs of subsamples of the training set. Then an additional ("stacked") model is trained to learn, from the subsample-based observations, the
relationship between the observed joint predictive behavior of the models and the
optimal predictions. Finally, this learned relationship is used in conjunction with
the predictions of the individual models being combined (now trained on the entire
data set) to determine the full system's predictions.
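A minimal sketch of this generic recipe for the regression case follows. All details here, including the choice of two base models and a linear combiner, are illustrative assumptions rather than Wolpert's or Breiman's exact setup.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def stacked_regression(X, y, n_splits=5, seed=0):
    base_models = [LinearRegression(),
                   DecisionTreeRegressor(max_depth=3, random_state=seed)]
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    # Out-of-fold predictions: the "observed joint predictive behavior".
    Z = np.zeros((len(y), len(base_models)))
    for train_idx, test_idx in kf.split(X):
        for m, model in enumerate(base_models):
            model.fit(X[train_idx], y[train_idx])
            Z[test_idx, m] = model.predict(X[test_idx])
    # The "stacked" model learns how to combine the base predictions.
    combiner = LinearRegression().fit(Z, y)
    # Finally, retrain the base models on the full data set.
    for model in base_models:
        model.fit(X, y)

    def predict(X_new):
        Z_new = np.column_stack([m.predict(X_new) for m in base_models])
        return combiner.predict(Z_new)
    return predict
```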
2.3 Applying Stacking to Density Estimation
Consider a set of M different density models, f_m(x), 1 ≤ m ≤ M. In this paper each
of these models will be either a finite mixture with a fixed number of component
densities or a kernel density estimate with a fixed kernel and a single fixed global
bandwidth in each dimension. (In general though no such restrictions are needed.)
The procedure for stacking the M density models is as follows:
1. Partition the training data set D v times, exactly as in v-fold cross-validation (we use v = 10 throughout this paper), and for each fold:
(a) Fit each of the M models to the training portion of the partition of D.
(b) Evaluate the likelihood of each data point in the test partition of D,
for each of the M fitted models.
2. After doing this one has M density estimates for each of N data points,
and therefore a matrix of size N × M, where each entry is f_m(x_i), the
out-of-sample likelihood of the mth model on the ith data point.
3. Use that matrix to estimate the combination coefficients {β_1, ..., β_M} that
maximize the log-likelihood at the points x_i of a stacked density model of
the form:

f_stacked(x) = Σ_{m=1}^{M} β_m f_m(x).
Since this is itself a mixture model, but where the f_m(x_i) are fixed, the EM
algorithm can be used to (easily) estimate the β_m.
4. Finally, re-estimate the parameters of each of the M component density
models using all of the training data D. The stacked density model is then
the linear combination of those density models, with combining coefficients
given by the β_m.
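The four steps above can be put together in a short sketch. The code below is an illustrative implementation under stated assumptions (scikit-learn estimators, a fixed number of EM iterations for the weights, a single scalar bandwidth scale, and Gaussian kernels standing in for the paper's triangular kernels); it is not the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import KFold

def make_models(x_std):
    # M = 6 constituent models: three kernel bandwidths, three mixture sizes.
    return ([KernelDensity(kernel="gaussian", bandwidth=h * x_std) for h in (0.1, 0.4, 1.5)] +
            [GaussianMixture(n_components=k, random_state=0) for k in (2, 4, 8)])

def stacked_density_estimate(X, v=10, em_iters=200):
    N = X.shape[0]
    models = make_models(float(np.std(X)))
    M = len(models)

    # Steps 1-2: v-fold partition; collect the N x M matrix of
    # out-of-fold densities f_m(x_i).
    F = np.zeros((N, M))
    for train_idx, test_idx in KFold(n_splits=v, shuffle=True, random_state=0).split(X):
        for m, model in enumerate(models):
            model.fit(X[train_idx])
            F[test_idx, m] = np.exp(model.score_samples(X[test_idx]))

    # Step 3: EM for the combination weights beta, with F held fixed.
    beta = np.full(M, 1.0 / M)
    for _ in range(em_iters):
        resp = F * beta + 1e-300              # unnormalized responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        beta = resp.mean(axis=0)              # M-step: average responsibility

    # Step 4: refit every constituent model on all of D.
    for model in models:
        model.fit(X)

    def log_density(X_new):
        dens = np.column_stack([np.exp(m.score_samples(X_new)) for m in models])
        return np.log(dens @ beta)
    return beta, log_density
```

Calling stacked_density_estimate(X) returns the fitted weights β and a function evaluating the stacked log-density; with v = 10 the partitioning mirrors the procedure described above.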
3 Experimental Results
In our stacking experiments M = 6: three triangular kernels with bandwidths of 0.1, 0.4, and 1.5 of the standard deviation (of the full data set) in each dimension, and three Gaussian mixture models with k = 2, 4, and 8 components. This set of models was chosen to provide a reasonably diverse representational basis for stacking. We follow roughly the same experimental procedure as described in Breiman (1996b) for stacked regression:
? Each data set is randomly split into training and test partitions 50 times,
where the test partition is chosen to be large enough to provide reasonable
estimates of out-of-sample log-likelihood.
? The following techniques are run on each training partition:
1. Stacking: The stacked combination of the six constituent models.
2. Cross-Validation: The single best model as indicated by the maximum likelihood score of the M = 6 single models in the N x M
cross-validated table of likelihood scores.
3. Uniform Weighting: A uniform average of the six models.
4. "Cheating": The best single model, i.e., the model having the largest
likelihood on the test data partition.
5. Truth: The true model structure, if the true model is one of the six
generating the data (only valid for simulated data).
? The log-likelihoods of the models resulting from these techniques are calculated on the test data partition. The log-likelihood of a single Gaussian
model (parameters determined on the training data) is subtracted from
each model's log-likelihood to provide some normalization of scale.
3.1
Results on Real Data Sets
Four real data sets were chosen for experimental evaluation. The diabetes data
set consists of 145 data points used in Gaussian clustering studies by Banfield and
Raftery (1993) and others. Fisher's iris data set is a classic data set in 4 dimensions
with 150 data points. Both of these data sets are thought to consist roughly of
3 clusters which can be reasonably approximated by 3 Gaussians. The Barney
and Peterson vowel data (2 dimensions, 639 data points) contains 10 distinct vowel
sounds and so is highly multi-modal. The star-galaxy data (7 dimensions, 499 data
points) contains non-Gaussian looking structure in various 2d projections.
Table 1 summarizes the results. In all cases stacking had the highest average log-likelihood, even out-performing "cheating" (the single best model chosen from the
test data). (Breiman (1996b) also found for regression that stacking outperformed
Table 1: Relative performance of stacking multiple mixture models, for various data sets, measured (relative to the performance of a single Gaussian model) by mean log-likelihood on test data partitions. The maximum for each data set is underlined.

Data Set      | Gaussian | Cross-Validation | "Cheating" | Uniform | Stacking
Diabetes      |  -352.9  |       27.8       |    30.4    |   29.2  |   31.8
Fisher's Iris |   -52.6  |       18.3       |    21.2    |   18.3  |   22.5
Vowel         |   128.9  |       53.5       |    54.6    |   40.2  |   55.8
Star-Galaxy   |  -257.0  |      678.9       |   721.6    |  789.1  |  888.9
Table 2: Average across 20 runs of the stacked weights found for each constituent model. The columns with h = ... are for the triangular kernels and the columns with k = ... are for the Gaussian mixtures.

Data Set      | h=0.1 | h=0.4 | h=1.5 | k=2  | k=4  | k=8
Diabetes      | 0.13  | 0.32  | 0.01  | 0.09 | 0.03 | 0.41
Fisher's Iris | 0.02  | 0.16  | 0.26  | 0.00 | 0.40 | 0.16
Vowel         | 0.02  | 0.20  | 0.53  | 0.25 | 0.00 | 0.00
Star-Galaxy   | 0.00  | 0.04  | 0.03  | 0.03 | 0.27 | 0.62
the "cheating" method.) We considered two null hypotheses: stacking has the same
predictive accuracy as cross-validation, and it has the same accuracy as uniform
weighting. Each hypothesis can be rejected with a chance of less than 0.01% of being
incorrect, according to the Wilcoxon signed-rank test, i.e., the observed differences
in performance are extremely strong even given the fact that this particular test is
not strictly applicable in this situation.
On the vowel data set uniform weighting performs much worse than the other
methods: it is closer in performance to stacking on the other 3 data sets. On three
of the data sets, using cross-validation to select a single model is the worst method.
"Cheating" is second-best to stacking except on the star-galaxy data, where it
is worse than uniform weighting also: this may be because the star-galaxy data
probably induces the greatest degree of mis-specification relative to this 6-model
class (based on visual inspection).
Table 2 shows the averages of the stacked weight vectors for each data set. The
mixture components generally got higher weight than the triangular kernels. The
vowel and star-galaxy data sets have more structure than can be represented by any
of the component models and this is reflected in the fact that for each most weight
is placed on the most complex mixture model with k = 8.
3.2 Results on Simulated Data with no Model Mis-Specification
We simulated data from a 2-dimensional 4-Gaussian mixture model with a reasonable degree of overlap (this is the data set used in Ripley (1994) with the class
labels removed) and compared the same models and combining/selection schemes
as before, except that "truth" is also included, i.e., the scheme which always selects the true model structure with k = 4 Gaussians. For each training sample size, 20 different training data sets were simulated, and the mean likelihood on an independent test data set of size 1000 was reported.
[Figure 1 plot: test-set log-likelihood (relative to a single Gaussian) versus training sample size (20 to 200), with curves for Stacking, Uniform, "Cheating", and True K.]
Figure 1: Plot of mean log-likelihood (relative to a single Gaussian model) for
various density estimation schemes on data simulated from a 4-component Gaussian
mixture.
Note that here we are assured of having the true model in the set of models being considered, something that is presumably never exactly the case in the real
world (and presumably was not the case for the experiments recounted in Table
1.) Nonetheless, as indicated in (Figure 1), stacking performed about the same as
the "cheating" method and significantly outperformed the other methods, including "truth." (Results where some of the methods had log-likelihoods lower than the
single Gaussian are not shown for clarity).
The fact that "truth" performed poorly on the smaller sample sizes is due to the fact
that with smaller sample sizes it was often better to fit a simpler model with reliable
parameter estimates (which is what "cheating" typically would do) than a more
complex model which may overfit (even when it is the true model structure). As the
sample size increases, both "truth" and cross-validation approach the performance
of "cheating" and stacking: uniform weighting is universally poorer as one would
expect when the true model is within the model class. The stacked weights at the
different sample sizes (not shown) start out with significant weight on the triangular
kernel model and gradually shift to the k = 2 Gaussian mixture model and finally to the (true) k = 4 Gaussian model as sample size grows. Thus, stacking is seen to incur no penalty when the true model is within the model class being fit. In fact the opposite is true; for small sample sizes stacking outperforms other density estimation techniques which place full weight on a single (but poorly parametrized) model.
4 Discussion and Conclusions
Selecting a global bandwidth for kernel density estimation is still a topic of debate
among statisticians. Stacking allows the possibility of side-stepping the issue of
a single bandwidth by combining kernels with different bandwidths and different
kernel shapes. A stacked combination of such kernel estimators is equivalent to using
a single composite kernel that is a convex combination of the underlying kernels.
For example, kernel estimators based on finite support kernels can be regularized
in a data-driven manner by combining them with infinite support kernels. The key
point is that the shape and width of the resulting "effective" kernel is driven by the
data.
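The equivalence is easy to check numerically in one dimension: the β-weighted combination of two kernel estimators equals a single estimator whose kernel profile is the β-weighted mixture of the two profiles. The small sketch below uses Gaussian kernels and made-up weights, which are assumptions for illustration only.

```python
import numpy as np

def gauss(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(size=50)
x = np.linspace(-3, 3, 7)
beta, h1, h2 = 0.3, 0.2, 1.0          # illustrative weight and bandwidths

# Stacked combination of two kernel density estimators ...
kde = lambda pts, h: np.mean(gauss(pts[:, None] - data[None, :], h), axis=1)
stacked = beta * kde(x, h1) + (1 - beta) * kde(x, h2)

# ... equals one estimator whose "effective" kernel is the convex combination.
composite = np.mean(beta * gauss(x[:, None] - data[None, :], h1) +
                    (1 - beta) * gauss(x[:, None] - data[None, :], h2), axis=1)
assert np.allclose(stacked, composite)
```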
It is also worth noting that by combining Gaussian mixture models with different
k values one gets a hierarchical "mixture of mixtures" model. This hierarchical
model can provide a natural multi-scale representation of the data, which is clearly
similar in spirit to wavelet density estimators, although the functional forms and
estimation methodologies for each technique can be quite different. There is also
a representational similarity to Jordan and Jacobs' (1994) "mixture of experts"
model where the weights are allowed to depend directly on the inputs. Exploiting
that similarity, one direction for further work is to investigate adaptive weight
parametrizations in the stacked density estimation context.
Acknowledgements
The work of P.S. was supported in part by NSF Grant IRI-9703120 and in part by
the Jet Propulsion Laboratory, California Institute of Technology, under a contract
with the National Aeronautics and Space Administration .
References
Banfield, J. D., and Raftery, A. E., 'Model-based Gaussian and non-Gaussian
clustering, ' Biometrics, 49, 803-821, 1993.
Breiman, L. , 'Bagging predictors,' Machine Learning, 26(2), 123-140, 1996a.
Breiman, L., 'Stacked regressions, ' Machine Learning, 24, 49-64, 1996b.
Chickering, D. M., and Heckerman, D., 'Efficient approximations for the marginal
likelihood of Bayesian networks with hidden variables,' Machine Learning,
In press.
Draper, D., 'Assessment and propagation of model uncertainty (with discussion),'
Journal of the Royal Statistical Society B, 57, 45-97, 1995.
Escobar, M. D., and West, M., 'Bayesian density estimation and inference with
mixtures,' J. Am. Stat. Assoc., 90, 577-588, 1995.
Jordan, M. 1. and Jacobs, R. A., 'Hierarchical mixtures of experts and the EM
algorithm,' Neural Computation, 6, 181-214, 1994.
Madigan, D., and Raftery, A. E., 'Model selection and accounting for model uncertainty in graphical models using Occam's window,' J. Am. Stat. Assoc.,
89, 1535-1546, 1994.
Ormoneit, D., and Tresp, V., 'Improved Gaussian mixture density estimates using Bayesian penalty terms and network averaging,' in Advances in Neural
Information Processing 8, 542-548, MIT Press, 1996.
Ripley, B. D. 1994. 'Neural networks and related methods for classification (with
discussion),' J. Roy. Stat. Soc. B, 56,409-456.
Smyth, P.,'Clustering using Monte-Carlo cross-validation,' in Proceedings of the
Second International Conference on Knowledge Discovery and Data Mining, Menlo Park, CA: AAAI Press, pp.126-133, 1996.
Titterington, D. M., A. F. M. Smith, U. E. Makov, Statistical Analysis of Finite
Mixture Distributions, Chichester, UK: John Wiley and Sons, 1985
Wolpert, D. 1992. 'Stacked generalization,' Neural Networks, 5, 241-259,
388 | 1,354 | Phase transitions and the perceptual
organization of video sequences
Yair Weiss
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology
E10-120, Cambridge, MA 02139
http://www-bcs.mit.edu/~yweiss
Abstract
Estimating motion in scenes containing multiple moving objects
remains a difficult problem in computer vision. A promising approach to this problem involves using mixture models, where the
motion of each object is a component in the mixture. However, existing methods typically require specifying in advance the number
of components in the mixture, i.e. the number of objects in the
scene.
Here we show that the number of objects can be estimated automatically in a maximum likelihood framework, given an assumption
about the level of noise in the video sequence. We derive analytical
results showing the number of models which maximize the likelihood for a given noise level in a given sequence. We illustrate these
results on a real video sequence, showing how the phase transitions
correspond to different perceptual organizations of the scene.
Figure 1a depicts a scene where motion estimation is difficult for many computer
vision systems. A semi-transparent surface partially occludes a second surface,
and the camera is translating horizontally. Figure 1b shows a slice through the
horizontal component of the motion generated by the camera - points that are
closer to the camera move faster than those further away. In practice, the local
motion information would be noisy as shown in figure 1c and this imposes conflicting
demands on a motion analysis system - reliable estimates require pooling together
many measurements while avoiding mixing together measurements derived from the
two different surfaces.
Figure 1: a: A simple scene that can cause problems for motion estimation. One surface
partially occludes another surface. b: A cross section through the horizontal motion field
generated when the camera translates horizontally. Points closer to the camera move
faster. c: Noisy motion field. In practice each local measurement will be somewhat noisy
and pooling of information is required. d: A cross section through the output of a multiple
motion analysis system. Points are assigned to surfaces (denoted by different plot symbols)
and the motion of each surface is estimated.
Figure 2: The "correct" number of surfaces in a given scene is often ambiguous. Was the
motion here generated by one or two surfaces?
Significant progress in the analysis of such scenes has been achieved by multiple
motion analyzers - systems that simultaneously segment the scene into surfaces and
estimating the motion of each surface [9]. Mixture models are a commonly used
framework for performing mUltiple motion estimation [5, 1, 10]. Figure 1d shows
a slice through the output of a multiple motion analyzer on this scene - pixels are
assigned to one of two surfaces and motion information is only combined for pixels
belonging to the same surface.
The output shown in figure 1d was obtained by assuming the scene contains two
surfaces. In general, of course, one does not know the number of surfaces in the
scene in advance. Figure 2 shows the difficulty in estimating this number. It is not
clear whether this is very noisy data generated by a single surface, or less noisy
data generated by two surfaces. There seems no reason to prefer one description
over another. Indeed, the description where there are as many surfaces as pixels is
also a valid interpretation of this data.
Here we take the approach that there is no single "correct" number of surfaces for
a given scene in the absence of any additional assumptions. However, given an
assumption about the noise in the sequence, there are more likely and less likely
interpretations. Intuitively, if we know that the data in figure 2a was taken with
a very noisy camera, we would tend to prefer the one surface solution - adding
additional surfaces would cause us to fit the noise rather than the data. However, if
we know that there is little noise in the sequence, we would prefer solutions that use
many surfaces, there is a lot less danger of "overfitting". In this paper¹ we show, following [6, 8], that this intuition regarding the dependence of the number of surfaces on the assumed noise level is captured in the maximum likelihood framework. We derive analytical results for the critical values of noise levels where the likelihood function undergoes a "phase transition" - from being maximized by a single model to being maximized by multiple models. We illustrate these transitions on synthetic and real video data.

¹ A longer version of this paper is available on the author's web page.
1 Theory
1.1 Mixture Models for optical flow
In mixture models for optical flow (cf. [5, 1]) the scene is modeled as composed of K surfaces, with the velocity of the k-th surface at location (x, y) given by (u_k(x, y), v_k(x, y)). The velocity field is parameterized by a vector θ_k. A typical choice [9] is the affine representation:

u_k(x, y) = θ_k^1 + θ_k^2 x + θ_k^3 y        (1)
v_k(x, y) = θ_k^4 + θ_k^5 x + θ_k^6 y        (2)
The affine family of motions includes rotations, translations, scalings and shears. It
corresponds to the 2D projection of a plane undergoing rigid motion in depth.
Corresponding pixels in subsequent frames are assumed to have identical intensity values, up to imaging noise which is modeled as a Gaussian with variance σ². The task of multiple motion estimation is to find the most likely motion parameter values given the image data. A standard derivation (see e.g. [1]) gives the following log likelihood function for the parameters Θ:

l(Θ) = Σ_{x,y} log( Σ_{k=1}^{K} e^{-R_k²(x,y)/2σ²} )        (3)

with R_k(x, y) the residual intensity at pixel (x, y) for velocity k:

R_k(x, y) = I_x(x, y) u_k(x, y) + I_y(x, y) v_k(x, y) + I_t(x, y)        (4)

where I_x, I_y, I_t denote the spatial and temporal derivatives of the image sequence. Although our notation does not make it explicit, R_k(x, y) is a function of θ_k through equations 1-2. As in most mixture estimation applications, equation 3 is not maximized directly, but rather an Expectation-Maximization (EM) algorithm is used to iteratively increase the likelihood [3].
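A minimal sketch of that EM loop is given below. It is not the author's implementation; the initialization, the number of iterations, and the use of an explicit design matrix are assumptions made here for illustration.

```python
import numpy as np

def em_motion_mixture(Ix, Iy, It, x, y, K=2, sigma=1.0, n_iter=50, seed=0):
    """Fit K affine motion fields to flattened derivative images Ix, Iy, It.

    All inputs are 1-D arrays over pixels; x, y are the pixel coordinates.
    Returns theta with shape (K, 6), the affine parameters of eqs. (1)-(2).
    """
    rng = np.random.default_rng(seed)
    # Design matrix so that R_k = A @ theta_k + It  (eq. 4).
    A = np.column_stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y])
    theta = 0.1 * rng.normal(size=(K, 6))

    for _ in range(n_iter):
        # E-step: ownership probabilities from the Gaussian residual model.
        R = A @ theta.T + It[:, None]                  # shape (n_pixels, K)
        w = np.exp(-R**2 / (2 * sigma**2)) + 1e-12
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares for each surface's parameters.
        for k in range(K):
            Wk = w[:, k]
            AtWA = A.T @ (A * Wk[:, None])
            AtWb = A.T @ (Wk * (-It))
            theta[k] = np.linalg.solve(AtWA, AtWb)
    return theta, w
```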
1.2 Maximum Likelihood not necessarily with maximum number of models
It may seem that since K is fixed in the likelihood function (equation 3) there is no way that the number of surfaces can be found by maximizing the likelihood. However, maximizing the likelihood may lead to a solution in which some of the θ parameters are identical [6, 5, 8]. In this case, although the number of surfaces is still K, the number of distinct surfaces may be any number less than K.
Consider a very simple case where K = 2 and the motion of each surface is restricted to horizontal translation u(x, y) = u, v(x, y) = 0. The advantage of this simplified
Figure 3: The log likelihood for the data in figure 2 undergoes a phase transition when σ is varied. For small values of σ the likelihood has two maxima, and at both these maxima the two motions are distinct. For large σ² the likelihood function has a single maximum at the origin, corresponding to the solution where both velocities are equal to zero, or only one unique surface.
case is that the likelihood function is a function of two variables and can be easily visualized. Figure 3 shows the likelihood function for the data in figure 2 as σ is varied. Observe that for small values of σ² the likelihood has two maxima, and at both these maxima the two motions are distinct. For large σ² the likelihood function has a single maximum at the origin, corresponding to the solution where both velocities are equal to zero, or only one unique surface. This is a simple example where the ML solution corresponds to a small number of unique surfaces.
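For this two-translation case the likelihood surface of Figure 3 can be reproduced directly with a grid evaluation (an illustrative sketch; the synthetic derivatives, the two σ values, and the grid are assumptions made here):

```python
import numpy as np

def loglik_two_translations(u1, u2, Ix, It, sigma):
    # l(u1, u2) = sum_xy log( exp(-R1^2/2s^2) + exp(-R2^2/2s^2) ),  R_k = Ix*u_k + It
    R1 = Ix * u1 + It
    R2 = Ix * u2 + It
    return np.sum(np.log(np.exp(-R1**2 / (2 * sigma**2)) +
                         np.exp(-R2**2 / (2 * sigma**2))))

rng = np.random.default_rng(0)
Ix = rng.normal(size=500)
# Half the pixels move at +0.3, half at -0.3, plus measurement noise.
true_u = np.where(np.arange(500) < 250, 0.3, -0.3)
It = -Ix * true_u + 0.05 * rng.normal(size=500)

grid = np.linspace(-0.6, 0.6, 61)
for sigma in (0.05, 1.0):   # small vs. large assumed noise level
    L = np.array([[loglik_two_translations(a, b, Ix, It, sigma) for b in grid]
                  for a in grid])
    i, j = np.unravel_index(np.argmax(L), L.shape)
    # distinct maxima for small sigma; both velocities near zero for large sigma
    print(sigma, grid[i], grid[j])
```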
Can we predict the range of values of σ for which the likelihood function has a maximum at the origin? This happens when the gradient of the likelihood at the origin is zero and the Hessian has two negative eigenvalues. It is easy to show that if the data has zero mean, the gradient is zero regardless of σ. As for the Hessian, H, direct calculation gives:

H = c [ E/(2σ²) − 1, −E/(2σ²); −E/(2σ²), E/(2σ²) − 1 ]        (5)

where E is the mean squared residual of a single motion and c is a positive constant. The two eigenvalues are proportional to −1 and E/σ² − 1. So the likelihood function has a local maximum at the origin if and only if E < σ² (see [6, 4, 8] for a similar analysis in other contexts).
This result makes intuitive sense. Recall that σ² is the expected noise variance. Thus if the mean squared residual is less than σ² with a single surface, there is no
need to add additional surfaces. The result on the Hessian shows that this intuition
is captured in the likelihood function. There is no need to introduce additional
"complexity costs" to avoid overfitting in this case.
More generally, if we assume the velocity fields are of general parametric form, the
Hessian evaluated at the point where both surfaces are identical has the form:
H = c [ E/(2σ²) − F, −E/(2σ²); −E/(2σ²), E/(2σ²) − F ]        (6)

where E and F are matrices:

E = Σ_{x,y} R²(x, y) d(x, y) d(x, y)^T        (7)
Figure 4: a: data generated by two lines. b: the predicted phase diagram for the
likelihood of this dataset in a four component mixture. The phase transitions are at σ = 0.084, 0.112, 0.8088.
F = Σ_{x,y} d(x, y) d(x, y)^T        (8)

with d(x, y) = ∂R(x, y)/∂θ, and R(x, y) the residual as before.
A necessary and sufficient condition for the Hessian to have only negative eigenvalues
is:

λ_max(F⁻¹E) < σ²        (9)

Thus when the maximal eigenvalue of F⁻¹E is less than σ² the fit with a single model is a local maximum of the likelihood. Note that F⁻¹E is very similar to a weighted mean squared error, with every residual weighted by a positive definite matrix (E sums all the residuals times their weight, and F sums all the weights, so F⁻¹E is similar to a weighted average).
The above analysis predicts the phase transition of a two component mixture likelihood, i.e. the critical value of σ² such that above this critical value, the maximum
likelihood solution will have identical motion parameters for both surfaces. This
analysis can be straightforwardly generalized to finding the first phase transition of
a K component mixture, although the subsequent transitions are harder to analyze.
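Equations (7)-(9) translate directly into a few lines of code for predicting the first phase transition of a parametric (here affine) motion model. This is a sketch under the same assumptions as the EM fragment above; d is the gradient of the residual with respect to the affine parameters.

```python
import numpy as np

def first_phase_transition(Ix, Iy, It, x, y):
    """Critical noise variance sigma^2 below which a two-component fit splits:
    above it, a single affine motion is a local maximum (eq. 9)."""
    A = np.column_stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y])  # rows are d(x, y)
    # Single-motion least-squares fit and its residuals R(x, y).
    theta, *_ = np.linalg.lstsq(A, -It, rcond=None)
    R = A @ theta + It
    E = (A * (R**2)[:, None]).T @ A      # eq. (7): sum of R^2 d d^T
    F = A.T @ A                          # eq. (8): sum of d d^T
    eigvals = np.linalg.eigvals(np.linalg.solve(F, E))
    return float(np.max(eigvals.real))   # critical sigma^2
```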
2 Results
The fact that the likelihood function undergoes a phase transition as σ is varied predicts that a ML technique will converge to different numbers of distinct models as σ is varied. We first illustrate these phase transitions on a 1D line fitting problem which shares some of the structure of multiple motion analysis and is easily visualized.
Figure 4a shows data generated by two lines with additive noise, and figure 4b
shows a phase diagram calculated using repeated application of equation 9; i.e. by
solving equation 9 for all the data, taking the two line solution obtained after the
transition, and repeating the calculation separately for points assigned to each of
the two lines.
Figure 5 shows the output of an EM algorithm on this data set. Initial conditions
are identical in all runs, and the algorithm converges to one, two, three or four
distinct lines depending on σ.
We now illustrate the phase transitions on a real video sequence. Figures 6-8
show the output of an EM motion segmentation algorithm with four components
on the MPEG flower garden sequence (cf. [9, 10]). The camera is translating in
[Figure 5 panels, left to right: σ = 0.078, 0.089, 0.1183, 1.0.]
Figure 5: The data in figure 1 are fit with one, two, three or four models depending on σ. The results of EM with identical initial conditions are shown; only σ is varied. The transitions are consistent with the theoretical predictions.
Figure 6: The first phase transition. The algorithm finds two segments corresponding to the tree and the rest of the scene. The critical value of σ² for which this transition happens is consistent with the theoretical prediction.
the scene, and objects move with different velocities due to parallax. The phase
transitions correspond to different perceptual organizations of the scene - first the
tree is segmented from the background, then branches are split from the tree, and
finally the background splits into the flower bed and the house.
3 Discussion
Estimating the number of components in a Gaussian mixture is a well-researched topic in statistics and data mining [7]. Most approaches involve some tradeoff parameter to balance the benefit of an additional component versus the added complexity [2]. Here we have shown how this tradeoff parameter can be implicitly specified by the assumed level of noise in the image sequence.
While making an assumption regarding σ may seem rather arbitrary in the abstract Gaussian mixture problem, we find it quite reasonable in the context of motion estimation, where the noise is often a property of the imaging system, not of the underlying surfaces. Furthermore, as the phase diagram in figure 4 shows, a wide range of assumed σ values will give similar answers, suggesting that an exact specification of σ is not needed. In current work we are exploring the use of weak priors on σ as well as comparing our method to those based on cross-validation [7].
Figure 7: The second phase transition. The algorithm finds three segments - branches
which are closer to the camera than the rest of the tree are segmented from it. Since the
segmentation is based solely on motion, portions of the flower bed that move consistently
with the branches are erroneously grouped with them.
Figure 8: The third phase transition. The algorithm finds four segments - the flower bed and the house are segregated.
Our analytical and simulation results show that an assumption of the noise level
in the sequence enables automatic determination of the number of moving objects
using well understood maximum likelihood techniques. Furthermore, for a given
scene, varying the assumed noise level gives rise to different perceptually meaningful
segmentations. Thus mixture models may be a first step towards a well founded
probabilistic framework for perceptual organization.
Acknowledgments
I thank D. Fleet, E. Adelson, J. Tenenbaum and G. Hinton for stimulating discussions.
Supported by a training grant from NIGMS.
References
[1] Serge Ayer and Harpreet S. Sawhney. Layered representation of motion video using
robust maximum likelihood estimation of mixture models and MDL encoding. In
Proc. Int'l Conf. Comput. Vision, pages 777-784, 1995.
[2] J. Buhmann. Data clustering and learning. In M. Arbib, editor, Handbook of Brain
Theory and Neural Networks. MIT Press, 1995.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete
data via the EM algorithm. J. R. Statist. Soc. B, 39:1-38, 1977.
[4] R. Durbin, R. Szeliski, and A. Yuille. An analysis of the elastic net approach to the
travelling salesman problem. Neural Computation, 1(3):348-358, 1989.
[5] A. Jepson and M. J. Black. Mixture models for optical flow computation. In Proc.
IEEE Conf. Comput. Vision Pattern Recog., pages 760-761, New York, June 1993.
[6] K. Rose, F. Gurewitz, and G. Fox. Statistical mechanics and phase transitions in
clustering. Physical Review Letters, 65:945-948, 1990.
[7] P. Smyth. Clustering using monte-carlo cross-validation. In KDD-96, pages 126-133,
1996.
[8] J. B. Tenenbaum and E. V. Todorov. Factorial learning by clustering features. In
G. Tesauro, D.S. Touretzky, and K. Leen, editors, Advances in Neural Information
Processing Systems 7, 1995.
[9] J. Y. A. Wang and E. H. Adelson. Representing moving images with layers.
IEEE Transactions on Image Processing Special Issue: Image Sequence Compression,
3(5):625-638, September 1994.
[10] Y. Weiss and E. H. Adelson. A unified mixture framework for motion segmentation:
incorporating spatial coherence and estimating the number of models. In Proc. IEEE
Conf. Comput. Vision Pattern Recog., pages 321-326, 1996.
389 | 1,355 | A Neural Network Model of Naive Preference
and Filial Imprinting in the Domestic Chick
Lucy E. Hadden
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093
hadden@cogsci.ucsd.edu
Abstract
Filial imprinting in domestic chicks is of interest in psychology, biology,
and computational modeling because it exemplifies simple, rapid, innately programmed learning which is biased toward learning about some
objects. Horn et al. have recently discovered a naive visual preference
for heads and necks which develops over the course of the first three
days of life. The neurological basis of this predisposition is almost entirely unknown; that of imprinting-related learning is fairly clear. This
project is the first model of the predisposition consistent with what is
known about learning in imprinting. The model develops the predisposition appropriately, learns to "approach" a training object, and replicates
one interaction between the two processes. Future work will replicate
more interactions between imprinting and the predisposition in chicks,
and analyze why the system works.
1 Background
Filial imprinting in domestic chicks is of interest in psychology, biology, and computational modeling (O'Reilly and Johnson, 1994; Bateson and Horn, 1994) because it exemplifies simple, rapid, innately programmed learning which is biased toward learning about some particular objects, and because it has a sensitive period in which learning is most efficient. Domestic chicks will imprint on almost anything (including boxes, chickens, and humans) which they see for enough time (Horn, 1985). Horn and his colleagues (Horn, 1985) have recently found a naive visual preference (predisposition) for heads and necks which develops over the course of the first three days of life. In particular, the birds prefer to approach objects shaped like heads and necks, even if they are the heads and necks of other species, including ducks and polecats (Horn, 1985). This preference interacts interestingly with filial imprinting, or learning to recognize a parent. Chicks can still learn about (and imprint
on) other objects even in the presence of this predisposition, and the predisposition can override previously learned preferences (Johnson et al., 1985), which is usually hard with imprinted chicks. These interactions are like other systems which rely on naive preferences and learning.
While the neurological basis of imprinting is understood to some extent, that of the predisposition for heads and necks is only beginning to be investigated. Imprinting learning is known to take place in IMHV (intermediate and medial portions of the hyperstriatum ventrale) (Horn, 1985), and to rely on noradrenaline (Davies et al., 1992). The predisposition's location is currently unknown, but its strength correlates with plasma testosterone levels (Horn, 1985).
1.1 Previous Models
No previous models of imprinting have incorporated the predisposition in any meaningful
way. O'Reilly & Johnson's (1994) model focussed on accounting for the sensitive period
via an interaction between hysteresis (slow decay of activation) and a Hebbian learning rule,
and ignored the predisposition. The only model which did try to include a predisposition
(Bateson and Horn, 1994) was a 3-layer Hebbian network with real-valued input vectors, and outputs which represented the strength of an "approach" behavior. Bateson and Horn (1994) found a "predisposition" in their model by comparing networks trained on input
vectors of Os and Is (High) to vectors where non-zero entries were 0.6 (Low). Untrained
networks preferred (produced a higher output value for) the high-valued input ("hen"),
and trained networks preferred the stimulus they were trained on ("box"). Of course, in a
network with identical weights, an input with higher input values will naturally excite an
output unit more than one with lower input values. Thus, this model's predisposition is
implicit in the input values, and is therefore hard to apply to chicks.
In this project, I develop a model which incorporates both the predisposition and imprinting, and which is as consistent as possible with the known neurobiology. The overall goals
of the project are to clarify how this predisposition might be implemented, and to examine more generally the kinds of representations that underlie naive preferences that interact
with and facilitate, rather than replace, learning. These particular simulations show that
the model exhibits the same qualitative behavior as chicks under three important sets of
conditions.
The rest of the paper first describes the architecture of the current model (in general terms
and then in more detail). It goes on to describe the particular simulations, and then compares the results of those simulations with the data gathered from chicks.
2 Architecture
The neural network model's architecture is shown in Figure 1. The input layer is a 6x6
pixel "retina" to which binary pictures are presented. The next layer is a feature detector.
The predisposition serves as the home of the network's naive preference, while the IMLL
(intermediate learning layer) is intended to correspond to a chick's IMHV, and is where the
network stores its learned representations. The output layer consists of two units which are
taken to represent different action patterns (following Bateson and Horn (1994)): an "approach" unit and a "withdraw" unit. These are the two chick behaviors which researchers use to assess a chick's degree of preference for a particular stimulus. The feature detector provides input to the predisposition and IMLL layers; they in turn provide input to the
output layer. Where there are connections, layers (and subparts) are fully interconnected.
The feature detector uses a linear activation function; the rest of the network has a hyperbolic tangent activation function. All activations and all connections can be either positive
[Figure 1 diagram: Input, Feature, Predisp., and IMLL layers feeding an Output layer; solid lines denote fixed weights, dotted lines modifiable weights.]
Figure 1: The network architecture sketched. All connections
are feedforward (from input toward output) only.
Figure 2: The three input patterns used by the network (box, head/neck, and cylinder). They have between 16 and 18 pixels each, and the central moment of each image is the same.
or negative; the connections are limited to ±0.9. Most of the learning takes place via a
covariant Hebb rule, because it is considered to be plausible neurally.
The lowest level of the network is a feature-detecting preprocessor. The current implementation of this network takes crude 6x6 binary pictures (examples of which can be seen in Fig. 2), and produces a 15-place floating-point vector. The output units of the feature detector are clustered into five groups of three units each; each group of three units operates
under a winner-take-all rule, in order to increase the difference between preprocessed patterns for the relevant pictures. The feature detector was trained on random inputs for 400
cycles with a learning rate of .01, and its weights were then frozen. Training on random
input was motivated by the finding that the lower levels of the visual system require some
kind of input in order to organize; Miller et al. (1989) suggest that, at least in cats, the
random firing of retinal neurons is sufficient.
The predisposition layer was trained via backprop using the outputs of the feature detector
as its inputs. The pattern produced by the "head-neck" picture in the feature detector was
trained to excite the "approach" output unit and inhibit the "withdraw" unit; other patterns
were trained to a neutral value on both output units. These weights were stored, and treated
as fixed in the larger network. (In fact, these weights were scaled down by a constant
factor (.8) before being used in the larger network.) Since this is a naive preference, or
predisposition, these weights are assumed to be fixed evolutionarily. Thus, the method of
setting them is irrelevant; they could also have been found by a genetic algorithm.
The IMLL layer is a winner-take-all network of three units. Its connections with the feature
detector's outputs are learned by a Hebb rule with learning rate .01 and a weight decay (to
0) term of .0005. For these simulations, its initial weights were fixed by hand, in a pattern
which insured that each IMLL unit received a substantially different value for the same
input pattern. This pattern of initial weights also increased the likelihood that the three
patterns of interest in the simulations maximally affected different IMLL units.
As previously mentioned, the output layer consists of an "approach" and a "withdraw"
unit. It also learns via a Hebb rule, with the same learning rate and decay term as IMLL.
Its connections with IMLL are learned; those with the predisposition layer are fixed. Initial
weights between IMLL and the output layer are random, and vary from -0.3 to 0.3. The
bias to the approach unit is 0; that to the withdraw unit is 0.05.
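A sketch of the forward pass and the covariant Hebbian update described above follows. The paper does not give these equations explicitly, so the particular centering used in the covariance rule, the winner-take-all implementation, the stand-in predisposition weights, and the random IMLL initialization (the paper fixes those by hand) are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUT, N_FEAT, N_IMLL, N_OUT = 36, 15, 3, 2
LRATE, DECAY, WMAX = 0.01, 0.0005, 0.9

W_feat = rng.uniform(-0.3, 0.3, (N_FEAT, N_INPUT))  # trained on noise, then frozen
W_pred = 0.8 * rng.uniform(-0.3, 0.3, (N_OUT, N_FEAT))  # stand-in for the fixed,
                                                         # backprop-trained, scaled weights
W_imll = rng.uniform(-0.3, 0.3, (N_IMLL, N_FEAT))    # learned (Hebb)
W_out  = rng.uniform(-0.3, 0.3, (N_OUT, N_IMLL))     # learned (Hebb)
bias_out = np.array([0.0, 0.05])                     # approach, withdraw

def winner_take_all(a, group):
    out = np.zeros_like(a)
    for g in range(0, len(a), group):
        i = g + int(np.argmax(a[g:g + group]))
        out[i] = a[i]
    return out

def forward(image, use_predisposition):
    feat = winner_take_all(W_feat @ image, 3)            # linear units, WTA in 5 triples
    imll = winner_take_all(np.tanh(W_imll @ feat), N_IMLL)
    drive = W_out @ imll + bias_out
    if use_predisposition:                               # "experience" signal received
        drive = drive + W_pred @ feat
    return feat, imll, np.tanh(drive)

def hebb_update(W, pre, post):
    # Covariance-style Hebb rule (one assumed variant, centering over units),
    # with decay toward zero and weights clipped to +/-0.9.
    dW = LRATE * np.outer(post - post.mean(), pre - pre.mean())
    return np.clip(W * (1.0 - DECAY) + dW, -WMAX, WMAX)
```

A training step would call forward on the current picture and then update W_imll with (feat, imll) and W_out with (imll, output) via hebb_update, leaving W_feat and W_pred fixed.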
2.1 Training
In the animal experiments on which this model is based, chicks are kept in the dark (and in
isolation) except for training and testing periods. Training periods involve visual exposure
to an object (usually a red box); testing involves allowing the chick to choose between
approaching the training object and some other object (usually either a stuffed hen or a blue
cylinder) (Horn, 1985). The percentage of time the chick approaches the training object (or other object of interest) is its preference score for that object (Horn, 1985). A preference
score of 50% indicates indifference; scores above 50% indicate a preference for the target
object, and those below indicate a preference for the other object. For the purposes of
modeling, the most relevant information is the change (particularly the direction of change)
in the preference score between two conditions.
Following this approach, the simulations use three preset pictures. One, a box, is the only
one for which weights are changed; it is the training pattern. The other two pictures are
test patterns; when they are shown, the network's weights are not altered. One of these
test patterns is the head/neck picture on which the predisposition network was trained; the
other is a cylinder. As with chicks, the behavioral measure is the preference score. For the
network, this is calculated as pref. score = 100 × a_t / (a_t + a_c), where a_t is the activation of the approach unit when the network is presented with the training (or target) picture, and a_c is the activation of the approach unit given the comparison picture. It is assumed that both values are positive; otherwise, the approach unit is taken to be off.
In these simulations, the network gets the training pattern (a "box") during training periods,
and random input patterns (simulating the random firing of retinal neurons) otherwise.
The onset of the predisposition is modeled by allowing the predisposition layer to help
activate the outputs only after the network receives an "experience" signal. This signal
models the sharp rise in plasma testosterone levels in dark-reared chicks following any
sort of handling (Horn, 1985). Once the network has received the "experience" signal, the
weights are modified for random input as well as for the box picture. Until then, weights
are modified only for the box picture. Real chicks can be tested only once because of the
danger of one-trial learning, so all chick data compares the behavior of groups of chicks
under different conditions. The network's weights can be kept constant during testing, and
the same network's responses can be measured before and after it is exposed to the relevant
condition. All simulations were 100 iterations long.
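The behavioral measure then reduces to a few lines. This is again a sketch; forward is the hypothetical helper from the previous fragment, and the treatment of the "both units off" case is an assumption since the paper does not define the score there.

```python
def preference_score(net_forward, target_img, comparison_img, use_pred):
    # pref = 100 * a_t / (a_t + a_c); negative approach activations are treated as "off".
    a_t = max(net_forward(target_img, use_pred)[2][0], 0.0)
    a_c = max(net_forward(comparison_img, use_pred)[2][0], 0.0)
    return 50.0 if a_t + a_c == 0 else 100.0 * a_t / (a_t + a_c)

# Example: score for the box against the head/neck picture after training,
# with the predisposition ("experience" signal) switched on.
# score = preference_score(forward, box_image, head_neck_image, use_pred=True)
```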
3 Simulations
The simulations using this model currently address three phenomena which have been studied in chicks. First, in simple imprinting chicks learn to recognize a training object, and
usually withdraw from other objects once they have imprinted on the training object. This
simulation requires simply exposing the network to the training object and measuring its
responses. The model "imprints" on the box if its preference for the box relative to both
the head/neck and cylinder pictures increases during training. Ideally, the value of the approach unit for the cylinder and box will also decrease, to indicate the network's tendency
to withdraw from "unfamiliar" stimuli.
Second, chicks with only the most minimal experience (such as being placed in a dark
running wheel) develop a preference for a stuffed fowl over other stimuli. That is, they will
approach the fowl significantly more than another object (Horn, 1985). This is modeled
by turning on the "predisposition" and allowing the network to develop with no training
whatsoever. The network mimics chick behavior if the preference score for the head/neck
picture increases relative to the box and the cylinder pictures.
Third, after the predisposition has been allowed to develop, training on a red box decreases
Figure 3: A summary of the results of the model. All bars are differences in preference
scores between conditions for chicks (open bars) and the model (striped bars). a: Imprinting
(change in preference for training object): trained - untrained. b: Predisposition (change
in preference for fowl): experience - no experience (predisposition - no predisposition).
c: Change in preference for fowl vs. box: trained - predisposition only. d: Change in
preference for box vs. cylinder: trained - predisposition only. (Chick data adapted from
(Horn, 1985; Bolhuis et al., 1989).)
a chick's preference for the fowl relative to the box. It also increases the chick's preference
for the box relative to a blue cylinder or other unfamiliar object (Bolhuis et al., 1989). In the model, the predisposition layer is allowed to activate the output layer for 20 iterations before training starts. Then the network is exposed to the box for 25 iterations. If its
preference score for the fowl decreases after training, the network has shown the same
pattern as chicks.
4 Results and Discussion
A summary of the results is shown in Figure 3. Since these simulations try to capture
the qualitative behavior of chicks, all results are shown as the change in preference scores
between two conditions. For the chick data, the changes are approximate, and calculated
from the means only. The network data is the average of the results for 10 networks, each
with a different random seed (and therefore initial weight patterns). For the three conditions
tested, the model's preference scores moved in the same direction as the chicks.
The interaction between imprinting and the predisposition cannot be investigated computationally unless the model displays both behaviors. These baseline behaviors are shown
in Fig. 3-a and b. Trained chicks prefer the training object more after training than before
(Horn, 1985); so does the model (Fig. 3-a). In the case of the predisposition (Fig. 3-b),
the bird data is a difference between preferences for a stuffed fowl in chicks which had
developed the predisposition (and therefore preferred the fowl) and those which had not
(and therefore did not). Similarly, the network preferred the head/neck picture more after
the predisposition had been allowed to develop than at the beginning of the simulation.
The interactions between imprinting and the predisposition are the real measures of the
model's success. In Fig. 3-c, the predisposition has been allowed to develop before training
begins. Trained birds with the predisposition were compared with untrained birds also
with the predisposition (trained - untrained). Trained birds preferred the stuffed fowl less
than their untrained counterparts (Bolhuis et al., 1989). The network's preference score
just before training is subtracted from its score after training. As with the real chicks,
the network prefers the head/neck picture less after training than it did before. Fig. 3-d
shows that, as with chicks, the network's preference for the box increased relative to that
for the cylinder during the course of training. For these three conditions, then, the model is
qualitatively a success.
4.1 Discussion of Basic Results
The point of network models is that their behavior can be analyzed and understood more
easily than animals'. The predisposition's behavior is quite simple: to the extent that a
random input pattern is similar to the head/neck picture, it activates the predisposition
layer, and through it the approach unit. Thus the winning unit in IMLL is correlated with
the approach unit, and the connections are strengthened by the Hebb rule. Imprinting is
similar, but only goes through the IMLL layer, so the approach unit may not be on. In
both cases, the weights from the other units decay slowly during training, so that usually
the other input patterns fail to excite the approach unit, and even excite the withdraw unit
slightly because of its small positive bias. Only one process is required to obtain both the
predisposition and imprinting, since both build representations in IMLL.
The interaction between imprinting and the predisposition first increases the preference for
the predisposition, and then alters the weights affecting the reaction to the box picture. The
training phase acts just like the ordinary imprinting phase, so that preference for both the
head/neck and the cylinder decrease during training.
Some exploration of the relevant parameters suggests that the predisposition's behavior
does not depend simply on its strength. Because IMLL is a winner-take-all layer, changing the predisposition's strength can, by moving the winning node around during training,
cause previous learning to be lost. Such motion obviously has a large effect on the outcome
of the simulation.
4.2 Temporal aspects of imprinting
The primary weakness of the model is its failure to account for some critical temporal
aspects of imprinting. It is premature to draw many conclusions about chicks from this
model, because it fails to account for either long-term sensitive periods or the short-term
time course of the predisposition.
Neither the predisposition nor imprinting in the model have yet been shown to have sensitive periods, though both do in real birds (Horn, 1985; Johnson et al., 1985). Preliminary results, however, suggest that imprinting in the networks does have a sensitive period,
presumably because of weight saturation during learning. It is not yet clear whether the
predisposition's sensitive period will require an exogenous process.
Second, the model does not yet show the appropriate time course for the development of
the predisposition. In chicks, the predisposition develops fairly slowly over the course of
five or so hours (Johnson et al., 1985). In chicks for which the first experience is training,
the predisposition's effect is to increase the bird's preference for the fowl regardless of
training object, over the course of the hours following training (Johnson et aI., 1985). In the
model, the predisposition appears quickly and, because of weight decay and other factors,
the strength of the predisposition slowly decreases over the iterations following training,
rather than increasing. Increasing the learning rate of IMLL over time could solve this
problem. Once it exhibits time course behaviors, especially if no further processes need to
be postulated, the model will facilitate interesting analyses of how a simple set of processes
and assumptions can interact to produce highly complicated behavior.
5 Conclusion
This model displays some important interactions between learning and a predisposition in
filial imprinting. It is the first which accounts for the predisposition at all. Other models
of imprinting have either ignored the issue or built in the predisposition by hand. In this
model, the interaction between two simple systems, a fixed predisposition and a learned
approach system, gives rise to one important more complex behavior. In addition, the
two representations of the head/neck predisposition can account for lesion studies in which
lesioning IMLL removes a chick's memory of its training object or prevents it from learning
anything new about specific objects, but leaves the preference for heads and necks intact
(Horn, 1985). Clearly, if the IMLL layer is missing, the network loses any information it
might have learned about training objects, and is unable to learn anything new from future
training. The predisposition, however, is still intact and able to influence the network's
behavior.
The nature of predispositions like chicks' naive preference for heads and necks, and how
they interact with learning, are interesting in a number of fields. Morton and Johnson
(1991) have already explored the similarities between chicks' preferences for heads and
necks and human infants' preferences for human faces. Such naive preferences are also
important in any discussion of innate information, and the number of processes needed
to handle innate and learned information. Although this model and its successors cannot
directly address these issues, I hope that their explication of how fairly general predispositions can influence learning will improve understanding of some of the mechanisms
underlying them.
Acknowledgements
This work was supported by a fellowship from the National Physical Sciences Consortium.
References
P. Bateson and G. Horn. Imprinting and recognition memory: A neural net model. Animal
Behaviour, 48(3):695-715, 1994.
J. J. Bolhuis, M. H. Johnson, and G. Horn. Interacting mechanisms during the formation
of filial preferences: The development of a predisposition does not prevent learning.
Journal of Experimental Psychology: Animal Behavior Processes, 15(4):376-382, 1989.
D. C. Davies, M. H. Johnson, and G. Hom. The effect of the neurotoxin dsp4 on the
development of a predisposition in the domestic chick. Developmental Psychobiology,
25(2):251-259, 1992.
G. Horn. Memory, Imprinting, and the Brain: An inquiry into mechanisms. Clarendon
Press, Oxford, 1985.
M. H. Johnson, J. J. Bolhuis, and G. Horn. Interaction between acquired preferences
and developing predispositions during imprinting. Animal Behaviour, 33(3):1000-1006,
1985.
K. Miller, J. Keller, and M. Stryker. Ocular dominance column development: analysis and
simulation. Science, 245:605-615, 1989.
J. Morton and M. H. Johnson. Conspec and conlern: a two-process theory of infant face
recognition. Psychological Review, 98(2): 164-181, 1991.
R. C. O'Reilly and M. H. Johnson. Object recognition and sensitive periods: A computational analysis of visual imprinting. Neural Computation, 6(3):357-389,1994.
390 | 1,356 | On Parallel Versus Serial Processing:
A Computational Study of Visual Search
Eyal Cohen
Department of Psychology
Tel-Aviv University Tel Aviv 69978, Israel
eyalc@devil. tau .ac .il
Eytan Ruppin
Departments of Computer Science & Physiology
Tel-Aviv University Tel Aviv 69978, Israel
ruppin@math.tau .ac.il
Abstract
A novel neural network model of pre-attentive processing in visual-search tasks is presented. Using displays of line orientations taken
from Wolfe's experiments [1992], we study the hypothesis that the
distinction between parallel versus serial processes arises from the
availability of global information in the internal representations of
the visual scene. The model operates in two phases. First, the
visual displays are compressed via principal-component-analysis.
Second, the compressed data is processed by a target detector module in order to identify the existence of a target in the display. Our
main finding is that targets in displays which were found experimentally to be processed in parallel can be detected by the system, while targets in experimentally-serial displays cannot . This
fundamental difference is explained via variance analysis of the
compressed representations, providing a numerical criterion distinguishing parallel from serial displays. Our model yields a mapping
of response-time slopes that is similar to Duncan and Humphreys's
"search surface" [1989], providing an explicit formulation of their
intuitive notion of feature similarity. It presents a neural realization of the processing that may underlie the classical metaphorical
explanations of visual search.
1 Introduction
This paper presents a neural-model of pre-attentive visual processing. The model
explains why certain displays can be processed very fast, "in parallel" , while others
require slower, "serial" processing, in subsequent attentional systems. Our approach
stems from the observation that the visual environment is overflowing with diverse
information, but the biological information-processing systems analyzing it have
a limited capacity [1]. This apparent mismatch suggests that data compression
should be performed at an early stage of perception, and that via an accompanying process of dimension reduction, only a few essential features of the visual
display should be retained. We propose that only parallel displays incorporate
global features that enable fast target detection, and hence they can be processed
pre-attentively, with all items (target and dis tractors) examined at once. On the
other hand, in serial displays' representations, global information is obscure and
target detection requires a serial, attentional scan of local features across the display. Using principal-component-analysis (PCA), our main goal is to demonstrate
that neural systems employing compressed, dimensionally reduced representations
of the visual information can successfully process only parallel displays and not serial ones. The source of this difference will be explained via variance analysis of the
displays' projections on the principal axes.
The modeling of visual attention in cognitive psychology involves the use of
metaphors, e.g., Posner's beam of attention [2]. A visual attention system of a
surviving organism must supply fast answers to burning issues such as detecting
a target in the visual field and characterizing its primary features. An attentional
system employing a constant-speed beam of attention [3] probably cannot perform
such tasks fast enough and a pre-attentive system is required. Treisman's feature
integration theory (FIT) describes such a system [4]. According to FIT, features
of separate dimensions (shape, color, orientation) are first coded pre-attentively in
a locations map and in separate feature maps, each map representing the values of
a particular dimension. Then, in the second stage, attention "glues" the features
together conjoining them into objects at their specified locations. This hypothesis
was supported using the visual-search paradigm [4], in which subjects are asked
to detect a target within an array of distractors, which differ on given physical dimensions such as color, shape or orientation. As long as the target is significantly
different from the distractors in one dimension, the reaction time (RT) is short and
shows almost no dependence on the number of distractors (low RT slope). This
result suggests that in this case the target is detected pre-attentively, in parallel.
However, if the target and distractors are similar, or the target specifications are
more complex, reaction time grows considerably as a function of the number of
distractors [5, 6], suggesting that the displays' items are scanned serially using an
attentional process.
FIT and other related cognitive models of visual search are formulated on the conceptual level and do not offer a detailed description of the processes involved in
transforming the visual scene from an ordered set of data points into given values
in specified feature maps. This paper presents a novel computational explanation
of the source of the distinction between parallel and serial processing, progressing
from general metaphorical terms to a neural network realization. Interestingly, we
also come out with a computational interpretation of some of these metaphorical
terms, such as feature similarity.
2 The Model
We focus our study on visual-search experiments of line orientations performed by
Wolfe et. al. [7], using three set-sizes composed of 4, 8 and 12 items. The number of
items equals the number of dis tractors + target in target displays, and in non-target
displays the target was replaced by another distractor, keeping a constant set-size.
Five experimental conditions were simulated: (A) - a 20 degrees tilted target among
vertical distractors (homogeneous background). (B) - a vertical target among 20
degrees tilted distractors (homogeneous background). (C) - a vertical target among
heterogeneous background (a mixture of lines with ±20, ±40, ±60, ±80 degrees
orientations). (E) - a vertical target among two flanking distractor orientations (at
±20 degrees), and (G) - a vertical target among two flanking distractor orientations
(±40 degrees). The response times (RT) as a function of the set-size measured by
Wolfe et al. [7] show that type A, B and G displays are scanned in a parallel
manner (1.2, 1.8,4.8 msec/item for the RT slopes), while type C and E displays are
scanned serially (19.7,17.5 msec/item). The input displays of our system were prepared following Wolfe's prescription: Nine images of the basic line orientations were
produced as nine matrices of gray-level values. Displays for the various conditions
of Wolfe's experiments were produced by randomly assigning these matrices into
a 4x4 array, yielding 128x100 display-matrices that were transformed into 12800
display-vectors. A total number of 2400 displays were produced in 30 groups (80
displays in each group): 5 conditions (A, B, C, E, G ) x target/non-target x 3
set-sizes (4,8, 12).
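A rough Python sketch of this display construction is given below. The gray-level line templates and the cell geometry are assumptions; only the overall recipe (nine orientation templates, a random 4x4 arrangement, and flattening a 128x100 image into a 12800-dimensional vector) follows the text.

    import numpy as np

    rng = np.random.default_rng(1)

    ANGLES = [0, 20, -20, 40, -40, 60, -60, 80, -80]      # the nine basic orientations

    def line_template(angle_deg, size=(32, 25)):
        """A crude gray-level image of a line at the given orientation (illustrative only)."""
        h, w = size
        img = np.zeros(size)
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        t = np.linspace(-1.0, 1.0, 200)
        theta = np.deg2rad(angle_deg)
        ys = np.clip(np.round(cy + t * cy * np.cos(theta)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + t * cx * np.sin(theta)).astype(int), 0, w - 1)
        img[ys, xs] = 1.0
        return img

    TEMPLATES = {a: line_template(a) for a in ANGLES}

    def make_display(target_angle, distractor_angles, set_size):
        """Randomly place the items in a 4x4 array and flatten to a 12800-dim vector."""
        cells = [] if target_angle is None else [TEMPLATES[target_angle]]
        while len(cells) < set_size:
            cells.append(TEMPLATES[int(rng.choice(distractor_angles))])
        cells += [np.zeros((32, 25))] * (16 - len(cells))   # unused positions stay blank
        order = rng.permutation(16)
        cells = [cells[i] for i in order]
        rows = [np.hstack(cells[4 * r:4 * r + 4]) for r in range(4)]
        return np.vstack(rows).ravel()                      # 128 x 100 -> 12800 values

    # condition A, 8 items: one 20-degree target among vertical (0-degree) distractors
    vec = make_display(20, [0], set_size=8)
    print(vec.shape)                                        # (12800,)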
Our model is composed of two neural network modules connected in sequence as
illustrated in Figure 1: a PCA module which compresses the visual data into a set
of principal axes, and a Target Detector (TD) module. The latter module uses the
compressed data obtained by the former module to detect a target within an array
of distractors. The system is presented with line-orientation displays as described
above.
[Figure 1 diagram: the display feeds an input layer of 12,800 units; a PCA data-compression module produces the compressed representation; the target-detector (TD) module has a 12-unit intermediate layer and a single output unit (target = 1, no-target = -1).]
Figure 1: General architecture of the model
For the PCA module we use the neural network proposed by Sanger, with the
connections' values updated in accordance with his Generalized Hebbian Algorithm
(GHA) [8]. The outputs of the trained system are the projections of the displayvectors along the first few principal axes, ordered with respect to their eigenvalue
magnitudes. Compressing the data is achieved by choosing outputs from the first
few neurons (maximal variance and minimal information loss). Target detection in
our system is performed by a feed-forward (FF) 3-layered network, trained via a
standard back-propagation algorithm in a supervised-learning manner. The input
layer of the FF network is composed of the first eight output neurons of the PCA
module. The transfer function used in the intermediate and output layers is the
hyperbolic tangent function.
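A compact sketch of Sanger's GHA update, as such a PCA module might be implemented, follows; the learning rate, number of passes and the toy data are assumptions rather than values from the simulations.

    import numpy as np

    def gha(X, n_components=8, eta=1e-3, epochs=50, seed=0):
        """Sanger's Generalized Hebbian Algorithm: the rows of W converge to the
        leading principal components of the data X (shape n_samples x n_dims)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(0.0, 0.01, (n_components, X.shape[1]))
        for _ in range(epochs):
            for x in X:
                y = W @ x
                # delta W = eta * (y x' - lower_triangular(y y') W)
                W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    X = np.random.default_rng(2).normal(size=(500, 50))   # toy data
    Xc = X - X.mean(axis=0)
    W = gha(Xc)
    compressed = Xc @ W.T          # projections fed to the target-detector network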
3 Results
3.1 Target Detection
The performance of the system was examined in two simulation experiments. In
the first, the PCA module was trained only with "parallel" task displays, and in the
second, only with "serial" task displays. There is an inherent difference in the ability
of the model to detect targets in parallel versus serial displays . In parallel task
conditions (A, B, G) the target detector module learns the task after a comparatively
small number (800 to 2000) of epochs, reaching performance level of almost 100%.
However, the target detector module is not capable of learning to detect a target
in serial displays (C, E conditions). Interestingly, these results hold (1) whether
the preceding PCA module was trained to perform data compression using parallel
task displays or serial ones, (2) whether the target detector was a linear simple
perceptron, or the more powerful, non-linear network depicted in Figure 1, and (3)
whether the full set of 144 principal axes (with non-zero eigenvalues) was used.
3.2 Information Span
To analyze the differences between parallel and serial tasks we examined the eigenvalues obtained from the PCA of the training-set displays. The eigenvalues of
condition B (parallel) displays in 4 and 12 set-sizes and of condition C (serial-task)
displays are presented in Figure 2. Each training set contains a mixture of target
and non-target displays.
[Figure 2 plots: eigenvalue versus principal-axis number for (a) the parallel task and (b) the serial task, each shown for 4-item and 12-item displays.]
Figure 2: Eigenvalues spectrum of displays with different set-sizes, for parallel and
serial tasks. Due to the sparseness of the displays (a few black lines on white
background), it takes only 31 principal axes to describe the parallel training-set in
full (see fig 2a. Note that the remaining axes have zero eigenvalues, indicating that
they contain no additional information.), and 144 axes for the serial set (only the
first 50 axes are shown in fig 2b).
As evident, the eigenvalues distributions of the two display types are fundamentally
different: in the parallel task, most of the eigenvalues "mass" is concentrated in the
first few (15) principal axes, testifying that indeed, the dimension of the parallel
displays space is quite confined. But for the serial task, the eigenvalues are distributed almost uniformly over 144 axes. This inherent difference is independent of
set-size: 4 and 12-item displays have practically the same eigenvalue spectra.
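The spectra themselves are inexpensive to compute from the display vectors; a small illustrative sketch (assuming the displays of one condition are stacked into a single array) is:

    import numpy as np

    def eigenvalue_spectrum(display_vectors):
        """Eigenvalues of the display covariance matrix, largest first.
        display_vectors: array of shape (n_displays, 12800)."""
        X = display_vectors - display_vectors.mean(axis=0)
        s = np.linalg.svd(X, compute_uv=False)      # singular values of the data
        return (s ** 2) / (len(X) - 1)

    # A sharply decaying spectrum (a few large eigenvalues) corresponds to the
    # parallel-task displays; a nearly flat spectrum corresponds to the serial ones.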
3.3 Variance Analysis
The target detector inputs are the projections of the display-vectors along the first
few principal axes. Thus, some insight to the source of the difference between
parallel and serial tasks can be gained performing a variance analysis on these
projections. The five different task conditions were analyzed separately, taking a
group of 85 target displays and a group of 85 non-target displays for each set-size.
Two types of variances were calculated for the projections on the 5th principal axis:
The "within groups" variance, which is a measure of the statistical noise within
each group of 85 displays, and the "between groups" variance, which measures the
separation between target and non-target groups of displays for each set-size. These
variances were averaged for each task (condition), over all set-sizes. The resulting
ratios Q of within-groups to between-groups standard deviations are: QA = 0.0259,
QB = 0.0587, and QG = 0.0114 for parallel displays (A, B, G), and QE = 0.2125,
QC = 0.771 for serial ones (E, C).
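The ratio Q is straightforward to compute from the projections. The sketch below is one plausible reading of the within/between definition given above (an assumed form; the averaging over the three set-sizes of each condition is not reproduced):

    import numpy as np

    def q_ratio(target_proj, nontarget_proj):
        """Within-groups over between-groups standard deviation for the projections
        of target and non-target displays on a single principal axis."""
        within_var = 0.5 * (np.var(target_proj) + np.var(nontarget_proj))
        grand_mean = np.mean(np.concatenate([target_proj, nontarget_proj]))
        between_var = 0.5 * ((np.mean(target_proj) - grand_mean) ** 2 +
                             (np.mean(nontarget_proj) - grand_mean) ** 2)
        return np.sqrt(within_var / between_var)

    # Q << 1 predicts parallel (pre-attentive) search; Q close to 1 predicts serial search.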
As evident, for parallel task displays the Q values are smaller by an order of magnitude compared with the serial displays, indicating a better separation between
target and non-target displays in parallel tasks. Moreover, using Q as a criterion
for parallel/serial distinction one can predict that displays with Q << 1 will be
processed in parallel, and serially otherwise, in accordance with the experimental
response time (RT) slopes measured by Wolfe et al. [7]. This difference is further
demonstrated in Figure 3, depicting projections of display-vectors on the sub-space
spanned by the 5, 6 and 7th principal axes. Clearly, for the parallel task (condition
B), the PCA representations of the target-displays (plus signs) are separated from
non-target representations (circles), while for serial displays (condition C) there is
no such separation. It should be emphasized that there is no other principal axis
along which such a separation is manifested for serial displays.
1"1
110"
-11106
-1
...
un
o
o
_1157
.11615
o
+
o
::::~
+ ++
.
. l
-11M
? 0
~
: . , Hill
.+
+0
o
-'.1'
-11025
-'181
_1163
'.II
o
..
_1182
'10
,
.,0'
0
o
"
'07
,.II
? 10~
Til
7.&12
"'AXIS
'.7
1.1186
INIIS
11166
71hAXIS
,.
18846
,.
, ow
. ..
,
'~AXIS
.
o
0-
o
1114
1113
1.1e2
,.,
,.
no AXIS
1.1'11
'.71
1 iTT
.,.
1.178
1.175
1 114
Figure 3: Projections of display-vectors on the sub-space spanned by the 5, 6 and
7th principal axes. Plus signs and circles denote target and non-target displayvectors respectively, (a) for a parallel task (condition B), and (b) for a serial task
(condition C). Set-size is 8 items.
While Treisman and her co-workers view the distinction between parallel and serial tasks as a fundamental one, Duncan and Humphreys [5] claim that there is
no sharp distinction between them, and that search efficiency varies continuously
across tasks and conditions. The determining factors according to Duncan and
Humphreys are the similarities between the target and the non-targets (T-N similarities) and the similarities between the non-targets themselves (N-N similarity).
Displays with homogeneous background (high N-N similarity) and a target which is
significantly different from the distractors (low T-N similarity) will exhibit parallel,
low RT slopes, and vice versa. This claim was illustrated by them using a qualitative
"search surface" description as shown in figure 4a. Based on results from our variance analysis, we can now examine this claim quantitatively: We have constructed
a "search surface", using actual numerical data of RT slopes from Wolfe's experiments, replacing the N-N similarity axis by its mathematical manifestation, the
within-groups standard deviation, and N-T similarity by between-groups standard
deviation 1. The resulting surface (Figure 4b) is qualitatively similar to Duncan and
Humphreys's. This interesting result testifies that the PCA representation succeeds
in producing a viable realization of such intuitive terms as inputs similarity, and is
compatible with the way we perceive the world in visual search tasks.
Figure 4: RT rates versus: (a) Input similarities (the search surface, reprinted from
Duncan and Humphreys, 1989). (b) Standard deviations (within and between) of
the PCA variance analysis. The asterisks denote Wolfe's experimental data.
4 Summary
In this work we present a two-component neural network model of pre-attentional
visual processing. The model has been applied to the visual search paradigm performed by Wolfe et. al. Our main finding is that when global-feature compression
is applied to visual displays, there is an inherent difference between the representations of serial and parallel-task displays: The neural network studied in this paper
has succeeded in detecting a target among distractors only for displays that were
experimentally found to be processed in parallel. Based on the outcome of the
1 In general, each principal axis contains information from different features, which may
mask the information concerning the existence of a target. Hence, the first principal axis
may not be the best choice for a discrimination task. In our simulations, the 5th axis
for example, was primarily dedicated to target information, and was hence used for the
variance analysis (obviously, the neural network uses information from all the first eight
principal axes).
variance analysis performed on the PCA representations of the visual displays, we
present a quantitative criterion enabling one to distinguish between serial and parallel displays. Furthermore, the resulting 'search-surface' generated by the PCA
components is in close correspondence with the metaphorical description of Duncan
and Humphreys.
The network demonstrates an interesting generalization ability: Naturally, it can
learn to detect a target in parallel displays from examples of such displays. However,
it can also learn to perform this task from examples of serial displays only! On the
other hand, we find that it is impossible to learn serial tasks, irrespective of the
combination of parallel and serial displays that are presented to the network during
the training phase. This generalization ability is manifested not only during the
learning phase, but also during the performance phase; displays belonging to the
same task have a similar eigenvalue spectrum, irrespective of the actual set-size of
the displays, and this result holds true for parallel as well as for serial displays.
The role of PCA in perception was previously investigated by Cottrell [9], designing
a neural network which performed tasks as face identification and gender discrimination. One might argue that PCA, being a global component analysis is not
compatible with the existence of local feature detectors (e.g. orientation detectors)
in the cortex. Our work is in line with recent proposals [10J that there exist two
pathways for sensory input processing: A fast sub-cortical pathway that contains
limited information, and a slow cortical pathway which is capable of providing richer
representations of the stimuli. Given this assumption this paper has presented the
first neural realization of the processing that may underline the classical metaphorical explanations involved in visual search.
References
[1] J. K. Tsotsos. Analyzing vision at the complexity level. Behavioral and Brain
Sciences, 13:423-469, 1990.
[2J M. I. Posner, C. R. Snyder, and B. J. Davidson. Attention and the detection
of signals. Journal of Experimental Psychology: General, 109:160-174, 1980.
[3J Y. Tsal. Movement of attention across the visual field. Journal of Experimental
Psychology: Human Perception and Performance, 9:523-530, 1983.
[4] A. Treisman and G. Gelade. A feature integration theory of attention. Cognitive
Psychology, 12:97-136,1980.
[5] J. Duncan and G. Humphreys. Visual search and stimulus similarity. Psychological Review, 96:433-458, 1989.
[6] A. Treisman and S. Gormican. Feature analysis in early vision: Evidence from
search assymetries. Psychological Review, 95:15-48, 1988.
[7] J . M. Wolfe, S. R. Friedman-Hill, M. I. Stewart, and K. M. O'Connell. The
role of categorization in visual search for orientation. Journal of Experimental
Psychology: Human Perception and Performance, 18:34-49, 1992.
[8] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Network, 2:459-473, 1989.
[9] G. W. Cottrell. Extracting features from faces using compression networks:
Face, identity, emotion and gender recognition using holons. Proceedings of the
1990 Connectionist Models Summer School, pages 328-337, 1990.
[10] J. L. Armony, D. Servan-Schreiber, J . D. Cohen, and J. E. LeDoux. Computational modeling of emotion: exploration through the anatomy and physiology
of fear conditioning. Trends in Cognitive Sciences, 1(1):28-34, 1997.
391 | 1,357 | The Canonical Distortion Measure in Feature
Space and 1-NN Classification
Jonathan Baxter*and Peter Bartlett
Department of Systems Engineering
Australian National University
Canberra 0200, Australia
{jon,bartlett}@syseng.anu.edu.au
Abstract
We prove that the Canonical Distortion Measure (CDM) [2, 3] is the
optimal distance measure to use for 1 nearest-neighbour (1-NN) classification, and show that it reduces to squared Euclidean distance in feature
space for function classes that can be expressed as linear combinations
of a fixed set of features. PAC-like bounds are given on the sample complexity required to learn the CDM. An experiment is presented in
which a neural network CDM was learnt for a Japanese OCR environment and then used to do 1-NN classification.
1 INTRODUCTION
Let X be an input space, P a distribution on X, F a class of functions mapping X into Y
(called the "environment"), Q a distribution on F and (J' a function (J': Y X Y -t [0 , ."1].
The Canonical Distortion Measure (CDM) between two inputs x, Xl is defined to be:
p(x, Xl) =
L
(J'(f(x) , f(x l )) dQ(f).
(1)
Throughout this paper we will be considering real-valued functions and squared loss, so
Y = ~ and (J'(y, yl) := (y - yl)2. The CDM was introduced in [2, 3], where it was
analysed primarily from a vector quantization perspective. In particular, the CDM was
proved to be the optimal distortion measure to use in vector quantization, in the sense of
producing the best approximations to the functions in the environment F. In [3] some
experimental results were also presented (in a toy domain) showing how the CDM may be
learnt.
The purpose of this paper is to investigate the utility of the CDM as a classification tool.
In Section 2 we show how the CDM for a class of functions possessing a common feature
*The first author was supported in part by EPSRC grants #K70366 and #K70373
set reduces, via a change of variables, to squared Euclidean distance in feature space. A
lemma is then given showing that the CDM is the optimal distance measure to use for 1nearest-neighbour (l-NN) classification. Thus, for functions possessing a common feature
set, optimall-NN classification is achieved by using squared Euclidean distance in feature
space.
In general the CDM will be unknown, so in Section 4 we present a technique for learning
the CDM by minimizing squared loss, and give PAC-like bounds on the sample-size required for good generalisation. In Section 5 we present some experimental results in which
a set of features was learnt for a machine-printed Japanese OCR environment, and then
squared Euclidean distance was used to do I-NN classification in feature space. The experiments provide strong empirical support for the theoretical results in a difficult real-world
application.
2 THE CDM IN FEATURE SPACE
Suppose each f ∈ F can be expressed as a linear combination of a fixed set of features
Φ := (φ_1, ..., φ_k). That is, for all f ∈ F, there exists w := (w_1, ..., w_k) such that
f = w · Φ = Σ_{i=1}^k w_i φ_i. In this case the distribution Q over the environment F is a
distribution over the weight vectors w. Measuring the distance between function values by
σ(y, y') := (y − y')², the CDM (1) becomes:

    ρ(x, x') = ∫_{ℝ^k} [w · Φ(x) − w · Φ(x')]² dQ(w)
             = (Φ(x) − Φ(x')) W (Φ(x) − Φ(x'))'                             (2)

where W = ∫_w w'w dQ(w) is a k × k matrix. Making the change of variable Φ → Φ√W, we
have ρ(x, x') = ||Φ(x) − Φ(x')||². Thus, the assumption that the functions in the environment
can be expressed as linear combinations of a fixed set of features means that the CDM is
simply squared Euclidean distance in a feature space related to the original
by a linear transformation.
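As a quick numerical illustration of this reduction (not from the paper; the feature set and the choice of Q below are arbitrary assumptions), one can compare a Monte Carlo estimate of the CDM with the corresponding feature-space distance:

    import numpy as np

    rng = np.random.default_rng(0)

    def phi(x):
        """An arbitrary fixed feature set (k = 5); purely illustrative."""
        return np.array([x, x ** 2, np.sin(x), np.cos(x), 1.0])

    # Q puts an i.i.d. standard normal distribution on the weight vectors,
    # so W = E[w'w] is the identity matrix.
    ws = rng.normal(size=(20000, 5))

    x, xp = 0.3, -1.2
    d_phi = phi(x) - phi(xp)

    cdm_monte_carlo = np.mean((ws @ d_phi) ** 2)   # average of (f(x) - f(x'))^2 over f = w . phi
    cdm_feature_space = d_phi @ d_phi              # ||phi(x) - phi(x')||^2, since W = I
    print(cdm_monte_carlo, cdm_feature_space)      # agree up to Monte Carlo error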
3 1-NN CLASSIFICATION AND THE CDM
Suppose the environment F consists of classifiers, i.e. {0, 1}-valued functions. Let f be
some function in F and z := (x_1, f(x_1)), ..., (x_n, f(x_n)) a training set of examples of
f. In 1-NN classification the classification of a novel x is computed by f(x*), where
x* = argmin_{x_i} d(x, x_i), i.e. the classification of x is the classification of the nearest
training point to x under some distance measure d. If both f and x are chosen at random,
the expected misclassification error of the 1-NN scheme using d and the training points
x := (x_1, ..., x_n) is

    er(x, d) := E_F E_x [f(x) − f(x*)]²,                                    (3)

where x* is the nearest neighbour to x from {x_1, ..., x_n}. The following lemma is now
immediate from the definitions.
Lemma 1. For all sequences x = (x_1, ..., x_n), er(x, d) is minimized if d is the CDM ρ.
Remarks. Lemma 1 combined with the results of the last section shows that for function
classes possessing a common feature set, optimal 1-NN classification is achieved by using
squared Euclidean distance in feature space. In Section 5 some experimental results on
Japanese OCR are presented supporting this conclusion.
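For concreteness, a generic 1-NN rule parameterized by an arbitrary distance function d, so that the CDM, a learnt estimate of it, or squared Euclidean distance in feature space can be plugged in, might look like the following hypothetical sketch (not code from the paper):

    import numpy as np

    def one_nn_classify(x, train_inputs, train_labels, d):
        """Assign x the label of its nearest training point under the distance d."""
        distances = [d(x, xi) for xi in train_inputs]
        return train_labels[int(np.argmin(distances))]

    # d can be the CDM rho, a learnt estimate g, or a squared Euclidean distance
    # in feature space, e.g. d = lambda x, xp: float(np.sum((phi(x) - phi(xp)) ** 2))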
The property of optimality of the CDM for 1-NN classification may not be stable to small
perturbations. That is, if we learn an approximation g to ρ, then even if E_{x,x'}(g(x, x') −
ρ(x, x'))² is small it may not be the case that the error of 1-NN classification using g is also small.
However, one can show that stability is maintained for classifier environments in which
positive examples of different functions do not overlap significantly (as is the case for the
Japanese OCR environment of Section 5, face recognition environments, speech recognition environments and so on). We are currently investigating the general conditions under
which stability is maintained.
4 LEARNING THE CDM
For most environments encountered in practice (e.g. speech recognition or image recognition), P will be unknown. In this section it is shown how ρ may be estimated or learnt using
function approximation techniques (e.g. feedforward neural networks).
4.1 SAMPLING THE ENVIRONMENT
To learn the CDM ρ, the learner is provided with a class of functions G (e.g. neural networks),
where each g ∈ G maps X × X → [0, M]. The goal of the learner is to find a g such
that the error between g and the CDM ρ is small. For the sake of argument this error will
be measured by the expected squared loss:

    er_ρ(g) := E_{x,x'} [g(x, x') − ρ(x, x')]²,                              (4)

where the expectation is with respect to P².
Ordinarily the learner would be provided with training data in the form (x, x', ρ(x, x'))
and would use this data to minimize an empirical version of (4). However, ρ is unknown,
so to generate data of this form ρ must be estimated for each training pair x, x'. Hence to
generate training sets for learning the CDM, both the distribution Q over the environment
F and the distribution P over the input space X must be sampled. So let f := (f_1, ..., f_m)
be m i.i.d. samples from F according to Q and let x := (x_1, ..., x_n) be n i.i.d. samples
from X according to P. For any pair x_i, x_j an estimate of ρ(x_i, x_j) is given by

    ρ̂(x_i, x_j) := (1/m) Σ_{k=1}^m σ(f_k(x_i), f_k(x_j)).                    (5)

This gives n(n − 1)/2 training triples,

    {(x_i, x_j, ρ̂(x_i, x_j)), 1 ≤ i < j ≤ n},

which can be used as data to generate an empirical estimate of er_ρ(g):

    êr_{x,f}(g) := (2 / (n(n − 1))) Σ_{1≤i<j≤n} [g(x_i, x_j) − ρ̂(x_i, x_j)]².  (6)

Only n(n − 1)/2 of the possible n² training triples are used because the functions g ∈ G
are assumed to already be symmetric and to satisfy g(x, x) = 0 for all x (if this is not
the case then set g'(x, x') := (g(x, x') + g(x', x))/2 if x ≠ x' and g'(x, x) = 0, and use
G' := {g' : g ∈ G} instead).
In [3] an experiment was presented in which G was a neural network class and (6) was
minimized directly by gradient descent. In Section 5 we present an alternative technique
in which a set of features is first learnt for the environment and then an estimate of ρ in
feature space is constructed explicitly.
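A direct transcription of the sampling scheme of Section 4.1 into code might look as follows (an illustrative sketch: fs and xs stand for the sampled functions and inputs, g for a candidate distortion measure):

    import numpy as np

    def estimate_cdm(fs, xi, xj):
        """rho_hat(x_i, x_j): average of sigma(f_k(x_i), f_k(x_j)) over the sampled functions."""
        return np.mean([(f(xi) - f(xj)) ** 2 for f in fs])

    def empirical_error(g, fs, xs):
        """Empirical estimate (6) of er_rho(g) over the n(n-1)/2 training pairs."""
        n = len(xs)
        total = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                total += (g(xs[i], xs[j]) - estimate_cdm(fs, xs[i], xs[j])) ** 2
        return 2.0 * total / (n * (n - 1))

    # fs: list of m functions sampled from the environment according to Q
    # xs: list of n inputs sampled according to P
    # g:  candidate distortion measure, e.g. a small network taking the pair (x, x')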
4.2 UNIFORM CONVERGENCE
We wish to ensure good generalisation from a g minimizing êr_{x,f}(g), in the sense that (for
small ε, δ),

    Pr{ x, f : sup_{g∈G} |êr_{x,f}(g) − er_ρ(g)| > ε } < δ.
The following theorem shows that this occurs if both the number of functions m and the
number of input samples n are sufficiently large. Some exotic (but nonetheless benign)
measurability restrictions have been ignored in the statement of the theorem. In the statement
of the theorem, N(ε, G) denotes the smallest ε-cover of G under the L_1(P²) norm, where
{g_1, ..., g_N} is an ε-cover of G if for all g ∈ G there exists g_i such that ||g_i − g|| ≤ ε.
Theorem 2. Assume the range of the functions in the environment F is no more than
[−B/2, B/2] and the range of the functions in the class G (used to approximate the CDM)
is no more than [0, √B). For all ε > 0 and 0 < δ ≤ 1, if

    m > (32B⁴ / ε²) log(4/δ)                                                (7)

and

    n ≥ (512B² / ε²) [ log N(ε, G) + log(512B²/ε²) + log(8/δ) ],            (8)

then

    Pr{ x, f : sup_{g∈G} |êr_{x,f}(g) − er_ρ(g)| > ε } ≤ δ.                  (9)
Proof. For each g ∈ G, define

    er_x(g) := (2 / (n(n − 1))) Σ_{1≤i<j≤n} [g(x_i, x_j) − ρ(x_i, x_j)]².    (10)

If for any x = (x_1, ..., x_n),

    Pr{ f : sup_{g∈G} |êr_{x,f}(g) − er_x(g)| > ε/2 } ≤ δ/2,                 (11)

and

    Pr{ x : sup_{g∈G} |er_x(g) − er_ρ(g)| > ε/2 } ≤ δ/2,                     (12)

then by the triangle inequality (9) will hold. We treat (11) and (12) separately.
Equation (11). To simplify the notation let g_ij, ρ_ij and ρ̂_ij denote g(x_i, x_j), ρ(x_i, x_j) and
ρ̂(x_i, x_j) respectively. Now,

    êr_{x,f}(g) − er_x(g) = (2 / (n(n − 1))) Σ_{1≤i<j≤n} [(g_ij − ρ̂_ij)² − (g_ij − ρ_ij)²]
                          = (2 / (n(n − 1))) Σ_{1≤i<j≤n} (ρ_ij − ρ̂_ij)(2g_ij − ρ̂_ij − ρ_ij)
                          ≤ (4B / (n(n − 1))) Σ_{1≤i<j≤n} (ρ̂_ij − ρ_ij)
                          = (1/m) Σ_{k=1}^m χ(f_k) − E_Q χ(f),

where χ: F → [0, 4B²] is defined by

    χ(f) := (4B / (n(n − 1))) Σ_{1≤i<j≤n} σ(f(x_i), f(x_j)).

Thus,

    Pr{ f : sup_{g∈G} |êr_{x,f}(g) − er_x(g)| > ε/2 } ≤ Pr{ f : |E_Q χ(f) − (1/m) Σ_{k=1}^m χ(f_k)| > ε/2 },

which is ≤ 2 exp(−mε²/(32B⁴)) by Hoeffding's inequality. Setting this less than δ/2
gives the bound on m in Theorem 2.
Equation (12). Without loss of generality, suppose that n is even. The trick here is to split
the sum over all pairs (x_i, x_j) (with i < j) appearing in the definition of er_x(g) into a
double sum:

    er_x(g) = (2 / (n(n − 1))) Σ_{1≤i<j≤n} [g(x_i, x_j) − ρ(x_i, x_j)]²
            = (1/(n − 1)) Σ_{i=1}^{n−1} (2/n) Σ_{j=1}^{n/2} [g(x_{σ_i(j)}, x_{σ'_i(j)}) − ρ(x_{σ_i(j)}, x_{σ'_i(j)})]²,

where for each i = 1, ..., n − 1, σ_i and σ'_i are permutations on {1, ..., n} such that
{σ_i(1), ..., σ_i(n/2)} ∩ {σ'_i(1), ..., σ'_i(n/2)} is empty. That there exist permutations with
this property such that the sum can be broken up in this way can be proven easily by
induction. Now, conditional on each σ_i, the n/2 pairs X_i := {(x_{σ_i(j)}, x_{σ'_i(j)}), j = 1, ..., n/2}
are an i.i.d. sample from X × X according to P². So by standard results from real-valued
function learning with squared loss [4]:

    Pr{ X_i : sup_{g∈G} | (2/n) Σ_{j=1}^{n/2} [g(x_{σ_i(j)}, x_{σ'_i(j)}) − ρ(x_{σ_i(j)}, x_{σ'_i(j)})]² − er_ρ(g) | > ε/2 }
        ≤ 4 N(ε, G) exp(−nε²/(512B²)).

Hence, by the union bound,

    Pr{ x : sup_{g∈G} |er_x(g) − er_ρ(g)| > ε/2 } ≤ 4(n − 1) N(ε, G) exp(−nε²/(512B²)).

Setting n as in the statement of the theorem ensures this is less than δ/2.  □
Remark. The bound on m (the number of functions that need to be sampled from the
environment) is independent of the complexity of the class g. This should be contrasted
with related bias learning (or equivalently, learning to learn) results [1] in which the number
of functions does depend on the complexity. The heuristic explanation for this is that here
we are only learning a distance function on the input space (the CDM), whereas in bias
learning we are learning an entire hypothesis space that is appropriate for the environment.
However, we shall see in the next section how for certain classes of problems the CDM can
also be used to learn the functions in the environment. Hence in these cases learning the
CDM is a more effective method of learning to learn.
5 EXPERIMENT: JAPANESE OCR
To verify the optimality of the CDM for 1-NN classification, and also to show how
it can be learnt in a non-trivial domain (only a toy example was given in [3]), the
CDM was learnt for a Japanese OCR environment. Specifically, there were 3018 functions f in the environment F, each one a classifier for a different Kanji character. A
database containing 90,918 segmented, machine-printed Kanji characters scanned from
various sources was purchased from the CEDAR group at the State University of New
York, Buffalo. The quality of the images ranged from clean to very degraded (see
http://www.cedar.buffalo.edu/Databases/JOcR/).
The main reason for choosing Japanese OCR rather than English OCR as a test-bed was
the large number of distinct characters in Japanese. Recall from Theorem 2 that to get good
generalisation from a learnt CDM, sufficiently many functions must be sampled from the
environment. If the environment just consisted of English characters then it is likely that
"sufficiently many" characters would mean all characters, and so it would be impossible to
test the learnt CDM on novel characters not seen in training.
Instead of learning the CDM directly by minimizing (6), it was learnt implicitly by first
learning a set of neural network features for the functions in the environment. The features
were learnt using the method outlined in [1], which essentially involves learning a set of
classifiers with a common final hidden layer. The features were learnt on 400 out of the
3000 classifiers in the environment, using 90% of the data in training and 10% in testing.
Each resulting classifier was a linear combination of the neural network features. The
average error of the classifiers was 2.85% on the test set (which is an accurate estimate as
there were 9092 test examples).
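The "common final hidden layer" construction can be sketched as below; the input and feature dimensions are placeholders rather than the values used in the experiment, and the joint training itself is only indicated in a comment.

    import numpy as np

    rng = np.random.default_rng(0)

    D_IN, D_FEAT, N_TASKS = 1024, 32, 400     # D_IN and D_FEAT are placeholders

    V = rng.normal(0.0, 0.01, (D_FEAT, D_IN))      # shared weights: input -> features
    U = rng.normal(0.0, 0.01, (N_TASKS, D_FEAT))   # one output vector per classifier

    def features(x):
        """The common final hidden layer, shared by all character classifiers."""
        return np.tanh(V @ x)

    def classifier_output(i, x):
        """Output of the classifier for character i: a linear combination of the features."""
        return U[i] @ features(x)

    # Training (omitted) adjusts V and U jointly on all 400 tasks by backpropagation,
    # which forces the hidden layer to carry features useful for every character.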
Recall from Section 2 that if all f ∈ F can be expressed as f = w · Φ for a fixed feature
set Φ, then the CDM reduces to ρ(x, x') = (Φ(x) − Φ(x'))W(Φ(x) − Φ(x'))', where
W = ∫_w w'w dQ(w). The result of the learning procedure above is a set of features
Φ̂ and 400 weight vectors w_1, ..., w_400, such that for each of the character classifiers f_i
used in training, f_i ≈ w_i · Φ̂. Thus, g(x, x') := (Φ̂(x) − Φ̂(x'))Ŵ(Φ̂(x) − Φ̂(x'))' is
an empirical estimate of the true CDM, where Ŵ := Σ_{i=1}^{400} w_i'w_i. With a linear change
of variable Φ̂ → Φ̂√Ŵ, g becomes g(x, x') = ||Φ̂(x) − Φ̂(x')||². This g was used to do
1-NN classification on the test examples in two different experiments.
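A small sketch of assembling Ŵ from the learnt weight vectors and turning it into the distance used for 1-NN classification (hypothetical helper names: features(x) is assumed to return Φ̂(x), and weight_vectors to hold one row per trained classifier):

    import numpy as np

    def feature_space_distance(weight_vectors, features):
        """Return g(x, x') = (phi(x) - phi(x'))' W_hat (phi(x) - phi(x')), computed as a
        squared Euclidean distance after the change of variables phi -> phi * sqrt(W_hat)."""
        W_hat = weight_vectors.T @ weight_vectors            # sum over classifiers of w'w
        evals, evecs = np.linalg.eigh(W_hat)                 # W_hat is symmetric PSD
        A = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
        def g(x, xp):
            d = A @ (features(x) - features(xp))
            return float(d @ d)
        return g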
In the first experiment, all testing and training examples that were not an example of one
of the 400 training characters were lumped into an extra category for the purpose of classification. All test examples were then given the label of their nearest neighbour in the
training set under g (i.e., initially all training examples were mapped into feature space
to give {Φ̂(x_1), ..., Φ̂(x_n)}. Then each test example was mapped into feature space and
assigned the same label as argmin_{x_i} ||Φ̂(x) − Φ̂(x_i)||²). The total misclassification error was
2.2%, which can be directly compared with the misclassification error of the original classifiers of 2.85%. The CDM does better because it uses the training data explicitly and the
information stored in the network to make a comparison, whereas the classifiers only use
the information in the network. The learnt CDM was also used to do k-NN classification
with k > 1. However this afforded no improvement. For example, the error of the 3-NN
classifier was 2.54% and the error of the 20-NN classifier was 3.99%. This provides an
indication that the CDM may not be the optimal distortion measure to use if k-NN classification (k > 1) is the aim.
In the second experiment g was again used to do 1-NN classification on the test set, but
this time all 3018 characters were distinguished. So in this case the learnt CDM was being
asked to distinguish between 2618 characters that were treated as a single character when
it was being trained. The misclassification error was a surprisingly low 7.5%. The 7.5%
error compares favourably with the 4.8% error achieved on the same data by the CEDAR
group, using a carefully selected feature set and a hand-tailored nearest-neighbour routine
[5]. In our case the distance measure was learnt from raw-data input, and has not been the
subject of any optimization or tweaking.
Figure 1: Six Kanji characters (first character in each row) and examples of their four
nearest neighbours (remaining four characters in each row).
As a final, more qualitative assessment, the learnt CDM was used to compute the distance between every pair of testing examples, and then the distance between each pair of
characters (an individual character being represented by a number of testing examples)
was computed by averaging the distances between their constituent examples. The nearest neighbours of each character were then calculated. With this measure, every character
turned out to be its own nearest neighbour, and in many cases the next-nearest neighbours
bore a strong subjective similarity to the original. Some representative examples are shown
in Figure 1.
6 CONCLUSION
We have shown how the Canonical Distortion Measure (CDM) is the optimal distortion
measure for 1-NN classification, and that for environments in which all the functions can
be expressed as a linear combination of a fixed set of features, the Canonical Distortion
Measure is squared Euclidean distance in feature space. A technique for learning the CDM
was presented and PAC-like bounds on the sample complexity required for good generalisation were proved.
Experimental results were presented in which the CDM for a Japanese OCR environment
was learnt by first learning a common set of features for a subset of the character classifiers
in the environment. The learnt CDM was then used as a distance measure in 1-NN classification, and performed remarkably well, both on the characters used to train it
and on entirely novel characters.
References
[1] Jonathan Baxter. Learning Internal Representations. In Proceedings of the Eighth
International Conference on Computational Learning Theory, pages 311-320. ACM
Press, 1995.
[2] Jonathan Baxter. The Canonical Metric for Vector Quantisation. Technical Report
NeuroColt Technical Report 047, Royal Holloway College, University of London, July
1995.
[3] Jonathan Baxter. The Canonical Distortion Measure for Vector Quantization and Function Approximation. In Proceedings of the Fourteenth International Conference on
Machine Learning, July 1997. To Appear.
[4] W S Lee, P L Bartlett, and R C Williamson. Efficient agnostic learning of neural
networks with bounded fan-in. IEEE Transactions on Information Theory, 1997.
[5] S.N. Srihari, T. Hong, and Z. Shi. Cherry Blossom: A System for Reading Unconstrained Handwritten Page Images. In Symposium on Document Image Understanding
Technology (SDIUT), 1997.
Monotonic Networks
Joseph Sill
Computation and Neural Systems program
California Institute of Technology
MC 136-93, Pasadena, CA 91125
email: joe@cs.caltech.edu
Abstract
Monotonicity is a constraint which arises in many application domains. We present a machine learning model, the monotonic network, for which monotonicity can be enforced exactly, i.e., by virtue
of functional form. A straightforward method for implementing and
training a monotonic network is described. Monotonic networks
are proven to be universal approximators of continuous, differentiable monotonic functions. We apply monotonic networks to a
real-world task in corporate bond rating prediction and compare
them to other approaches.
1 Introduction
Several recent papers in machine learning have emphasized the importance of priors and domain-specific knowledge. In their well-known presentation of the bias-variance tradeoff (Geman and Bienenstock, 1992), Geman and Bienenstock conclude
by arguing that the crucial issue in learning is the determination of the "right biases" which constrain the model in the appropriate way given the task at hand .
The No-Free-Lunch theorem of Wolpert (Wolpert, 1996) shows, under the 0-1 error
measure, that if all target functions are equally likely a priori, then all possible
learning methods do equally well in terms of average performance over all targets .
One is led to the conclusion that consistently good performance is possible only
with some agreement between the modeler's biases and the true (non-flat) prior.
Finally, the work of Abu-Mostafa on learning from hints (Abu-Mostafa, 1990) has
shown both theoretically (Abu-Mostafa, 1993) and experimentally (Abu-Mostafa,
1995) that the use of prior knowledge can be highly beneficial to learning systems.
One piece of prior information that arises in many applications is the monotonicity
constraint, which asserts that an increase in a particular input cannot result in a
decrease in the output. A method was presented in (Sill and Abu-Mostafa, 1996)
which enforces monotonicity approximately by adding a second term measuring
"monotonicity error" to the usual error measure. This technique was shown to
yield improved error rates on real-world applications. Unfortunately, the method
can be quite expensive computationally. It would be useful to have a model which
obeys monotonicity exactly, i.e., by virtue of functional form .
We present here such a model, which we will refer to as a monotonic network.
A monotonic network implements a piecewise-linear surface by taking maximum
and minimum operations on groups of hyperplanes. Monotonicity constraints are
enforced by constraining the signs of the hyperplane weights. Monotonic networks
can be trained using the usual gradient-based optimization methods typically used
with other models such as feedforward neural networks. Armstrong (Armstrong et.
al. 1996) has developed a model called the adaptive logic network which is capable
of enforcing monotonicity and appears to have some similarities to the approach
presented here. The adaptive logic network, however, is available only through a
commercial software package. The training algorithms are proprietary and have
not been fully disclosed in academic journals. The monotonic network therefore
represents (to the best of our knowledge) the first model to be presented in an
academic setting which has the ability to enforce monotonicity.
Section II describes the architecture and training procedure for monotonic networks.
Section III presents a proof that monotonic networks can uniformly approximate
any continuous monotonic function with bounded partial derivatives to an arbitrary
level of accuracy. Monotonic networks are applied to a real-world problem in bond
rating prediction in Section IV. In Section V, we discuss the results and consider
future directions.
2 Architecture and Training Procedure
A monotonic network has a feedforward, three-layer (two hidden-layer) architecture
(Fig. 1). The first layer of units compute different linear combinations of the input
vector. If increasing monotonicity is desired for a particular input, then all the
weights connected to that input are constrained to be positive. Similarly, weights
connected to an input where decreasing monotonicity is required are constrained to
be negative. The first layer units are partitioned into several groups (the number
of units in each group is not necessarily the same). Corresponding to each group is
a second layer unit, which computes the maximum over all first-layer units within
the group. The final output unit computes the minimum over all groups.
More formally, if we have K groups with outputs g_1, g_2, ..., g_K, and if group k consists of h_k hyperplanes w^(k,1), w^(k,2), ..., w^(k,h_k), then

g_k(x) = max_j ( w^(k,j) · x − t^(k,j) ),   1 ≤ j ≤ h_k.

Let y be the final output of the network. Then

y = min_k g_k(x),

or, for classification problems,

y = σ( min_k g_k(x) ),

where σ(u) = 1 / (1 + e^(−u)).
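As a concrete illustration of this max-min computation, the following is a minimal sketch (our own illustration, not the author's code) that evaluates such a network in NumPy; the group sizes and random parameters are hypothetical, and the exponentiated weights enforce increasing monotonicity in all inputs as described below.

    import numpy as np

    def monotonic_net_forward(x, groups):
        """x: (n_inputs,) array; groups: list of (W, t) pairs, one per group,
        where W has shape (h_k, n_inputs) and t has shape (h_k,).
        Returns the min over groups of the max over planes of W @ x - t."""
        group_outputs = [np.max(W @ x - t) for (W, t) in groups]
        return min(group_outputs)

    # Hypothetical example: 2 groups of 3 planes on 3 increasing-monotone inputs.
    rng = np.random.default_rng(0)
    groups = [(np.exp(rng.normal(size=(3, 3))), rng.normal(size=3)) for _ in range(2)]
    y = monotonic_net_forward(np.array([0.2, 0.5, 0.1]), groups)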
Figure 1: This monotonic network obeys increasing monotonicity in all 3 inputs
because all weights in the first layer are constrained to be positive.
In the discussions which follow, it will be useful to define the term active. We will
call a group l active at x if g_l(x) = min_k g_k(x), i.e., if the group determines the output of the network at that point. Similarly, we
will say that a hyperplane is active at x if its group is active at x and the hyperplane
is the maximum over all hyperplanes in the group.
As will be shown in the following section, the three-layer architecture allows a monotonic network to approximate any continuous, differentiable monotonic function
arbitrarily well, given sufficiently many groups and sufficiently many hyperplanes
within each group. The maximum operation within each group allows the network
to approximate convex (positive second derivative) surfaces, while the minimum operation over groups enables the network to implement the concave (negative second
derivative) areas of the target function (Figure 2).
[Figure 2 sketch: input on the horizontal axis, network output on the vertical axis; successive regions marked "group 1 active", "group 2 active", "group 3 active".]
Figure 2: This surface is implemented by a monotonic network consisting of three
groups. The first and third groups consist of three hyperplanes, while the second
group has only two.
Monotonic networks can be trained using many of the standard gradient-based
optimization techniques commonly used in machine learning. The gradient for
each hyperplane is found by computing the error over all examples for which the
hyperplane is active. After the parameter update is made according to the rule of
the optimization technique, each training example is reassigned to the hyperplane
that is now active at that point. The set of examples for which a hyperplane is
active can therefore change during the course of training.
The constraints on the signs of the weights are enforced using an exponential
transformation. If increasing monotonicity is desired in input variable i, then
for all j, k the weights corresponding to that input are represented as w_i^(j,k) = e^(z_i^(j,k)). The optimization algorithm can modify z_i^(j,k) freely during training while maintaining the constraint. If decreasing monotonicity is required, then for all j, k we take

w_i^(j,k) = −e^(z_i^(j,k)).
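A rough sketch of one training pass under these constraints follows; it is our own illustration (not the paper's code), using per-example squared-error gradient steps for simplicity where the paper uses batch updates. Each example's gradient flows only into the hyperplane that is active for it, and positivity is maintained by optimizing z with w = exp(z).

    import numpy as np

    def train_step(X, y, Z, T, lr=0.01):
        """X: (m, d) inputs, y: (m,) targets.
        Z: list over groups of (h_k, d) unconstrained params, with w = exp(Z) > 0.
        T: list over groups of (h_k,) thresholds. Squared-error gradient step."""
        for x, target in zip(X, y):
            # Find the active group (the min) and, inside it, the active plane (the max).
            outs = [np.exp(Zk) @ x - Tk for Zk, Tk in zip(Z, T)]
            k = int(np.argmin([o.max() for o in outs]))
            j = int(np.argmax(outs[k]))
            err = outs[k][j] - target
            # d pred / d Z[k][j] = exp(Z[k][j]) * x  (chain rule through w = exp(z)).
            Z[k][j] -= lr * err * np.exp(Z[k][j]) * x
            T[k][j] -= lr * err * (-1.0)
        return Z, T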
3 Universal Approximation Capability
In this section, we demonstrate that monotonic networks have the capacity to approximate uniformly, to an arbitrary degree of accuracy, any continuous, bounded, differentiable function on the unit hypercube [0,1]^D which is monotonic in all variables and has bounded partial derivatives. We will say that x' dominates x if x'_d ≥ x_d for all 1 ≤ d ≤ D. A function m is monotonic in all variables if it satisfies the constraint that for all x, x', if x' dominates x then m(x') ≥ m(x).
Theorem 3.1 Let m(x) be any continuous, bounded monotonic function with bounded partial derivatives, mapping [0,1]^D to R. Then there exists a function m_net(x) which can be implemented by a monotonic network and is such that, for any ε > 0 and any x ∈ [0,1]^D, |m(x) − m_net(x)| < ε.
Proof:
Let b be the maximum value and a be the minimum value which m takes on [0,1]^D. Let α bound the magnitude of all partial first derivatives of m on [0,1]^D. Define an equispaced grid of points on [0,1]^D, where δ = 1/n is the spacing between grid points along each dimension. I.e., the grid is the set S of points (i_1 δ, i_2 δ, ..., i_D δ) where 1 ≤ i_1 ≤ n, 1 ≤ i_2 ≤ n, ..., 1 ≤ i_D ≤ n. Corresponding to each grid point x' = (x'_1, x'_2, ..., x'_D), assign a group consisting of D+1 hyperplanes. One hyperplane in the group is the constant output plane y = m(x'). In addition, for each dimension d, place a hyperplane y = γ(x_d − x'_d) + m(x'), where γ > (b − a)/δ. This construction ensures that the group associated with x' cannot be active at any point x* where there exists a d such that x*_d − x'_d ≥ δ, since the group's output at such a point must be greater than b and hence greater than the output of a group associated with another grid point.
Now consider any point x ∈ [0,1]^D. Let s^(1) be the unique grid point in S such that 0 ≤ x_d − s^(1)_d < δ for all d, i.e., s^(1) is the closest grid point to x which x dominates. Then we can show that m_net(x) ≥ m(s^(1)). Consider an arbitrary grid point s' ≠ s^(1). By the monotonicity of m, if s' dominates s^(1), then m(s') ≥ m(s^(1)), and hence, the group associated with s' has a constant output hyperplane y = m(s') ≥ m(s^(1)) and therefore outputs a value ≥ m(s^(1)) at x. If s' does not dominate s^(1), then there exists a d such that s^(1)_d > s'_d. Therefore, x_d − s'_d ≥ δ, meaning that the output of the group associated with s' is at least b ≥ m(s^(1)). All groups have output at least as large as m(s^(1)), so we have indeed shown that m_net(x) ≥ m(s^(1)).
Now consider the grid point s^(2) that is obtained by adding δ to each coordinate of s^(1). The group associated with s^(2) outputs m(s^(2)) at x, so m_net(x) ≤ m(s^(2)). Therefore, we have m(s^(1)) ≤ m_net(x) ≤ m(s^(2)). Since x dominates s^(1) and
is dominated by s^(2), by monotonicity we also have m(s^(1)) ≤ m(x) ≤ m(s^(2)). |m(x) − m_net(x)| is therefore bounded by |m(s^(2)) − m(s^(1))|. By Taylor's theorem for multivariate functions, we know that

m(s^(2)) − m(s^(1)) = ∇m(c) · (s^(2) − s^(1))

for some point c on the line segment between s^(1) and s^(2). Given the assumptions made at the outset, |m(s^(2)) − m(s^(1))|, and hence |m(x) − m_net(x)|, can be bounded by Dδα. We take δ < ε/(Dα) to complete the proof.
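To make the construction concrete, here is a small numerical check (our own sketch, not part of the paper) that builds the grid of (D+1)-plane groups for a particular monotone test function and verifies the max-min network stays within the stated bound. The test function and all parameter values are illustrative, and the grid here also includes 0 so that every point of the cube dominates some grid point.

    import numpy as np
    import itertools

    def m(x):                            # an arbitrary smooth increasing test function on [0,1]^2
        return 0.5 * (x[0] + x[1] ** 2)

    D, alpha, a, b = 2, 1.0, 0.0, 1.0    # dimension, derivative bound, min and max of m
    eps = 0.2
    delta = 0.05                         # delta < eps / (D * alpha)
    gamma = (b - a) / delta + 1.0        # gamma > (b - a) / delta

    grid = np.arange(0.0, 1.0 + 1e-9, delta)
    groups = [(np.array(s), m(np.array(s))) for s in itertools.product(grid, repeat=D)]

    def net(x):
        vals = []
        for s, ms in groups:             # each group: constant plane + D sloped planes
            planes = [ms] + [gamma * (x[d] - s[d]) + ms for d in range(D)]
            vals.append(max(planes))
        return min(vals)

    xs = np.random.default_rng(1).random((500, D))
    err = max(abs(m(x) - net(x)) for x in xs)
    assert err <= D * delta * alpha + 1e-9   # bound from the proof: D * delta * alpha < eps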
4 Experimental Results
We tested monotonic networks on a real-world problem concerning the prediction
of corporate bond ratings. Rating agencies such as Standard & Poors (S & P) issue
bond ratings intended to assess the level of risk of default associated with the bond.
S & P ratings can range from AAA down to B- or lower.
A model which accurately predicts the S & P rating of a bond given publicly available financial information about the issuer has considerable value. Rating agencies
do not rate all bonds, so an investor could use the model to assess the risk associated
with a bond which S & P has not rated. The model can also be used to anticipate
rating changes before they are announced by the agency.
The dataset, which was donated by a Wall Street firm, is made up of 196 examples.
Each training example consists of 10 financial ratios reflecting the fundamental
characteristics of the issuing firm, along with an associated rating. The meaning of
the financial ratios was not disclosed by the firm for proprietary reasons. The rating
labels were converted into integers ranging from 1 to 16. The task was treated as a
single-output regression problem rather than a 16-class classification problem.
Monotonicity constraints suggest themselves naturally in this context. Although
the meanings of the features are not revealed, it is reasonable to assume that they
consist of quantities such as profitability, debt, etc. It seems intuitive that, for
instance, the higher the profitability of the firm is , the stronger the firm is, and
hence, the higher the bond rating should be. Monotonicity was therefore enforced
in all input variables.
Three different types of models (all trained on squared error) were compared: a
linear model, standard two-layer feedforward sigmoidal neural networks, and monotonic networks. The 196 examples were split into 150 training examples and 46
test examples. In order to get a statistically significant evaluation of performance,
a leave-k-out procedure was implemented in which the 196 examples were split 200
different ways and each model was trained on the training set and tested on the
test set for each split. The results shown are averages over the 200 splits.
Two different approaches were used with the standard neural networks. In both
cases, the networks were trained for 2000 batch-mode iterations of gradient descent
with momentum and an adaptive learning rate, which sufficed to allow the networks
to approach minima of the training error. The first method used all 150 examples
for direct training and minimized the training error as much as possible. The
second technique split the 150 examples into 110 for direct training and 40 used for
validation, i.e., to determine when to stop training. Specifically, the mean-squarederror on the 40 examples was monitored over the course of the 2000 iterations,
and the state of the network at the iteration where lowest validation error was
obtained was taken as the final network to be tested on the test set. In both
cases, the networks were initialized with small random weights. The networks had
direct input-output connections in addition to hidden units in order to facilitate the
implementation of the linear aspects of the target function.
The monotonic networks were trained for 1000 batch-mode iterations of gradient
descent with momentum and an adaptive learning rate. The parameters of each
hyperplane in the network were initialized to be the parameters of the linear model
obtained from the training set, plus a small random perturbation. This procedure
ensured that the network was able to find a reasonably good fit to the data. Since
the meanings of the features were not known, it was not known a priori whether
increasing or decreasing mono tonicity should hold for each feature . The directions
of monotonicity were determined by observing the signs of the weights of the linear
model obtained from the training data.
Model         training error   test error
Linear        3.45 ± .02       4.09 ± .06
10-2-1 net    1.83 ± .01       4.22 ± .14
10-4-1 net    1.22 ± .01       4.86 ± .16
10-6-1 net    0.87 ± .01       5.57 ± .20
10-8-1 net    0.65 ± .01       5.56 ± .16
Table 1: Performance of linear model and standard networks on bond rating problem
The results support the hypothesis of a monotonic (or at least roughly monotonic)
target function. As Table 1 shows, standard neural networks have sufficient flexibility to fit the training data quite accurately (n-k-l network means a 2-layer
network with n inputs, k hidden units, and 1 output). However, their excessive,
non-monotonic degrees of freedom lead to overfitting, and their out-of-sample performance is even worse than that of a linear model. The use of early stopping
alleviates the overfitting and enables the networks to outperform the linear model.
Without the monotonicity constraint, however, standard neural networks still do
not perform as well as the monotonic networks. The results seem to be quite robust
with respect to the choice of number of hidden units for the standard networks and
number and size of groups for the monotonic networks.
Model         training error   test error
10-2-1 net    2.46 ± .04       3.83 ± .09
10-4-1 net    2.19 ± .05       3.82 ± .08
10-6-1 net    2.14 ± .05       3.77 ± .07
10-8-1 net    2.13 ± .06       3.86 ± .09
Table 2: Performance of standard networks using early stopping on bond rating
problem
5 Conclusion
We presented a model, the monotonic network, in which monotonicity constraints
can be enforced exactly, without adding a second term to the usual objective function. A straightforward method for implementing and training such models was
Model                           training error   test error
2 groups, 2 planes per group    2.78 ± .05       3.71 ± .07
3 groups, 3 planes per group    2.64 ± .04       3.56 ± .06
4 groups, 4 planes per group    2.50 ± .04       3.48 ± .06
5 groups, 5 planes per group    2.44 ± .03       3.43 ± .06
Table 3: Performance of monotonic networks on bond rating problem
demonstrated, and the method was shown to outperform other methods on a real-world problem.
Several areas of research regarding monotonic networks need to be addressed in
the future. One issue concerns the choice of the number of groups and number of
planes in each group. In general, the usual bias-variance tradeoff that holds for
other models will apply here, and the optimal number of groups and planes will be
quite difficult to determine a priori. There may be instances where additional prior
information regarding the convexity or concavity of the target function can guide
the decision, however. Another interesting observation is that a monotonic network
could also be implemented by reversing the maximum and minimum operations,
i.e., by taking the maximum over groups where each group outputs the minimum
over all of its hyperplanes. It will be worthwhile to try to understand when one
approach or the other is most appropriate.
Acknowledgments
The author is very grateful to Yaser Abu-Mostafa for considerable guidance. I also
thank John Moody for supplying the data. Amir Atiya, Eric Bax, Zehra Cataltepe,
Malik Magdon-Ismail, Alexander Nicholson, and Xubo Song supplied many useful
comments.
References
[1] S. Geman and E. Bienenstock (1992). Neural Networks and the Bias-Variance
Dilemma. Neural Computation 4, pp 1-58.
[2] D. Wolpert (1996). The Lack of A Priori Distinctions Between Learning Algorithms. Neural Computation 8, pp 1341-1390.
[3] Y. Abu-Mostafa (1990). Learning from Hints in Neural Networks Journal of
Complexity 6, 192-198.
[4] Y. Abu-Mostafa (1993) Hints and the VC Dimension Neural Computation 4,
278-288
[5] Y. Abu-Mostafa (1995) Financial Market Applications of Learning from Hints
Neural Networks in the Capital Markets, A. Refenes, ed., 221-232. Wiley, London,
UK.
[6] J. Sill and Y. Abu-Mostafa (1996) Monotonicity Hints. To appear in Advances
in Neural Information Processing Systems 9.
[7] W.W. Armstrong, C. Chu, M. M. Thomas (1996) Feasibility of using Adaptive
Logic Networks to Predict Compressor Unit Failure Applications of Neural Networks
in Environment, Energy, and Health, Chapter 12. P. Keller, S. Hashem, L. Kangas,
R. Kouzes, eds, World Scientific Publishing Company, Ltd., London.
On Parallel Versus Serial Processing:
A Computational Study of Visual Search
Eyal Cohen
Department of Psychology
Tel-Aviv University Tel Aviv 69978, Israel
eyalc@devil.tau.ac.il
Eytan Ruppin
Departments of Computer Science & Physiology
Tel-Aviv University Tel Aviv 69978, Israel
ruppin@math.tau.ac.il
Abstract
A novel neural network model of pre-attentive processing in visual-search tasks is presented. Using displays of line orientations taken
from Wolfe's experiments [1992], we study the hypothesis that the
distinction between parallel versus serial processes arises from the
availability of global information in the internal representations of
the visual scene. The model operates in two phases. First, the
visual displays are compressed via principal-component-analysis.
Second, the compressed data is processed by a target detector module in order to identify the existence of a target in the display. Our
main finding is that targets in displays which were found experimentally to be processed in parallel can be detected by the system, while targets in experimentally-serial displays cannot . This
fundamental difference is explained via variance analysis of the
compressed representations, providing a numerical criterion distinguishing parallel from serial displays. Our model yields a mapping
of response-time slopes that is similar to Duncan and Humphreys's
"search surface" [1989], providing an explicit formulation of their
intuitive notion of feature similarity. It presents a neural realization of the processing that may underlie the classical metaphorical
explanations of visual search.
1 Introduction
This paper presents a neural-model of pre-attentive visual processing. The model
explains why certain displays can be processed very fast, "in parallel" , while others
require slower, "serial" processing, in subsequent attentional systems. Our approach
stems from the observation that the visual environment is overflowing with diverse
information, but the biological information-processing systems analyzing it have
a limited capacity [1]. This apparent mismatch suggests that data compression
should be performed at an early stage of perception, and that via an accompanying process of dimension reduction, only a few essential features of the visual
display should be retained. We propose that only parallel displays incorporate
global features that enable fast target detection, and hence they can be processed
pre-attentively, with all items (target and dis tractors) examined at once. On the
other hand, in serial displays' representations, global information is obscure and
target detection requires a serial, attentional scan of local features across the display. Using principal-component-analysis (PCA), our main goal is to demonstrate
that neural systems employing compressed, dimensionally reduced representations
of the visual information can successfully process only parallel displays and not serial ones. The sourCe of this difference will be explained via variance analysis of the
displays' projections on the principal axes.
The modeling of visual attention in cognitive psychology involves the use of
metaphors, e.g., Posner's beam of attention [2]. A visual attention system of a
surviving organism must supply fast answers to burning issues such as detecting
a target in the visual field and characterizing its primary features. An attentional
system employing a constant-speed beam of attention [3] probably cannot perform
such tasks fast enough and a pre-attentive system is required. Treisman's feature
integration theory (FIT) describes such a system [4]. According to FIT, features
of separate dimensions (shape, color, orientation) are first coded pre-attentively in
a locations map and in separate feature maps, each map representing the values of
a particular dimension. Then, in the second stage, attention "glues" the features
together conjoining them into objects at their specified locations. This hypothesis
was supported using the visual-search paradigm [4], in which subjects are asked
to detect a target within an array of distractors, which differ on given physical dimensions such as color, shape or orientation. As long as the target is significantly
different from the distractors in one dimension, the reaction time (RT) is short and
shows almost no dependence on the number of distractors (low RT slope). This
result suggests that in this case the target is detected pre-attentively, in parallel.
However, if the target and distractors are similar, or the target specifications are
more complex, reaction time grows considerably as a function of the number of
distractors [5, 6], suggesting that the displays' items are scanned serially using an
attentional process.
FIT and other related cognitive models of visual search are formulated on the conceptual level and do not offer a detailed description of the processes involved in
transforming the visual scene from an ordered set of data points into given values
in specified feature maps. This paper presents a novel computational explanation
of the source of the distinction between parallel and serial processing, progressing
from general metaphorical terms to a neural network realization. Interestingly, we
also come out with a computational interpretation of some of these metaphorical
terms, such as feature similarity.
2 The Model
We focus our study on visual-search experiments of line orientations performed by
Wolfe et. al. [7], using three set-sizes composed of 4, 8 and 12 items. The number of
items equals the number of dis tractors + target in target displays, and in non-target
displays the target was replaced by another distractor, keeping a constant set-size.
Five experimental conditions were simulated: (A) - a 20 degrees tilted target among
vertical distractors (homogeneous background). (B) - a vertical target among 20
degrees tilted distractors (homogeneous background). (C) - a vertical target among
heterogeneous background ( a mixture of lines with ?20, ?40 , ?60 , ?80 degrees
orientations). (E) - a vertical target among two flanking distractor orientations (at
?20 degrees), and (G) - a vertical target among two flanking distractor orientations
(?40 degrees). The response times (RT) as a function of the set-size measured by
Wolfe et. al. [7] show that type A, Band G displays are scanned in a parallel
manner (1.2, 1.8,4.8 msec/item for the RT slopes), while type C and E displays are
scanned serially (19.7,17.5 msec/item). The input displays of our system were prepared following Wolfe's prescription: Nine images of the basic line orientations were
produced as nine matrices of gray-level values. Displays for the various conditions
of Wolfe's experiments were produced by randomly assigning these matrices into
a 4x4 array, yielding 128x100 display-matrices that were transformed into 12800
display-vectors. A total number of 2400 displays were produced in 30 groups (80
displays in each group): 5 conditions (A, B, C, E, G ) x target/non-target x 3
set-sizes (4,8, 12).
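A sketch of how such displays could be synthesized is given below; it is our own illustration of the prescription just described, not the authors' stimulus code. The 32x25-pixel tile size (which yields a 128x100 display) and the crude line rasterisation are hypothetical choices.

    import numpy as np

    TILE_H, TILE_W = 32, 25           # hypothetical tile size: 4 x 4 tiles -> 128 x 100 pixels
    ORIENTS = [0, 20, -20, 40, -40, 60, -60, 80, -80]   # the nine basic line orientations (deg)

    def line_tile(angle_deg):
        """Render a single oriented line segment as a gray-level tile (crude rasterisation)."""
        tile = np.zeros((TILE_H, TILE_W))
        theta = np.deg2rad(angle_deg)
        cy, cx = TILE_H // 2, TILE_W // 2
        for r in np.linspace(-10, 10, 200):          # line of length ~20 pixels
            i = int(round(cy + r * np.cos(theta)))
            j = int(round(cx + r * np.sin(theta)))
            if 0 <= i < TILE_H and 0 <= j < TILE_W:
                tile[i, j] = 1.0
        return tile

    def make_display(item_angles, set_size, rng):
        """Place set_size items at random cells of a 4 x 4 array and flatten to a vector."""
        cells = rng.choice(16, size=set_size, replace=False)
        display = np.zeros((4 * TILE_H, 4 * TILE_W))  # 128 x 100
        for cell, angle in zip(cells, item_angles):
            r, c = divmod(cell, 4)
            display[r*TILE_H:(r+1)*TILE_H, c*TILE_W:(c+1)*TILE_W] = line_tile(angle)
        return display.reshape(-1)                    # 12800-dimensional display-vector

    rng = np.random.default_rng(0)
    # Condition A, set size 8: one 20-degree target among vertical distractors.
    vec = make_display([20] + [0] * 7, set_size=8, rng=rng)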
Our model is composed of two neural network modules connected in sequence as
illustrated in Figure 1: a PCA module which compresses the visual data into a set
of principal axes, and a Target Detector (TD) module. The latter module uses the
compressed data obtained by the former module to detect a target within an array
of distractors. The system is presented with line-orientation displays as described
above.
[Figure 1 schematic: the display feeds an input layer (12800 units), followed by the PCA data-compression module, whose outputs feed the target-detector (TD) module with a 12-unit intermediate layer and a 1-unit output layer coding target = 1 / no-target = -1.]
Figure 1: General architecture of the model
For the PCA module we use the neural network proposed by Sanger, with the
connections' values updated in accordance with his Generalized Hebbian Algorithm
(GHA) [8]. The outputs of the trained system are the projections of the displayvectors along the first few principal axes, ordered with respect to their eigenvalue
magnitudes. Compressing the data is achieved by choosing outputs from the first
few neurons (maximal variance and minimal information loss). Target detection in
our system is performed by a feed-forward (FF) 3-layered network, trained via a
standard back-propagation algorithm in a supervised-learning manner. The input
layer of the FF network is composed of the first eight output neurons of the PCA
module. The transfer function used in the intermediate and output layers is the
hyperbolic tangent function.
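The following is a minimal sketch of Sanger's Generalized Hebbian Algorithm as we understand it from [8]; it is our own illustration, and the learning rate, number of components, and data shapes are illustrative assumptions rather than the values used in the paper.

    import numpy as np

    def gha_fit(X, n_components=8, lr=1e-3, epochs=20, seed=0):
        """Sanger's Generalized Hebbian Algorithm: the rows of W converge to the
        leading principal components of X, ordered by eigenvalue magnitude."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.01, size=(n_components, X.shape[1]))
        Xc = X - X.mean(axis=0)              # centering so GHA extracts covariance components
        for _ in range(epochs):
            for x in Xc:
                y = W @ x                    # outputs = projections on the current axes
                # Sanger update: Hebbian term minus lower-triangular decorrelation term.
                W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Usage sketch: project the 12800-dimensional display-vectors onto the learned axes.
    # W = gha_fit(displays); codes = (displays - displays.mean(0)) @ W.T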
3 Results
3.1 Target Detection
The performance of the system was examined in two simulation experiments. In
the first, the PCA module was trained only with "parallel" task displays, and in the
second, only with "serial" task displays. There is an inherent difference in the ability
of the model to detect targets in parallel versus serial displays . In parallel task
conditions (A, B, G) the target detector module learns the task after a comparatively
small number (800 to 2000) of epochs, reaching performance level of almost 100%.
However, the target detector module is not capable of learning to detect a target
in serial displays (C, E conditions). Interestingly, these results hold (1) whether
the preceding PCA module was trained to perform data compression using parallel
task displays or serial ones, (2) whether the target detector was a linear simple
perceptron, or the more powerful, non-linear network depicted in Figure 1, and (3)
whether the full set of 144 principal axes (with non-zero eigenvalues) was used.
3.2 Information Span
To analyze the differences between parallel and serial tasks we examined the eigenvalues obtained from the PCA of the training-set displays. The eigenvalues of
condition B (parallel) displays in 4 and 12 set-sizes and of condition C (serial-task)
displays are presented in Figure 2. Each training set contains a mixture of target
and non-target displays.
[Figure 2 plots: (a) parallel and (b) serial conditions; eigenvalue versus principal-axis number, shown for 4-item and 12-item displays.]
Figure 2: Eigenvalues spectrum of displays with different set-sizes, for parallel and
serial tasks. Due to the sparseness of the displays (a few black lines on white
background), it takes only 31 principal axes to describe the parallel training-set in
full (see fig 2a. Note that the remaining axes have zero eigenvalues, indicating that
they contain no additional information.), and 144 axes for the serial set (only the
first 50 axes are shown in fig 2b).
As evident, the eigenvalues distributions of the two display types are fundamentally
different: in the parallel task, most of the eigenvalues "mass" is concentrated in the
first few (15) principal axes, testifying that indeed, the dimension of the parallel
displays space is quite confined. But for the serial task, the eigenvalues are distributed almost uniformly over 144 axes. This inherent difference is independent of
set-size: 4 and 12-item displays have practically the same eigenvalue spectra.
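A direct way to reproduce this kind of spectrum comparison (our own sketch, independent of the GHA network) is an exact eigendecomposition of each training set's covariance; the 95% fraction-of-variance threshold below is an arbitrary illustrative choice.

    import numpy as np

    def eigen_spectrum(displays):
        """Eigenvalues of the display covariance, sorted in decreasing order."""
        Xc = displays - displays.mean(axis=0)
        # With n_displays << 12800, eigendecompose the small Gram matrix instead:
        # it has the same nonzero spectrum as the covariance matrix.
        gram = Xc @ Xc.T / len(Xc)
        return np.sort(np.linalg.eigvalsh(gram))[::-1]

    def effective_dim(eigvals, mass=0.95):
        """Number of leading axes needed to capture `mass` of the total variance."""
        frac = np.cumsum(eigvals) / eigvals.sum()
        return int(np.searchsorted(frac, mass) + 1)

    # effective_dim(eigen_spectrum(parallel_displays)) is expected to be far smaller
    # than effective_dim(eigen_spectrum(serial_displays)), mirroring Figure 2.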
3.3 Variance Analysis
The target detector inputs are the projections of the display-vectors along the first
few principal axes. Thus, some insight to the source of the difference between
parallel and serial tasks can be gained performing a variance analysis on these
projections. The five different task conditions were analyzed separately, taking a
group of 85 target displays and a group of 85 non-target displays for each set-size.
Two types of variances were calculated for the projections on the 5th principal axis:
The "within groups" variance, which is a measure of the statistical noise within
each group of 85 displays, and the "between groups" variance, which measures the
separation between target and non-target groups of displays for each set-size. These
variances were averaged for each task (condition), over all set-sizes. The resulting
ratios Q of within-groups to between-groups standard deviations are: QA = 0.0259,
QB = 0.0587, and QG = 0.0114 for parallel displays (A, B, G), and QE = 0.2125,
QC = 0.771 for serial ones (E, C).
As evident, for parallel task displays the Q values are smaller by an order of magnitude compared with the serial displays, indicating a better separation between
target and non-target displays in parallel tasks. Moreover, using Q as a criterion
for parallel/serial distinction one can predict that displays with Q << 1 will be
processed in parallel, and serially otherwise, in accordance with the experimental
response time (RT) slopes measured by Wolfe et al. [7]. These differences are further
demonstrated in Figure 3, depicting projections of display-vectors on the sub-space
spanned by the 5, 6 and 7th principal axes. Clearly, for the parallel task (condition
B), the PCA representations of the target-displays (plus signs) are separated from
non-target representations (circles), while for serial displays (condition C) there is
no such separation. It should be emphasized that there is no other principal axis
along which such a separation is manifested for serial displays.
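The Q criterion itself is easy to compute from the compressed representations. The sketch below is our reading of the definition above (within-group standard deviation averaged over the target and non-target groups, divided by the standard deviation of the two group means, averaged over set sizes); the exact averaging is an assumption on our part.

    import numpy as np

    def q_ratio(proj_target, proj_nontarget):
        """proj_*: dict mapping set-size -> 1-D array of projections on a chosen
        principal axis (e.g. the 5th), one value per display in that group."""
        qs = []
        for size in proj_target:
            t, n = proj_target[size], proj_nontarget[size]
            within = np.sqrt((t.var() + n.var()) / 2.0)     # noise inside each group
            between = np.std([t.mean(), n.mean()])          # target / non-target separation
            qs.append(within / between)
        return float(np.mean(qs))

    # Q << 1 predicts parallel ("pop-out") search; Q near or above 1 predicts serial search.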
1"1
110"
-11106
-1
...
un
o
o
_1157
.11615
o
+
o
::::~
+ ++
.
. l
-11M
? 0
~
: . , Hill
.+
+0
o
-'.1'
-11025
-'181
_1163
'.II
o
..
_1182
'10
,
.,0'
0
o
"
'07
,.II
? 10~
Til
7.&12
"'AXIS
'.7
1.1186
INIIS
11166
71hAXIS
,.
18846
,.
, ow
. ..
,
'~AXIS
.
o
0-
o
1114
1113
1.1e2
,.,
,.
no AXIS
1.1'11
'.71
1 iTT
.,.
1.178
1.175
1 114
Figure 3: Projections of display-vectors on the sub-space spanned by the 5, 6 and
7th principal axes. Plus signs and circles denote target and non-target displayvectors respectively, (a) for a parallel task (condition B), and (b) for a serial task
(condition C). Set-size is 8 items.
While Treisman and her co-workers view the distinction between parallel and serial tasks as a fundamental one, Duncan and Humphreys [5] claim that there is
no sharp distinction between them, and that search efficiency varies continuously
across tasks and conditions. The determining factors according to Duncan and
Humphreys are the similarities between the target and the non-targets (T-N similarities) and the similarities between the non-targets themselves (N-N similarity).
Displays with homogeneous background (high N-N similarity) and a target which is
significantly different from the distractors (low T-N similarity) will exhibit parallel,
low RT slopes, and vice versa. This claim was illustrated by them using a qualitative
"search surface" description as shown in figure 4a. Based on results from our variance analysis, we can now examine this claim quantitatively: We have constructed
a "search surface", using actual numerical data of RT slopes from Wolfe's experiments, replacing the N-N similarity axis by its mathematical manifestation, the
within-groups standard deviation, and N-T similarity by between-groups standard
deviation 1. The resulting surface (Figure 4b) is qualitatively similar to Duncan and
Humphreys's. This interesting result testifies that the PCA representation succeeds
in producing a viable realization of such intuitive terms as inputs similarity, and is
compatible with the way we perceive the world in visual search tasks.
Figure 4: RT rates versus: (a) Input similarities (the search surface, reprinted from
Duncan and Humphreys, 1989). (b) Standard deviations (within and between) of
the PCA variance analysis. The asterisks denote Wolfe's experimental data.
4 Summary
In this work we present a two-component neural network model of pre-attentional
visual processing. The model has been applied to the visual search paradigm performed by Wolfe et. al. Our main finding is that when global-feature compression
is applied to visual displays, there is an inherent difference between the representations of serial and parallel-task displays: The neural network studied in this paper
has succeeded in detecting a target among distractors only for displays that were
experimentally found to be processed in parallel. Based on the outcome of the
1 In general, each principal axis contains information from different features, which may
mask the information concerning the existence of a target. Hence, the first principal axis
may not be the best choice for a discrimination task. In our simulations, the 5th axis
for example, was primarily dedicated to target information, and was hence used for the
variance analysis (obviously, the neural network uses information from all the first eight
principal axes).
variance analysis performed on the PCA representations of the visual displays, we
present a quantitative criterion enabling one to distinguish between serial and parallel displays. Furthermore, the resulting 'search-surface' generated by the PCA
components is in close correspondence with the metaphorical description of Duncan
and Humphreys.
The network demonstrates an interesting generalization ability: Naturally, it can
learn to detect a target in parallel displays from examples of such displays. However,
it can also learn to perform this task from examples of serial displays only! On the
other hand, we find that it is impossible to learn serial tasks, irrespective of the
combination of parallel and serial displays that are presented to the network during
the training phase. This generalization ability is manifested not only during the
learning phase, but also during the performance phase; displays belonging to the
same task have a similar eigenvalue spectrum, irrespective of the actual set-size of
the displays, and this result holds true for parallel as well as for serial displays.
The role of PCA in perception was previously investigated by Cottrell [9], designing
a neural network which performed tasks as face identification and gender discrimination. One might argue that PCA, being a global component analysis is not
compatible with the existence of local feature detectors (e.g. orientation detectors)
in the cortex. Our work is in line with recent proposals [10] that there exist two
pathways for sensory input processing: A fast sub-cortical pathway that contains
limited information, and a slow cortical pathway which is capable of providing richer
representations of the stimuli. Given this assumption this paper has presented the
first neural realization of the processing that may underline the classical metaphorical explanations involved in visual search.
References
[1] J. K. Tsotsos. Analyzing vision at the complexity level. Behavioral and Brain
Sciences, 13:423-469, 1990.
[2] M. I. Posner, C. R. Snyder, and B. J. Davidson. Attention and the detection
of signals. Journal of Experimental Psychology: General, 109:160-174, 1980.
[3] Y. Tsal. Movement of attention across the visual field. Journal of Experimental
Psychology: Human Perception and Performance, 9:523-530, 1983.
[4] A. Treisman and G. Gelade. A feature integration theory of attention. Cognitive
Psychology, 12:97-136,1980.
[5] J. Duncan and G. Humphreys. Visual search and stimulus similarity. Psychological Review, 96:433-458, 1989.
[6] A. Treisman and S. Gormican. Feature analysis in early vision: Evidence from
search assymetries. Psychological Review, 95:15-48, 1988.
[7] J . M. Wolfe, S. R. Friedman-Hill, M. I. Stewart, and K. M. O'Connell. The
role of categorization in visual search for orientation. Journal of Experimental
Psychology: Human Perception and Performance, 18:34-49, 1992.
[8] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Network, 2:459-473, 1989.
[9] G. W. Cottrell. Extracting features from faces using compression networks:
Face, identity, emotion and gender recognition using holons. Proceedings of the
1990 Connectionist Models Summer School, pages 328-337, 1990.
[10] J. L. Armony, D. Servan-Schreiber, J . D. Cohen, and J. E. LeDoux. Computational modeling of emotion: exploration through the anatomy and physiology
of fear conditioning. Trends in Cognitive Sciences, 1(1):28-34, 1997.
Data-Dependent Structural Risk
Minimisation for Perceptron Decision
Trees
John Shawe-Taylor
Dept of Computer Science
Royal Holloway, University of London
Egham, Surrey TW20 OEX, UK
Email: jst@dcs.rhbnc.ac.uk
N ello Cristianini
Dept of Engineering Mathematics
University of Bristol
Bristol BS8 ITR, UK
Email: nello.cristianini@bristol.ac. uk
Abstract
Perceptron Decision Trees (also known as Linear Machine DTs,
etc.) are analysed in order that data-dependent Structural Risk
Minimisation can be applied. Data-dependent analysis is performed which indicates that choosing the maximal margin hyperplanes at the decision nodes will improve the generalization. The
analysis uses a novel technique to bound the generalization error in
terms of the margins at individual nodes. Experiments performed
on real data sets confirm the validity of the approach.
1 Introduction
Neural network researchers have traditionally tackled classification problems byassembling perceptron or sigmoid nodes into feedforward neural networks. In this
paper we consider a less common approach where the perceptrons are used as decision nodes in a decision tree structure. The approach has the advantage that more
efficient heuristic algorithms exist for these structures, while the advantages of inherent parallelism are if anything greater as all the perceptrons can be evaluated in
parallel, with the path through the tree determined in a very fast post-processing
phase.
Classical Decision Trees (DTs), like the ones produced by popular packages as CART [5] or C4.5 [9], partition the input space by means of axis-parallel hyperplanes
(one at each internal node), hence inducing categories which are represented by
(axis-parallel) hyperrectangles in such a space.
A natural extension of that hypothesis space is obtained by associating to each
internal node hyperplanes in general position, hence partitioning the input space
by means of polygonal (polyhedral) categories.
This approach has been pursued by many researchers, often with different motivations, and hence the resulting hypothesis space has been given a number of different
names: multivariate DTs [6], oblique DTs [8], or DTs using linear combinations of
the attributes [5], Linear Machine DTs, Neural Decision Trees [12], Perceptron Trees
[13], etc.
We will call them Perceptron Decision Trees (PDTs), as they can be regarded as
binary trees having a simple perceptron associated to each decision node.
Different algorithms for Top-Down induction of PDTs from data have been proposed, based on different principles, [10], [5], [8],
Experimental study of learning by means of PDTs indicates that their performances
are sometimes better than those of traditional decision trees in terms of generalization error, and usually much better in terms of tree-size [8], [6], but on some data
set PDTs can be outperformed by normal DTs.
We investigate an alternative strategy for improving the generalization of these
structures, namely placing maximal margin hyperplanes at the decision nodes. By
use of a novel analysis we are able to demonstrate that improved generalization
bounds can be obtained for this approach. Experiments confirm that such a method
delivers more accurate trees in all tested databases.
2 Generalized Decision Trees
Definition 2.1 Generalized Decision Trees (GDT).
Given a space X and a set of boolean functions F = {f : X → {0, 1}}, the class GDT(F) of Generalized Decision Trees over F are functions which can be implemented using a binary tree where each internal node is labeled with an element of F, and each leaf is labeled with either 1 or 0.
To evaluate a particular tree T on input x ∈ X, all the boolean functions associated to the nodes are assigned the same argument x ∈ X, which is the argument of T(x). The values assumed by them determine a unique path from the root to a leaf: at each internal node the left (respectively right) edge to a child is taken if the output of the function associated to that internal node is 0 (respectively 1). The value of the function at the assignment of an x ∈ X is the value associated to the leaf reached. We say that input x reaches a node of the tree if that node is on the evaluation path for x.
In the following, the nodes are the internal nodes of the binary tree, and the leaves are its external ones.
Examples.
• Given X = {0, 1}^n, a Boolean Decision Tree (BDT) is a GDT over
  F_BDT = {f_i : f_i(x) = x_i, for all x ∈ X}.
• Given X = R^n, a C4.5-like Decision Tree (CDT) is a GDT over
  F_CDT = {f_(i,θ) : f_(i,θ)(x) = 1 ⟺ x_i > θ}.
  This kind of decision tree, defined on a continuous space, is the output of common algorithms like C4.5 and CART, and we will call them - for short - CDTs.
• Given X = R^n, a Perceptron Decision Tree (PDT) is a GDT over
  F_PDT = {w^T x : w ∈ R^(n+1)},
  where we have assumed that the inputs have been augmented with a coordinate of constant value, hence implementing a thresholded perceptron.
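For concreteness, a toy PDT evaluation routine is sketched below; it is our own illustration rather than code from the paper. Internal nodes hold augmented weight vectors and leaves hold 0/1 labels, following the definition above.

    import numpy as np

    class PDTNode:
        def __init__(self, w=None, left=None, right=None, label=None):
            self.w, self.left, self.right, self.label = w, left, right, label

    def pdt_evaluate(node, x):
        """Follow the unique root-to-leaf path: go right when w . [x, 1] > 0, else left."""
        x_aug = np.append(x, 1.0)          # augmented constant coordinate
        while node.label is None:
            node = node.right if node.w @ x_aug > 0 else node.left
        return node.label

    # A depth-1 example: split on x_0 > 0.5 in R^2.
    tree = PDTNode(w=np.array([1.0, 0.0, -0.5]),
                   left=PDTNode(label=0), right=PDTNode(label=1))
    assert pdt_evaluate(tree, np.array([0.9, 0.2])) == 1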
3 Data-dependent SRM
We begin with the definition of the fat-shattering dimension, which was first introduced in [7], and has been used for several problems in learning since [1, 4, 2, 3].
Definition 3.1 Let F be a set of real-valued functions. We say that a set of points X is γ-shattered by F relative to r = (r_x)_{x ∈ X} if there are real numbers r_x indexed by x ∈ X such that for all binary vectors b indexed by X, there is a function f_b ∈ F satisfying

f_b(x) ≥ r_x + γ if b_x = 1, and f_b(x) ≤ r_x − γ otherwise.

The fat-shattering dimension fat_F of the set F is a function from the positive real numbers to the integers which maps a value γ to the size of the largest γ-shattered set, if this is finite, or infinity otherwise.
As an example which will be relevant to the subsequent analysis consider the class:
F_lin = {x ↦ ⟨w, x⟩ + θ : ||w|| = 1}.
We quote the following result from [11].
Corollary 3.2 [11] Let F_lin be restricted to points in a ball of n dimensions of radius R about the origin and with thresholds |θ| ≤ R. Then
fat_{F_lin}(γ) ≤ min{9R²/γ², n + 1} + 1.
The following theorem bounds the generalisation of a classifier in terms of the
fat shattering dimension rather than the usual Vapnik-Chervonenkis or Pseudo
dimension.
Let T_θ denote the threshold function at θ: T_θ : R → {0, 1}, T_θ(α) = 1 iff α > θ. For a class of functions F, T_θ(F) = {T_θ(f) : f ∈ F}.
Theorem 3.3 [11] Consider a real-valued function class F having fat-shattering function bounded above by the function afat : R → N which is continuous from the right. Fix θ ∈ R. If a learner correctly classifies m independently generated examples z with h = T_θ(f) ∈ T_θ(F) such that er_z(h) = 0 and γ = min_i |f(x_i) − θ|, then with confidence 1 − δ the expected error of h is bounded from above by

ε(m, k, δ) = (2/m) ( k log(8em/k) log(32m) + log(8m/δ) ),

where k = afat(γ/8).
The importance of this theorem is that it can be used to explain how a classifier
can give better generalisation than would be predicted by a classical analysis of its
VC dimension. Essentially expanding the margin performs an automatic capacity
control for function classes with small fat shattering dimensions. The theorem shows
that when a large margin is achieved it is as if we were working in a lower VC class.
We should stress that in general the bounds obtained should be better for cases
where a large margin is observed, but that a priori there is no guarantee that such
a margin will occur. Therefore a priori only the classical VC bound can be used. In
view of corresponding lower bounds on the generalisation error in terms of the VC
dimension, the a posteriori bounds depend on a favourable probability distribution
making the actual learning task easier. Hence, the result will only be useful if
the distribution is favourable or at least not adversarial. In this sense the result
is a distribution dependent result, despite not being distribution dependent in the
traditional sense that assumptions about the distribution have had to be made in
its derivation. The benign behaviour of the distribution is automatically estimated
in the learning process.
In order to perform a similar analysis for perceptron decision trees we will consider
the set of margins obtained at each of the nodes, bounding the generalization as a
function of these values.
4 Generalisation analysis of the Tree Class
It turns out that bounding the fat shattering dimension of PDT's viewed as real
function classifiers is difficult. We will therefore do a direct generalization analysis
mimicking the proof of Theorem 3.3 but taking into account the margins at each of
the decision nodes in the tree.
Definition 4.1 Let (X, d) be a (pseudo-) metric space, let A be a subset of X and ε > 0. A set B ⊆ X is an ε-cover for A if, for every a ∈ A, there exists b ∈ B such that d(a, b) < ε. The ε-covering number of A, N_d(ε, A), is the minimal cardinality of an ε-cover for A (if there is no such finite cover then it is defined to be ∞).
We write N(ε, F, x) for the ε-covering number of F with respect to the l_∞ pseudo-metric measuring the maximum discrepancy on the sample x. These numbers are bounded in the following Lemma.
Lemma 4.2 (Alon et al. [1]) Let F be a class of functions X → [0, 1] and P a distribution over X. Choose 0 < ε < 1 and let d = fat_F(ε/4). Then

E( N(ε, F, x) ) ≤ 2 (4m/ε²)^(d log(2em/(dε))),

where the expectation E is taken w.r.t. a sample x ∈ X^m drawn according to P^m.
Corollary 4.3 [11] Let F be a class of functions X → [a, b] and P a distribution over X. Choose 0 < ε < 1 and let d = fat_F(ε/4). Then

E( N(ε, F, x) ) ≤ 2 (4m(b − a)²/ε²)^(d log(2em(b − a)/(dε))),

where the expectation E is over samples x ∈ X^m drawn according to P^m.
We are now in a position to tackle the main lemma which bounds the probability over a double sample that the first half has zero error and the second error greater than an appropriate ε. Here, error is interpreted as being differently classified at the output of the tree. In order to simplify the notation in the following lemma we assume that the decision tree has K nodes. We also denote fat_{F_lin}(γ) by fat(γ) to simplify the notation.
Lemma 4.4 Let T be a perceptron decision tree with K decision nodes with margins γ_1, γ_2, ..., γ_K at the decision nodes. If it has correctly classified m labelled examples generated independently according to the unknown (but fixed) distribution P, then we can bound the following probability to be less than δ:

P^{2m} { xy : ∃ a tree T : T correctly classifies x, fraction of y misclassified > ε(m, K, δ) } < δ,

where ε(m, K, δ) = (2/m)( D log(4m) + log(2^K/δ) ), D = Σ_{i=1}^K k_i log(4em/k_i) and k_i = fat(γ_i/8).
Proof: Using the standard permutation argument, we may fix a sequence xy and
bound the probability under the uniform distribution on swapping permutations
that the sequence satisfies the condition stated. We consider generating minimal
'YI&/2-covers B!y for each value of Ie, where "11& = min{'Y' : fath' /8) :5 Ie}. Suppose
that for node i oCthe tree the margin 'Yi of the hyperplane 'Wi satisfies fath i /8) = ~.
We can therefore find Ii E B!~ whose output values are within 'Yi /2 of 'Wi. We now
consider the tree T' obtained by replacing the node perceptrons 'Wi of T with the
corresponding Ii. This tree performs the same classification function on the first
half of the sample, and the margin remains larger than 'Yi - "1".12 > "11&.12. If a
point in the second half of the sample is incorrectly classified by T it will either
still be incorrectly classified by the adapted tree T' or will at one of the decision
nodes i in T' be closer to the decision boundary than γ_{k_i}/2. The point is thus distinguishable from left hand side points which are both correctly classified and have margin greater than γ_{k_i}/2 at node i. Hence, that point must be kept on the right hand side in order for the condition to be satisfied. Hence, the fraction of permutations that can be allowed for one choice of the functions from the covers is 2^(−εm). We must take the union bound over all choices of the functions from the covers. Using the techniques of [11] the number of these choices is bounded by Corollary 4.3 as follows
∏_{i=1}^K 2 (8m)^(k_i log(4em/k_i)) = 2^K (8m)^D,

where D = Σ_{i=1}^K k_i log(4em/k_i). The value of ε in the lemma statement therefore ensures that the union bound is less than δ.
o
Using the standard lemma due to Vapnik [14, page 168] to bound the error probabilities in terms of the discrepancy on a double sample, combined with Lemma 4.4
gives the following result.
Theorem 4.5 Suppose we are able to classify an m sample of labelled examples using a perceptron decision tree with K nodes and obtaining margins γ_i at node i; then we can bound the generalisation error with probability greater than 1 − δ to be less than

(1/m) ( D log(4m) + log( (8m)^K (2K)^K / ((K + 1) δ) ) ),

where D = Σ_{i=1}^K k_i log(4em/k_i) and k_i = fat(γ_i/8).
Proof: We must bound the probabilities over different architectures of trees and different margins. We simply have to choose the values of ε to ensure that the individual δ's are sufficiently small that the total over all possible choices is less than δ. The details are omitted in this abstract. □
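To make the role of the margins concrete, the short sketch below evaluates D = Σ_i k_i log(4em/k_i) and a bound of the form given in Lemma 4.4 for a hypothetical set of node margins. It is only an illustration: the estimate k_i = fat(γ_i/8) ≈ (8R/γ_i)² uses the standard fat-shattering bound for bounded linear functions on a ball of radius R, and is an assumption of this sketch, not a quotation from the paper.

```python
import math

def fat_linear(gamma, R):
    # Standard fat-shattering estimate for linear functions with bounded-norm
    # weights on inputs inside a ball of radius R; used here only as a
    # stand-in for fat(gamma).
    return max(1.0, (R / gamma) ** 2)

def pdt_bound(margins, m, delta, R=1.0):
    """Evaluate the margin-based generalisation bound for a perceptron
    decision tree with the given node margins on an m-sample."""
    K = len(margins)
    ks = [fat_linear(g / 8.0, R) for g in margins]          # k_i = fat(gamma_i / 8)
    D = sum(k * math.log(4 * math.e * m / k) for k in ks)   # D = sum_i k_i log(4em/k_i)
    # Form of the bound from Lemma 4.4: (1/m)(D log(4m) + log(2^K / delta)).
    return (D * math.log(4 * m) + K * math.log(2) + math.log(1 / delta)) / m

# Larger margins shrink each k_i, hence D, hence the bound.
print(pdt_bound([0.05, 0.1, 0.2], m=5000, delta=0.01))
print(pdt_bound([0.2, 0.4, 0.8], m=5000, delta=0.01))
```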
5 Experiments
The theoretical results obtained in the previous section imply that an algorithm which produces large margin splits should have a better generalization, since increasing the margins in the internal nodes has the effect of decreasing the bound on the test error.
In order to test this strategy, we have performed the following experiment, divided
in two parts: first run a standard perceptron decision tree algorithm and then for
each decision node generate a maximal margin hyperplane implementing the same
dichotomy in place of the decision boundary generated by the algorithm.
Input: Random m sample x with corresponding classification b.
Algorithm: Find a perceptron decision tree T which correctly classifies the sample using a standard algorithm;
    Let K = number of decision nodes of T;
    From tree T create T' by executing the following loop:
        For each decision node i replace the weight vector w_i by the vector w'_i which realises the maximal margin hyperplane agreeing with w_i on the set of inputs reaching node i;
        Let the margin of w'_i on the inputs reaching node i be γ_i;
Output: Classifier T', with bound on the generalisation error in terms of the number of decision nodes K and D = Σ_{i=1}^{K} k_i log(4em/k_i), where k_i = fat(γ_i/8).
Note that the classification of T and T' agree on the sample and hence, that T' is
consistent with the sample.
As a PDT learning algorithm we have used OC1 [8], created by Murthy, Kasif and
Salzberg and freely available over the internet. It is a randomized algorithm, which
performs simulated annealing for learning the perceptrons. The details about the
randomization, the pruning, and the splitting criteria can be found in [8].
The data we have used for the test are 4 of the 5 sets used in the original OC1
paper, which are publicly available in the UCI data repository [16].
The results we have obtained on these data are compatible with the ones reported in the original OC1 paper, the differences being due to different divisions between training and testing sets and their sizes; the absence in our experiments of cross-validation and other techniques to estimate the predictive accuracy of the PDT; and the inherently randomized nature of the algorithm.
The second stage of the experiment involved finding - for each node - the hyperplane which performs the same split as performed by the OC1 tree but with the maximal margin. This can be done by considering the subsample reaching each node as perfectly divided in two parts, and feeding the data, accordingly relabelled, to an algorithm which finds the optimal split in the linearly separable case. The maximal margin hyperplanes are then placed in the decision nodes and the new tree is tested on the same testing set.
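As a rough illustration of this second stage, the sketch below refits one decision node on the subsample that reaches it, relabelled by the side of the original split, using a hard-margin linear separator. The scikit-learn call and the use of a very large C value to approximate a hard margin are implementation choices of this sketch, not part of the original experiment.

```python
import numpy as np
from sklearn.svm import SVC

def refit_node_max_margin(X_node, w, b):
    """Replace one decision node's hyperplane (w, b) by the maximal-margin
    hyperplane that realises the same dichotomy on the inputs X_node that
    reach this node."""
    # Relabel the subsample by the side of the original split.
    y = np.sign(X_node @ w + b).astype(int)
    y[y == 0] = 1
    # The data are separable by construction, so a linear SVM with a very
    # large C recovers (approximately) the maximal-margin hyperplane for
    # the same dichotomy.
    svm = SVC(kernel="linear", C=1e6)
    svm.fit(X_node, y)
    w_new = svm.coef_.ravel()
    b_new = float(svm.intercept_[0])
    # Geometric margin of the new hyperplane on this subsample.
    margin = np.min(np.abs(X_node @ w_new + b_new)) / np.linalg.norm(w_new)
    return w_new, b_new, margin
```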
The data sets we have used are: Wisconsin Breast Cancer, Pima Indians Diabetes, Boston Housing (transformed into a classification problem by thresholding the price at $21,000) and the classical Iris data studied by Fisher (more information about the databases and their authors is in [8]). All the details about sample sizes, number of attributes and results (training and testing accuracy, tree size) are summarised in Table 1.
We were not particularly interested in achieving a high testing accuracy, but rather in observing if improved performance can be obtained by increasing the margin. For this reason we did not try to optimize the performance of the original classifier by using cross-validation, or a convenient training/testing set ratio. The relevant quantity, in this experiment, is the difference in the testing error between a PDT with arbitrary margins and the same tree with optimized margins. This quantity has turned out to be always positive, and to range from 1.7 to 2.8 percent of gain, on test errors which were already very low.
Table 1:

          train   OC1 test   FAT test   #trs   #ts   attrib.   classes   nodes
  CANC    96.53     93.52      95.37     249    108      9        2        1
  IRIS    96.67     96.67      98.33      90     60      4        3        2
  DIAB    89.00     70.48      72.45     209    559      8        2        4
  HOUS    95.90     81.43      84.29     306    140     13        2        7
References
[1] Noga Alon, Shai Ben-David, Nicolo Cesa-Bianchi and David Haussler, "Scale-sensitive Dimensions, Uniform Convergence, and Learnability," in Proceedings of the Conference on Foundations of Computer Science (FOCS), (1993). Also to appear in Journal of the ACM.
[2] Martin Anthony and Peter Bartlett, "Function learning from interpolation", Technical Report, (1994). (An extended abstract appeared in Computational Learning Theory, Proceedings 2nd European Conference, EuroCOLT'95, pages 211-221, ed. Paul Vitanyi, (Lecture Notes in Artificial Intelligence, 904) Springer-Verlag, Berlin, 1995).
[3] Peter L. Bartlett and Philip M. Long, "Prediction, Learning, Uniform Convergence, and Scale-Sensitive Dimensions," Preprint, Department of Systems Engineering, Australian National University, November 1995.
[4] Peter L. Bartlett, Philip M. Long, and Robert C. Williamson, "Fat-shattering and the Learnability of Real-valued Functions," Journal of Computer and System Sciences, 52(3), 434-452, (1996).
[5] Breiman L., Friedman J.H., Olshen R.A., Stone C.J., "Classification and Regression Trees", Wadsworth International Group, Belmont, CA, 1984.
[6] Brodley C.E., Utgoff P.E., Multivariate Decision Trees, Machine Learning 19, pp. 45-77, 1995.
[7] Michael J. Kearns and Robert E. Schapire, "Efficient Distribution-free Learning of Probabilistic Concepts," pages 382-391 in Proceedings of the 31st Symposium on the Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1990.
[8] Murthy S.K., Kasif S., Salzberg S., A System for Induction of Oblique Decision Trees, Journal of Artificial Intelligence Research, 2 (1994), pp. 1-32.
[9] Quinlan J.R., "C4.5: Programs for Machine Learning", Morgan Kaufmann, 1993.
[10] Sankar A., Mammone R.J., Growing and Pruning Neural Tree Networks, IEEE Transactions on Computers, 42:291-299, 1993.
[11] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, Martin Anthony, Structural Risk Minimization over Data-Dependent Hierarchies, NeuroCOLT Technical Report NC-TR-96-053, 1996. (ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports)
[12] J.A. Sirat and J.-P. Nadal, "Neural trees: a new tool for classification", Network, 1, pp. 423-438, 1990.
[13] Utgoff P.E., Perceptron Trees: a Case Study in Hybrid Concept Representations, Connection Science 1 (1989), pp. 377-391.
[14] Vladimir N. Vapnik, Estimation of Dependences Based on Empirical Data, Springer-Verlag, New York, 1982.
[15] Vladimir N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[16] University of California, Irvine, Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html
394 | 136 | 602
AUTOMATIC LOCAL ANNEALING
Jared Leinbach
Department of Psychology
Carnegie-Mellon University
Pittsburgh, PA 15213
ABSTRACT
This research involves a method for finding global maxima in constraint satisfaction networks. It is an annealing process but, unlike most others, requires no annealing schedule. Temperature is instead determined locally by units at each update, and thus all processing is done at the unit level. There are two major practical benefits to processing this way: 1) processing can continue in 'bad' areas of the network, while 'good' areas remain stable, and 2) processing continues in the 'bad' areas as long as the constraints remain poorly satisfied (i.e. it does not stop after some predetermined number of cycles). As a result, this method not only avoids the kludge of requiring an externally determined annealing schedule, but it also finds global maxima more quickly and consistently than externally scheduled systems (a comparison to the Boltzmann machine (Ackley et al., 1985) is made). Finally, implementation of this method is computationally trivial.
INTRODUCTION
A constraint satisfaction network is a network whose units represent hypotheses, between which there are various constraints. These constraints are represented by bidirectional connections between the units. A positive connection weight suggests that if one hypothesis is accepted or rejected, the other one should be also, and a negative connection weight suggests that if one hypothesis is accepted or rejected, the other one should not be. The relative importance of satisfying each constraint is indicated by the absolute size of the corresponding weight. The acceptance or rejection of a hypothesis is indicated by the activation of the corresponding unit. Thus every point in the activation space corresponds to a possible solution to the constraint problem represented by the network. The quality of any solution can be calculated by summing the 'satisfiedness' of all the constraints. The goal is to find a point in the activation space for which the quality is at a maximum.
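As a concrete reading of this quality measure, the sketch below scores a network state by summing, over all connections, the product of the two units' activations (relative to rest) and the connection weight; this mirrors the per-unit goodness definition given later in the paper, but the function and variable names, and the default resting value, are ours.

```python
import numpy as np

def network_quality(a, W, rest=0.5):
    """Quality of a network state: each constraint (connection) contributes
    its weight times how far both endpoint units are from rest.  'a' is the
    activation vector, 'W' a symmetric weight matrix with zero diagonal."""
    d = a - rest
    # Each pair (i, j) is counted once.
    return 0.5 * float(d @ W @ d)
```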
Unfortunately, if units update deterministically (i.e. if they always move toward the state that best satisfies their constraints) there is no means of avoiding local quality maxima in the activation space. This is simply a fundamental problem of all gradient descent procedures. Annealing systems attempt to avoid this problem by always giving units some probability of not moving towards the state that best satisfies their constraints. This
probability is called the 'temperature' of the network. When the temperature is high,
solutions are generally not good, but the network moves easily throughout the activation
space. When the temperature is low, the network is committed to one area of the
activation space, but it is very good at improving its solution within that area. Thus the
annealing analogy is born. The notion is that if you start with the temperature high, and
lower it slowly enough, the network will gradually replace its 'state mobility' with 'state
improvement ability' , in such a way as to guide itself into a globally maximal state (much
as the atoms in slowly annealed metals find optimal bonding structures).
To search for solutions this way requires some means of determining a temperature for the network, at every update. Annealing systems simply use a predetermined schedule to
provide this information. However, there are both practical and theoretical problems with
this approach. The main practical problems are the following: 1) once an annealing schedule comes to an end, all processing is finished regardless of the quality of the current solution, and 2) temperature must be uniform across the network, even though
different parts of the network may merit different temperatures (this is the case any time
one part of the network is in a 'better' area of the activation space than another, which is
a natural condition). The theoretical problem with this approach involves the selection of
annealing schedules. In order to pick an appropriate schedule for a network, one must use some knowledge about what a good solution for that network is. Thus in order to get the system to find a solution, you must already know something about the solution you want it to find. The problem is that one of the most critical elements of the process, the way that the temperature is decreased, is handled by something other than the network itself. Thus the quality of the final solution must depend, at least in part, on that system's
understanding of the problem.
By allowing each unit to control its own temperature during processing, Automatic Local
Annealing avoids this serious kludge. In addition, by resolving the main practical problems, it also ends up finding global maxima more quickly and reliably than
externally controlled systems.
MECHANICS
All units take on continuous activations between a uniform minimum and maximum value. There is also a uniform resting activation for all units (between the minimum and maximum). Units start at random activations, and are updated synchronously at each cycle in one of two possible ways. Either they are updated via any ordinary update rule for which a positive net input (as defined below) increases activation and a negative net input decreases activation, or they are simply reset to their resting activation. There is an update probability function that determines the probability of normal update for a unit based on its temperature (as defined below). It should be noted that once the net input for
a unit has been calculated, finding its temperature is trivial (the quantity (a_i - rest) in the equation for goodness_i can come outside the summation).
Definitions:

    netinput_i    = Σ_j (a_j - rest) × w_ij

    temperature_i = -goodness_i / maxposgdnss_i   if goodness_i ≥ 0
                     goodness_i / maxneggdnss_i   otherwise

    goodness_i    = Σ_j (a_j - rest) × w_ij × (a_i - rest)

    maxposgdnss_i = the largest positive value that goodness_i could be
    maxneggdnss_i = the largest negative value that goodness_i could be
Maxposgdnss and maxneggdnss are constants that can be calculated once for each unit at
the beginning of simulation. They depend only on the weights into the unit, and the
constant maximum, minimum and resting activation values. Temperature is always a
value between 1 and -1, with 1 representing high temperature and -1 low.
SIMULATIONS
The parameters below were used in processing both of the networks that were tested.
The first network processed (Figure 1a) has two local maxima that are extremely close to its two global maxima. This is a very 'difficult' network in the sense that the search for a global maximum must be extremely sensitive to the minute difference between the global maxima and the next-best local maxima. The other network processed (Figure 1b) has
many local maxima, but none of them are especially close to the global maxima. This is
an 'easy' network in the sense that the slow and cautious process that was used, was not
really necessary. A more appropriate set of parameters would have improved
performance on this second network, but it was not used in order to illustrate the relative
generality of the algorithm.
Parameters:

    minimum activation = 0
    maximum activation = 1
    resting activation = 0.5

    normal update rule:
        Δactivation_i = netinput_i × (maxactivation - activation_i) × k   if netinput_i ≥ 0
                        netinput_i × (activation_i - minactivation) × k   otherwise
        with k = 0.6
update probability function: [figure: plot of the probability of a normal update against temperature, with marked values at temperature = -1, -0.79 and 0]
This function defines a process that moves slowly towards a global maximum, moves
away from even good solutions easily, and 'freezes' units that are colder than -0.79.
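A compact way to see how these pieces fit together is the simulation sketch below: it computes each unit's net input, goodness and temperature exactly as defined above, then either applies the normal update rule or resets the unit to rest, with a normal-update probability that rises as temperature falls. The particular shape chosen for the update-probability curve (roughly linear, freezing below -0.79) is our guess at the plotted function, since only a few points of it survive in the figure; maxposgdnss/maxneggdnss are assumed precomputed per unit as the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

MIN_A, MAX_A, REST, K_RATE = 0.0, 1.0, 0.5, 0.6

def update_prob(temperature):
    # Sketch of the update-probability function: certain ("frozen") below
    # -0.79, small near temperature +1.  The exact curve is an assumption.
    if temperature <= -0.79:
        return 1.0
    return max(0.0, 0.45 * (1.0 - temperature))

def ala_cycle(a, W, max_pos, max_neg):
    d = a - REST
    netinput = W @ d                                   # netinput_i
    goodness = d * netinput                            # goodness_i
    temperature = np.where(goodness >= 0,
                           -goodness / max_pos,
                           goodness / max_neg)         # always in [-1, 1]
    new_a = a.copy()
    for i in range(len(a)):
        if rng.random() < update_prob(temperature[i]):
            # Normal update rule.
            if netinput[i] >= 0:
                new_a[i] += netinput[i] * (MAX_A - a[i]) * K_RATE
            else:
                new_a[i] += netinput[i] * (a[i] - MIN_A) * K_RATE
        else:
            new_a[i] = REST                            # reset to rest
    return np.clip(new_a, MIN_A, MAX_A)
```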
RESULTS
The results of running the Automatic Local Annealing process on these two networks (in comparison to a standard Boltzmann Machine's performance) are summarized in Figures 2a and 2b. With Automatic Local Annealing (ALA), the probability of having found a stable global maximum departs from zero fairly soon after processing begins, and increases smoothly up to one. The Boltzmann Machine, instead, makes little 'useful' progress until the end of the annealing schedule, and then quickly moves into a solution which may or may not be a global maximum. In order to get its reliability near that of ALA, the Boltzmann Machine's schedule must be so slow that solutions are found much more slowly than ALA. Conversely, in order to start finding solutions as quickly as ALA, such a short schedule is necessary that the reliability becomes much worse than ALA's. Finally, if one makes a more reasonable comparison to the Boltzmann Machine (either by changing the parameters of the ALA process to maximize its performance on each network, or by using a single annealing schedule with the Boltzmann Machine for both networks), the overall performance advantage for ALA increases substantially.
DISCUSSION
HOW IT WORKS
The characteristics of the approach to a global maximum are determined by the shape of
the update probability function. By modifying this shape, one can control such things as:
how quickly/steadily the network moves towards a global maximum, how easily it moves
away from local maxima, how good a solution must be in order for it to become
completely stable, and so on. The only critical feature of the function is that as temperature decreases the probability of normal update increases. In this way, the colder a unit gets the more steadily it progresses towards an extreme activation value, and the hotter a unit gets the more time it spends near resting activation. From this you get hot
[Figure 1a diagram: network of units with connection weights shown, including 1 and -2.5.]
Figure 1a. A 'Difficult' Network.
Global maxima are: 1) all eight upper units on, with the remaining units off, 2) all eight lower units on with the remaining units off. Next best local maxima are: 1) four upper left and four lower right units on, with the remaining units off, 2) four upper right and four lower left units on, with the remaining units off.
[Figure 1b diagram: Necker cube network with connection weights shown, including 1 and -1.5.]
Figure 1b. An 'Easy' Network.
Necker cube network (McClelland & Rumelhart 1988). Each set of four corresponding
units are connected as shown above. Connections for the other three such sets were
omitted for clarity. The global maxima have all units in one cube on with all units in the
other off.
[Figure 2a plot: probability of having found a stable global maximum versus cycles of processing (0 to 250), for Automatic Local Annealing and for the Boltzmann Machine with a 125 cycle schedule.]
Figure 2a. Performance On A 'Difficult' Network (Figure 1a).
[Figure 2b plot: probability of having found a stable global maximum versus cycles of processing (0 to 100), for Automatic Local Annealing and for the Boltzmann Machine with 30, 20 and 10 cycle schedules.]
Figure 2b. Performance On An 'Easy' Network (Figure 1b).
1 Each line is based on 100 trials. A stable global maximum is one that the network remained in for the rest of the trial.
2 All annealing schedules were the best performing three-leg schedules found.
units that have little effect on movement in the activation space (since they contribute little to any unit's net input), and cold units that compete to control this critical movement.
The cold units 'cool' connected units that are in agreement with them, and 'heat' connected units that are in disagreement (see temperature equation). As the connected agreeing units are cooled, they too begin to cool their connected agreeing units. In this way coldness spreads out, stabilizing sets of units whose hypotheses agree. This spreading is what makes the ALA algorithm work. A unit's decision about its hypothesis can now be felt by units that are only distantly connected, as must be the case if units are to act in accordance with any global criterion (e.g. the overall quality of the states of these networks).
In order to see why global maxima are found, one must consider the network as a whole.
In general, the amount of time spent in any state is proportional to the amount of heat in that state (since heat is directly related to stability). The state(s) containing the least possible heat for a given network will be the most stable. These state(s) will also represent the global maxima (since they have the least total 'dissatisfaction' of constraints). Therefore, given infinite processing time, the most commonly visited states will be the global maxima. More importantly, the 'visitedness' of every state will be proportional to its overall quality (a mathematical description of this has not yet been
developed).
This later characteristic provides good practical benefits when one employs a notion of solution satisficing. This is done by using an update probability function that allows units to 'freeze' (i.e. have normal update probabilities of 1) at temperatures higher than -1 (as was done with the simulations described above). In this condition, states can become completely stable without perfectly satisfying all constraints. As the time of simulation increases, the probability of being in any given state approaches a value proportional to its quality. Thus, if there are any states good enough to be frozen, the chances of not having hit one will decrease with time. The amount of time necessary to satisfice is directly related to the freezing point used. Times as small as 0 (for freezing points > 1) and as large as infinity (for freezing points < -1) can be achieved. This type of time/quality trade-off is extremely useful in many practical applications.
MEASURING PERFORMANCE
While ALA finds global maxima faster and more reliably than Boltzmann Machine
annealing, these are not the only benefits to ALA processing. A number of other elements make it preferable to externally scheduled annealing processes: 1) Various solutions to subparts of problems are found and, at least temporarily, maintained during processing. If one considers constraint satisfaction networks in terms of schema processors, this corresponds nicely to the simultaneous processing of all levels of schemas and subschemas. Subschemas with obvious solutions get filled in quickly, even when the higher level schemas have still not found real solutions. While these initial sub-solutions may not end up as part of the final solution, their appearance during
processing can still be quite useful in some settings. 2) ALA is much more biologically feasible than externally scheduled systems. Not only can units function on their own (without the use of an intelligent external processor), but the paths traveled through the activation space (as described by the schema example above) also parallel human processing more closely. 3) ALA processing may lend itself to simple learning algorithms. During processing, units are always acting in close accord with the constraints that are present. At first, distant constraints are ignored in favor of more immediate ones, but regardless the units rarely actually defy any constraints in the network. Thus basic approaches to making weight adjustments, such as continuously increasing weights between units that are in agreement about their hypotheses, and decreasing weights between units that are in disagreement about their hypotheses (Minsky & Papert, 1968), may have new power. This is an area of current research, which would represent an enormous time savings over Boltzmann Machine type learning (Ackley et al., 1985) if it were to be found feasible.
REFERENCES
Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A Learning Algorithm for Boltzmann Machines. Cognitive Science, 9, 147-169.
McClelland, J. L., & Rumelhart, D. E. (1988). Explorations in Parallel Distributed Processing. Cambridge, MA: MIT Press.
Minsky, M., & Papert, S. (1968). Perceptrons. Cambridge, MA: MIT Press.
395 | 1,360 | Silicon Retina with Adaptive Filtering
Properties
Shih-Chii Liu
Computation and Neural Systems
136-93 California Institute of Technology
Pasadena, CA 91125
shih@pcmp.caltech.edu
Abstract
This paper describes a small, compact circuit that captures the
temporal and adaptation properties both of the photoreceptor and
of the laminar layers of the fly. This circuit uses only six transistors and two capacitors. It is operated in the subthreshold domain.
The circuit maintains a high transient gain by using adaptation to
the background intensity as a form of gain control. The adaptation time constant of the circuit can be controlled via an external
bias. Its temporal filtering properties change with the background
intensity or signal-to-noise conditions. The frequency response of
the circuit shows that in the frequency range of 1 to 100 Hz, the
circuit response goes from highpass filtering under high light levels
to lowpass filtering under low light levels (i.e., when the signal-to-noise ratio is low). A chip with 20x20 pixels has been fabricated in 1.2 μm ORBIT CMOS nwell technology.
1 BACKGROUND
The first two layers in the fly visual system are the retina layer and the laminar
layer. The photoreceptors in the retina synapse onto the monopolar cells in the
laminar layer. The photoreceptors adapt to the background intensity, and use this
adaptation as a form of gain control in maintaining a high response to transient
signals. The laminar layer performs bandpass filtering under high background intensities, and reverts to low pass filtering in the case of low background intensities
where the signal-to-noise (SIN) ratio is low. This adaptive filtering response in the
temporal domain is analogous to the spatial center-surround response of the bipolar
cells in the vertebrate retina.
Figure 1: Circuit diagram of the retino-laminar circuit. The feedback consists of a
resistor implemented by a pFET transistor, Q1. The conductance of the resistor is
controlled by the external bias, Vm .
[Figure 2 schematic: small-signal model with input current iin, nodes Vr and Vi, conductance ga, and capacitances Cl, Cd and Cr.]
Figure 2: Small signal model of the circuit shown in Figure 1. Cr is the parasitic
capacitance at the node, Vr .
The Delbriick silicon receptor circuit (Delbriick, 1994) modeled closely the step responses and the adaptation responses of the biological receptors. By including two
additional transistors, the retino-laminar (RL) circuit described here captures the properties of both the photoreceptor layer (i.e., the adaptation properties and phototransduction) and the cells in the laminar layer (i.e., the adaptive filtering).
The time constant of the circuit is controllable via an external bias, and the adaptation behavior of the circuit over different background intensities is more symmetrical
than that of Delbriick's photoreceptor circuit.
2
CIRCUIT DESCRIPTION
The RL circuit which has the basic form of Delbriick's receptor circuit is shown
in Figure 1. I have replaced the adaptive element in his receptor circuit by a
nonlinear resistor consisting of a pFET transistor, Q1. The implementation of a
floating, voltage-controlled resistor has been described earlier by (Banu and Tsividis,
1982). The bias for Q1, Vb, is generated by Q3 and Q4. The conductance of Q1
is determined by the output voltage, Vi, and an external bias, Vm . We give a brief
description of the circuit operation here; details are described in (Delbriick, 1994).
The receptor node, Vr , is clamped to the voltage needed to sink the current sourced
[Figure 3 plot: gain versus frequency (Hz).]
Figure 3: Frequency plot of the RL circuit over five decades of background intensity.
The number next to each curve corresponds to the log intensity of the mean value;
0 log corresponds to the intensity of a red LED. The plot shows that, in the range
of 1 to 100 Hz, the circuit is a bandpass filter at high light levels, and reduces to a
lowpass filter at low light levels.
by Q6, which is biased by an external voltage, Vu. Changes in the photocurrent are amplified by the transistors, Q2 and Q6, resulting in a change in Vi. This change in Vi is capacitively coupled through the capacitive divider, consisting of Cl and Cd, into Vp1, so that Q5 supplies the extra increase in photocurrent.
The feedback transistor, Q5, is operated in subthreshold so that Vr and Vi are logarithmic in the photocurrent. A large change in the photocurrent resulting from a change in the background intensity leads to a large change in the circuit output, Vi. Significant current then flows through Q1, thus charging or discharging Vp1.
3
PROPERTIES OF RL CIRCUIT
The temporal responses and adaptation properties of this circuit are expounded
in the following sections. In Section 3.1, we solve for the transfer function of the
circuit and in Section 3.2, we describe the dependence of the conductance of Ql
on the background intensity. In Sections 3.3 and 3.4, we describe the temporal
responses of this circuit, and compare the adaptation response of RL circuit with
that of Delbriick's circuit.
3.1
TRANSFER FUNCTION
We can solve for the transfer function of the RL circuit in Figure 1 by writing the
KCL equations of the small-signal model shown in Figure 2. The transfer function,
Vi/iin, is given by:

    [Equation (1): the transfer function Vi/iin; the expression is garbled in the source beyond a leading factor involving gm5, and is stated in terms of Aamp and the time constants defined below.]
[Figure 4 plot: step responses versus time (sec).]
Figure 4: Temporal responses of the circuit over five decades of background intensity. The input stimulus is a square-wave-modulated red LED of contrast 0.15. The
circuit acts as a high pass filter (that is, a differentiator) at high intensities, and as
a lowpass filter as the intensity drops.
where Aamp = gm2/gd, gm is the transconductance, and gd is the output conductance of a transistor. We define the time constants τl, τr, and τld as follows:

    τl = Cl/gm2,   τr = Cr/gm2,   τld = Cd/gm5,

where ga is the output conductance of Q1 and Cr is the parasitic capacitance at the node, Vr.
The frequency response curves in Figure 3 are measured from the fabricated circuit
over five decades of background intensity. We obtain the curves by using a sinewave-modulated red LED source. The number next to each curve is the log intensity
of the mean value; 0 log is the intensity of a red LED. We obtain the remaining
curves by interposing neutral density filters between the LED source and the chip.
Figure 3 shows that, in the range of 1 to 100 Hz, the circuit is a bandpass filter
at high light levels, and reduces to a lowpass filter at low light levels. For each
frequency curve, the gain is flat in the middle, and is given by Ad = (Cl + Cd)/Cd. The
cutoff frequencies change with the background intensity; this change is analyzed in
Section 3.2.
3.2
DEPENDENCE OF CIRCUIT'S TIME CONSTANT ON
BACKGROUND INTENSITY
The cutoff frequencies of the circuit depend on the conductance, ga, of Ql . Here,
we analyze the dependence of ga on the background intensity. Since Ql is operated
in subthreshold, the conductance depends on the current flowing through Ql' The
I-V relationship for Ql can be written as
    I = 2 Iop e^{(1-κnκp)V} e^{-κnκp ΔV/2} sinh(ΔV/2)                    (2)

      = Ia (Iph/Io)^{(1-κnκp)/κn} e^{(1-2κnκp) ΔV/2} sinh(ΔV/2)          (3)

where V = κn Vp1, ΔV = Vi - Vp1, Iph is the photocurrent, and Ia = 2 Iop (1/Io)^{(1-κnκp)/κn}. The exponential relationship for Equations 2 and 3 is for a FET transistor operating in subthreshold, where Iop is the quiescent leakage current of the transistor, and κ is the effectiveness of the gate in controlling
[Figure 5 plot: circuit output versus time (sec) for the two circuits.]
Figure 5: Plots of adaptation responses of the RL circuit and of Delbriick's circuit.
The input stimulus is a red LED driven by a square wave of contrast 0.18. The
bottom curve corresponding to Delbriick's receptor has been shifted down so that
we can compare the two curves. The adaptation response of the RL circuit is more
symmetrical than that of Delbriick's circuit when the circuit goes from dark to light
conditions and back.
the surface potential of the channel of the transistor. Equation 3 shows that ga
is proportional to the photocurrent, I ph , hence, the background intensity. A more
intuitive way of understanding how ga changes with Iph is that the change in Vb
with a fixed change in the output, Vi, depends on the output level of Vi. The change
in Vb is larger for a higher DC output, Vi, because of the increased body effect at
Q4 due to its higher source voltage. The larger change in Vb leads to an increase in
the conductance, ga.
As Iph increases, ga increases, so the cutoff frequencies shift to the right, as seen in
Figure 3. If we compare both the "0" curve and the "-1" curve, we can see that the
cutoff frequencies are approximately different by a factor of 10. Thus, the exponent of Iph, (1 - κnκp)/κn ≈ 1. Since the κ values change with the current through the transistor, the exponent also changes. The different values of the exponent with
Iph can be seen from the different amounts of shifts in the cutoff frequencies of the
curves.
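The scaling described here can be made concrete with a small numerical sketch: if ga grows roughly linearly with the photocurrent, a first-order cutoff frequency of the form fc = ga/(2πC) shifts up by about a decade per decade of background intensity. The capacitance value and the proportionality constant below are made-up illustration numbers, not measurements from the chip.

```python
import math

C_EFF = 1e-12      # assumed effective capacitance (F); illustrative only
GA_PER_IPH = 1e3   # assumed ga = GA_PER_IPH * Iph (S per A); illustrative only

def cutoff_hz(i_ph, exponent=1.0):
    """First-order cutoff frequency for an adaptive conductance that scales
    as Iph**exponent, with exponent = (1 - kn*kp)/kn ~ 1 as in the text."""
    ga = GA_PER_IPH * (i_ph ** exponent)
    return ga / (2 * math.pi * C_EFF)

# One decade of background intensity -> roughly one decade of cutoff shift.
for i_ph in (1e-12, 1e-11, 1e-10, 1e-9):
    print(f"Iph = {i_ph:.0e} A  ->  fc = {cutoff_hz(i_ph):.2e} Hz")
```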
3.3
TEMPORAL RESPONSES
The adaptive temporal filtering of the circuit over five decades of background intensity can also be observed from the step response of the RL circuit to a squarewave-modulated LED of contrast 0.15, as shown in Figure 4. The data in Figure
4 show that the time constant of the circuit increases as the light level decreases.
The temporal responses observed in these circuits are comparable to the contrast
responses recorded from the LMCs by Juusola and colleagues (Juusola et al., 1995).
3.4
ADAPTATION PROPERTIES
The RL circuit also differs from Delbriick's circuit in that the adaptation time
constant can be set by an external bias. In the Delbriick circuit, the adaptation time
constant is predetermined at the design phase and by process parameters. In Figure
5, we compare the adaptation properties of the RL circuit with those of Delbriick's
[Figure 6 plot: step responses versus time (sec), 0 to 0.6 s, for Vm values between 0.73 and 0.9 V.]
Figure 6: Step response of the RL circuit for different values of Vm . The input
stimulus is a square-wave-modulated red LED source. The value of Vm was varied
from 0.73 to 0.9 V. The curve with the longest time constant of decay corresponds
to the lowest value of Vm .
circuit. The input stimulus consists of a square-wave-modulated LED source with
a contrast of about 0.18. We take the circuit from dark to light conditions, and
back again, by using neutral density filters. The top curve corresponds to the
response from the RL circuit, and the bottom curve corresponds to the response
from the Delbriick circuit. The RL circuit adapts symmetrically, when it goes
from light to dark conditions and back. In contrast, Delbriick's circuit shows an
asymmetrical adaptive behavior; it adapts more slowly when it goes from dark to
light conditions.
The adaptation time constant of the RL circuit depends on the conductance, ga,
and the capacitors, Cl and Cd. From Equation 3, we see that ga is dependent on I,
which is set by the bias, Vm . Hence , we can change the adaptation time constant by
varying Vm . The dependence of the time constant on Vm is further demonstrated
by recording the step response of the circuit to a LED source of contrast 0.15 for
various values of Vm . The output data is shown in Figure 6 for five different values
of Vm . The time constant of the circuit decreases as Vm increases.
A chip consisting of 20x20 pixels was fabricated in 1.2 μm ORBIT CMOS nwell
technology. An input stimulus consisting of a rotating flywheel, with black strips
on a white background, was initially presented to the imager. The flywheel was then
stopped, and the response of the chip was recorded one sec after the motion was
ceased. I repeated the experiment for two adaptation time constants by changing
the value of Vm . Figure 7a shows the output of the chip with the longer adaptation
time constant. We see that the image is still present, whereas the image in Figure
7b has almost faded away; that is, the chip has adapted away the stationary image.
4
CONCLUSIONS
I have described a small circuit that captures the temporal and adaptation properties
of both the photoreceptor and the laminar layers in the fly retina. By adapting to
the background intensity, the circuit maintains a high transient gain. The temporal
behavior of the circuit also changes with the background intensity, such that, at
high SIN ratios, the circuit acts as a highpass filter and, at low SIN ratios, the circuit acts as a lowpass filter to average out the noise. The circuit uses only six
transistors and two capacitors and is compact. The adaptation time constant of the
(a)
(b)
Figure 7: Adaptation results from a two-dimensional array of 20 x 20 pixels. The
output of the array was recorded one sec after cessation of the pattern motion.
The experiment was repeated for two different adaptation time constants. Figure
(a) corresponds to the longer adaptation time constant. The image is still present,
whereas the image in Figure (b) has almost faded away.
circuit can be controlled via an external bias.
Acknowledgments
I thank Bradley A. Minch for discussions of this work, Carver Mead for supporting
this work, and the MOSIS foundation for fabricating this circuit. I also thank Lyn
Dupre for editing this document. This work was supported in part by the Office of
Naval Research, by DARPA, and by the Beckman Foundation.
References
T. Delbriick, "Analog VLSI phototransduction by continuous-time, adaptive, logarithmic photoreceptor circuits," CNS Memo No. 30, California Institute of Technology, Pasadena, CA, 1994.
M. Banu and Y. Tsividis, "Floating voltage-controlled resistors in CMOS technology," Electronics Letters, 18:15, pp. 678-679, 1982.
M. Juusola, R. O. Uusitalo, and M. Weckstrom, "Transfer of graded potentials at the photoreceptor-interneuron synapse," J. of General Physiology, 105, pp. 115-148, 1995.
396 | 1,361 | Recurrent Neural Networks Can Learn to
Implement Symbol-Sensitive Counting
Paul Rodriguez
Janet Wiles
Department of Cognitive Science
University of California, San Diego
La Jolla, CA. 92093
prodrigu@cogsci.ucsd.edu
School of Information Technology and
Department of Psychology
University of Queensland
Brisbane, Queensland 4072 Australia
janetw@it.uq.edu.au
Abstract
Recently researchers have derived formal complexity analysis of analog
computation in the setting of discrete-time dynamical systems. As an
empirical constrast, training recurrent neural networks (RNNs) produces
self-organized systems that are realizations of analog mechanisms. Previous work showed that a RNN can learn to process a simple context-free
language (CFL) by counting. Herein, we extend that work to show that a
RNN can learn a harder CFL, a simple palindrome, by organizing its resources into a symbol-sensitive counting solution, and we provide a dynamical systems analysis which demonstrates how the network: can not
only count, but also copy and store counting infonnation.
1 INTRODUCTION
Several researchers have recently derived results in analog computation theory in the setting of discrete-time dynamical systems(Siegelmann, 1994; Maass & Opren, 1997; Moore,
1996; Casey, 1996). For example, a dynamical recognizer (DR) is a discrete-time continuous dynamical system with a given initial starting point and a finite set of Boolean output
decision functions(pollack. 1991; Moore, 1996; see also Siegelmann, 1993). The dynamical system is composed of a space,~n , an alphabet A, a set of functions (1 per element of A)
that each maps ~n -+ ~n and an accepting region H lie, in ~n. With enough precision and
appropriate differential equations, DRs can use real-valued variables to encode contents of
a stack or counter (for details see Siegelmann, 1994; Moore, 1996).
As an empirical contrast, training recurrent neural networks (RNNs) produces selforganized implementations of analog mechanisms. In previous work we showed that an
RNN can learn to process a simple context-free language, anb n , by organizing its resources
into a counter which is similar to hand-coded dynamical recognizers but also exhibits some
88
P. Rodriguez and J. Wiles
novelties (Wlles & Elman, 1995). In particular, similar to band-coded counters, the network
developed proportional contracting and expanding rates and precision matters - but unexpectedly the network distributed the contraction/expansion axis among hidden units, developed a saddle point to transition between the first half and second half of a string, and used
oscillating dynamics as a way to visit regions of the phase space around the fixed points.
In this work we show that an RNN can implement a solution for a harder CFL, a simple
palindrome language (described below), which requires a symbol-sensitive counting solution. We provide a dynamical systems analysis which demonstrates how the network can
not only count, but also copy and store counting information implicitly in space around a
fixed point.
2 TRAINING an RNN TO PROCESS CFLs
We use a discrete-time RNN that has 1 hidden layer with recurrent connections, and 1 output
layer without recurrent connections so that the accepting regions are determined by the output units. The RNN processes output in Time(n), where n is the length of the input, and it
can recognize languages that are a proper subset of context-sensitive languages and a proper
superset of regular languages(Moore, 1996). Consequently, the RNN we investigate can in
principle embody the computational power needed to process self-recursion.
Furthermore, many connectionist models of language processing have used a prediction
task(e.g. Elman, 1990). Hence, we trained an RNN to be a real-time transducer version
of a dynamical recognizer that predicts the next input in a sequence. Although the network
does not explicitly accept or reject strings, if our network makes all the right predictions
possible then performing the prediction task subsumes the accept task, and in principle one
could simply reject unmatched predictions. We used a threshold criterion of .5 such that if
an output node has a value greater than .5 then the network is considered to be making that
prediction. If the network makes all the right predictions possible for some input string,
then it is correctly processing that string. Although a finite dimensional RNN cannot process CFLs robustly with a margin for error (e.g. Casey, 1996; Maass and Orponen, 1997), we
will show that it can acquire the right kind of trajectory to process the language in a way
that generalizes to longer strings.
2.1
A SIMPLE PALINDROME LANGUAGE
A palindrome language (mirror language) consists of a set of strings, S, such that each
string, s in S, s = ww^r, is a concatenation of a substring, w, and its reverse, w^r. The relevant aspect of this language is that a mechanism cannot use a simple counter to process the
string but must use the functional equivalent of a stack that enables it to match the symbols
in second half of the string with the first half.
We investigated a palindrome language that uses only two symbols for w, two other symbols for w^r, such that the second half of the string is fully predictable once the change in
symbols occurs. The language we used is a simple version restricted such that one symbol is always present and precedes the other, for example: w = a^n b^m, w^r = B^m A^n, e.g.
aaaabbbBBBAAAA (where n > 0, m >= 0). Note that the embedded subsequence
b^m B^m is just the simple CFL used in Wiles & Elman (1995) as mentioned above, hence,
one can reasonably expect that a solution to this task has an embedded counter for the subsequence b... B.
2.2 LINEAR SYSTEM COUNTERS
A basic counter in analog computation theory uses real-valued precision (e.g. Siegelmann,
1994; Moore, 1996). For example, a 1-dimensional up/down counter for two symbols {a, b}
is the system f(z) = .5z + .5 on input a and f(z) = 2z - 1 on input b, where z is the state
variable, a is the input that counts up (push), and b is the input that counts down (pop). A sequence of input
aaabbb has state values (starting at 0): .5, .75, .875, .75, .5, 0.
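For concreteness, a short Python sketch (ours, not the authors') of this counter; it simply iterates the two maps above over the string aaabbb and reproduces the listed state values.

def counter_step(z, symbol):
    # Push on 'a': z <- 0.5*z + 0.5; pop on 'b': z <- 2*z - 1.
    return 0.5 * z + 0.5 if symbol == 'a' else 2.0 * z - 1.0

z = 0.0
trace = []
for symbol in "aaabbb":
    z = counter_step(z, symbol)
    trace.append(round(z, 3))

print(trace)  # [0.5, 0.75, 0.875, 0.75, 0.5, 0.0]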
Similarly, for our transducer version one can develop piecewise linear system equations in
which counting takes place along different dimensions so that different predictions can be
made at appropriate time steps1. The linear system serves as a hypothesis before running any
simulations to understand the implementation issues for an RNN. For example, using the
function f(z) = z for z in [0,1], 0 for z < 0, and 1 for z > 1, then for the simple palindrome
task one can explicitly encode a mechanism to copy and store the count for a across the
b... B subsequences. If we assign dimension-1 to a, dimension-2 to b, dimension-3 to A,
dimension-4 to B, and dimension-5 to store the a value, we can build a system so that for
a sequence aaabbBBAAA we get state variable values: initial, (0,0,0,0,0), (.5,0,0,0,0),
(.75,0,0,0,0), (.875,0,0,0,0), (0,.5,0,0,.875), (0,.75,0,0,.875), (0,0,0,.5,.875), (0,0,0,0,.875),
(0,0,.75,0,0), (0,0,.5,0,0), (0,0,0,0,0). The matrix equations for such a system could be:
X_t = f( W * X_{t-1} + U * I_t ),  with

    W = [ .5   0    0    0    0 ]        U = [ .5   -5   -5   -5 ]
        [  0   .5   0    0    0 ]            [ -5   .5   -5   -5 ]
        [  0    0   2    0    2 ]            [ -5   -5   -1   -5 ]
        [  0    2   0    2    0 ]            [ -5   -5   -5   -1 ]
        [  1    0   0    0    1 ]            [ -5    0   -5    0 ]
where t is time, X_t is the 5-dimensional state vector, and I_t is the 4-dimensional input vector
using a 1-hot encoding of a = [1,0,0,0], b = [0,1,0,0], A = [0,0,1,0], B = [0,0,0,1].
The simple trick is to use the input weights to turn on or off the counting. For example,
the dimension-5 state variable is turned off when input is a or A, but then turned on when
b is input, at which time it copies the last a value and holds on to it. It is then easy to add
Boolean output decision functions that keep predictions linearly separable.
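The following Python sketch (our illustration; the particular weight values are chosen so as to reproduce the example trajectory listed above and are not claimed to be the paper's exact matrices) simulates such a clipped linear system on the string aaabbBBAAA.

import numpy as np

def f(z):
    # Piecewise-linear squashing: identity on [0, 1], clipped outside.
    return np.clip(z, 0.0, 1.0)

# Recurrent weights W (5x5) and input weights U (5x4) for inputs (a, b, A, B).
W = np.array([[0.5, 0.0, 0.0, 0.0, 0.0],   # dim 1: count a up
              [0.0, 0.5, 0.0, 0.0, 0.0],   # dim 2: count b up
              [0.0, 0.0, 2.0, 0.0, 2.0],   # dim 3: count A down (seeded from dim 5)
              [0.0, 2.0, 0.0, 2.0, 0.0],   # dim 4: count B down (seeded from dim 2)
              [1.0, 0.0, 0.0, 0.0, 1.0]])  # dim 5: copy and hold the a count
U = np.array([[0.5, -5.0, -5.0, -5.0],
              [-5.0, 0.5, -5.0, -5.0],
              [-5.0, -5.0, -1.0, -5.0],
              [-5.0, -5.0, -5.0, -1.0],
              [-5.0, 0.0, -5.0, 0.0]])
one_hot = {'a': [1, 0, 0, 0], 'b': [0, 1, 0, 0], 'A': [0, 0, 1, 0], 'B': [0, 0, 0, 1]}

x = np.zeros(5)
for symbol in "aaabbBBAAA":
    x = f(W @ x + U @ np.array(one_hot[symbol], dtype=float))
    print(symbol, x)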
However, other solutions are possible. Rather than store the a count one could keep counting up in dimension-1 for b input and then cancel it by counting down for B input. The
questions that arise are: Can an RNN implement a solution that generalizes? What kind of
store and copy mechanism does an RNN discover?
2.3 TRAINING DATA & RESULTS
The training set consists of 68 possible strings of total length <= 25, which means a maximum of n + m = 12, or 12 symbols in the first half, 12 symbols in the second half, and
1 end symbol2. The complete training set has more short strings so that the network does
not disregard the transitions at the end of the string or at the end of the b... B subsequence.
The network consists of 5 input, 5 hidden, 5 output units, with a bias node. The hidden and
recurrent units are updated in the same time step as the input is presented. The recurrent
layer activations are input on the next time step. The weight updates are performed using
back-propagation through time training with error injected at each time step backward for 24
time steps for each input.
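As a rough sketch of the architecture just described (our code, with arbitrary random weights and an assumed logistic activation; the paper does not give its implementation), the forward pass of a 5-5-5 simple recurrent network with a bias node:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 5, 5
W_ih = rng.normal(scale=0.5, size=(n_hid, n_in + 1))   # input + bias -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))      # recurrent hidden -> hidden
W_ho = rng.normal(scale=0.5, size=(n_out, n_hid + 1))  # hidden + bias -> output

def forward(inputs):
    """inputs: sequence of 5-dim one-hot vectors; returns per-step output activations."""
    h = np.zeros(n_hid)
    outputs = []
    for x in inputs:
        h = sigmoid(W_ih @ np.append(x, 1.0) + W_hh @ h)  # hidden updated in the same step as the input
        y = sigmoid(W_ho @ np.append(h, 1.0))
        outputs.append(y)
    return outputs

# A prediction counts as "made" when an output unit exceeds the 0.5 threshold.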
We found that about half our simulations learn to make predictions for transitions, and most
will have few generalizations on longer strings not seen in the training set. However, no
network learned the complete training set perfectly. The best network was trained for 250K
sweeps (1 per character) with a learning parameter of .001, and 136K more sweeps with
.0001, for a total of about 51K strings. The network made 28 total prediction errors on 28
1 These can be expanded relatively easily to include more symbols, different symbol representations, harder palindrome sequences, or different kinds of decision planes.
2 We removed training strings w = a^n b, for n > 1; it turns out that the network interpolates on
the B-to-A transition for these. Also, we added an end symbol to help reset the system to a consistent
starting value.
different strings in the test set of 68 possible strings seen in training. All of these errors were
isolated to 3 situations: when the number of a input = 2 or 4 the error occurred at the B-to-A transition; when the number of a input = 1, for m > 2, the error occurred as an early
A-to-end transition.
Importantly, the network made correct predictions on many strings longer than seen in training, e.g. strings that have total length > 25 (or n + m > 12). It counted longer strings
of a..As with or without embedded b..Bs, such as: w = a^13; w = a^13 b^2; w = a^n b^7, n = 6, 7 or 8 (recall that w is the first half of the string). It also generalized to count longer subsequences of b..Bs with or without more a..As, such as w = a^5 b^n where n = 8, 9, 10, 11, 12.
The longest string it processed correctly was w = a^9 b^9, which is 12 more characters than
seen during training. The network learned to store the count for a^9 for up to 9 bs, even though
the longest example it had seen in training had only 3 bs - clearly it's doing something right.
2.4 NETWORK EVALUATION
Our evaluation will focus on how the best network counts, copies, and stores information.
We use a mix of graphical analysis and linear system analysis, to piece together a global
picture of how phase space trajectories hold informational states. The linear system analysis
consists of investigating the local behaviour of the Jacobian at fixed points under each input
condition separately. We refer to Fa as the autonomous system under a input condition and
similarly for Fb, FA, and FB.
The most salient aspect to the solution is that the network divides up the processing along
different dimensions in space. By inspection we note that hidden unit 1 (HU1) takes on low
values for the first half of the string and high values for the second half, which helps keep
the processing linearly separable. Therefore in the graphical analysis of the RNN we can
set HU1 to a constant.
First, we can evaluate how the network counts the b..B subsequences. Again, by inspection
the network uses dimensions HU3, HU4. The graphical analysis in Figure 1a and Figure 1b
plots the activity of HU3xHU4. It shows how the network counts the right number of Bs and
then makes a transition to predict the first A. The dominant eigenvalues at the Fb attracting
point and F B saddle point are inversely proportional, which indicates that the contraction
rate to and expansion rate away from the fixed points are inversely matched. The FB system expands out to a periodic-2 fixed point in HU3xHU4 subspace, and the unstable eigenvector corresponding to the one unstable eigenvalue has components only in HU3,HU4. In
Figure 2 we plot the vector field that describes the flow in phase space for the composite
FB^2, which shows the direction where the system contracts along the stable manifold, and
expands on the unstable manifold. One can see that the nature of the transition after the last
b to the first B is to place the state vector close to saddle point for FB so that the number of
expansion steps matches the number of the Fb contraction steps. In this way the b count is
copied over to a different region of phase space.
Now we evaluate how the network counts a ... A, first without any b... B embedding. Since
the output unit for the end symbol has very high weight values for HU2, and the Fa system
has little activity in HU4, we note that a is processed in HU2xHU3xHU5. The trajectories
in Figure 3 show a plot of a^13 A^13 that properly predicts all As as well as the transition at
the end. Furthermore, the dominant eigenvalues for the Fa attracting point and the FA saddle point are nearly inversely proportional and the FA system expands to a periodic-2 fixed
point in 4-dimensions (HUI is constant, whereas the other HU values are periodic). The
Fa eigenvectors have strong-moderate components in dimensions HU2, HU3, HU5; and
likewise in HU2, HU3, HU4, HU5 for FA.
The much harder question is: How does the network maintain the information about the
count of as that were input while it is processing the b.. B subsequence? Inspection shows
that after processing a^n the activation values are not directly copied over any HU values,
nor do they latch any HU values that indicate how many as were processed. Instead, the
last state value after the last a affects the dynamics for b... B in such a way that clusters the
last state value after the last B, but only in HU3xHU4 space (since the other HU dimensions
were unchanging throughout b... B processing).
We show in Figure 4 the clusters for state variables in HU3xHU4 space after processing
a^n b^m B^m, where n = 2, 3, 4, 5 or 6; m = 1..10. The graph shows that the information about
how many a's occurred is "stored" in the HU3xHU4 region where points are clustered. Figure 4 includes the dividing line from Figure 1b for the predict-A region. The network does
not predict the B-to-A transition after a^4 or a^2 because it ends up on the wrong side of the
dividing line of Figure 1b, but the network in these cases still predicts the A-to-end transition. We see that if the network did not oscillate around the FB saddle point while expanding
then the trajectory would end up correctly on one side of the decision plane.
It is important to see that the clusters themselves in Figure 4 are on a contracting trajectory toward a fixed point, which stores information about an increasing number of a's when
matched by an expansion of the FA system. For example, the state values after a^5 AA and
a^5 b^m B^m AA, m = 2..10, have a total Hamming distance for all 5 dimensions that ranged
from .070 to .079. Also, the fixed point for the Fa system, the estimated fixed point for the
composite FB o Fb o Fa, and the saddle point of the FA system are collinear3 in all the
relevant counting dimensions: 2, 3, 4, and 5. In other words, the FA system contracts the
different coordinate points, one for a^n and one for a^n b^m B^m, towards the saddle point to
nearly the same location in phase space, treating those points as having the same information. Unfortunately, this is a contraction occurring through a 4-dimensional subspace which
we cannot easily show graphically.
3 CONCLUSION
In conclusion, we have shown that an RNN can develop a symbol-sensitive counting solution for a simple palindrome. In fact, this solution is not a stack but consists of non-independent counters that use dynamics to visit different regions at appropriate times. Furthermore, an RNN can implement counting solutions for a prediction task that are functionally similar to that prescribed by analog computation theory, but the store and copy functions
rely on distance in phase space to implicitly affect other trajectories.
Acknowledgements
This research was funded by the UCSD, Center for Research in Language Training Grant
to Paul Rodriguez, and a grant from the Australian Research Council to Janet Wiles.
References
Casey, M. (1996) The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction. Neural Computation, 8.
Elman, J.L. (1990) Finding Structure in Time. Cognitive Science, 14, 179-211.
Maass, W., Orponen, P. (1997) On the Effect of Analog Noise in Discrete-Time Analog
Computations. Proceedings Neural Information Processing Systems, 1996.
Moore, C. (1996) Dynamical Recognizers: Real-Time Language Recognition by Analog
Computation. Santa Fe Institute Working Paper 96-05-023.
3 Relative to the saddle point, the vector for one fixed point, multiplied by a constant, had the same
value (to within .05) in each of 4 dimensions as the vector for the other fixed point.
[Figure 1 plot panels: trajectories in the HU3 vs. HU4 plane, with "predict b" and "predict B" decision regions marked.]
Figure 1: 1a) Trajectory of b^10 (after a^5) in HU3xHU4. Each arrow represents a trajectory
step: the base is a state vector at time t, the head is a state at time t + 1. The first b trajectory
step has a base near (.9,.05), which is the previous state from the last a. The output node
b is > .5 above the dividing line. 1b) Trajectory of B^10 (after a^5 b^10) in HU3xHU4. The
output node B is > .5 above the dashed dividing line, and the output node A is > .5 below
the solid dividing line. The system crosses the line on the last B step, hence it predicts the
B-to-A transition.
Pollack, J.B. (1991) The Induction of Dynamical Recognizers. Machine Learning, 7, 227-
252.
Siegelmann, H.(1993) Foundations of Recurrent Neural Networks. PhD. dissertation, unpublished. New Brunswick Rutgers, The State University of New Jersey.
Wiles, J., Elman, J. (1995) Counting Without a Counter: A Case Study in Activation Dynamics. Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates.
[Figure 2 plot: vector field in the HU3 vs. HU4 plane.]
Figure 2: Vector field that describes the flow of FB^2 projected onto HU3xHU4. The graph
shows a saddle point near (.5,.5) and a periodic-2 attracting point.
[Figure 3 plot panels: trajectories in the HU2 vs. HU3 plane, with "predict a" and "predict end" regions marked.]
Figure 3: 3a) Trajectory of a^13 projected onto HU2xHU3. The output node a is > .5 below
and right of the dividing line. The projection for HU2xHU5 is very similar. 3b) Trajectory of
A^13 (after a^13) projected onto HU2xHU3. The output node for the end symbol is > .5 on
the 13th trajectory step left of the solid dividing line, and it is > .5 on the 11th step left
of the dashed dividing line (the hyperplane projection must use values at the appropriate
time steps), hence the system predicts the A-to-end transition. The graph for HU2xHU5
and HU2xHU4 is very similar.
[Figure 4 plot: cluster locations in the HU3 vs. HU4 plane.]
Figure 4: Clusters of last-state values a^n b^m B^m, m > 1, projected onto HU3xHU4. Notice
that for increasing n the system oscillates toward an attracting point of the composite system FB o Fb o Fa.
397 | 1,362 | Multi-time Models for Temporally Abstract
Planning
Doina Precup, Richard S. Sutton
University of Massachusetts
Amherst, MA 01003
{dprecuplrich}@cs.umass.edu
Abstract
Planning and learning at multiple levels of temporal abstraction is a key
problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov
decision processes and reinforcement learning. Current model-based reinforcement learning is based on one-step models that cannot represent
common-sense higher-level actions, such as going to lunch, grasping an
object, or flying to Denver. This paper generalizes prior work on temporally abstract models [Sutton, 1995] and extends it from the prediction
setting to include actions, control, and planning. We introduce a more
general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship
to the Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential advantages in a
grid world planning task.
The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g.,
Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993).
Model-based reinforcement learning offers a possible solution to the problem of integrating
planning with real-time learning and decision-making (Peng & Williams, 1993, Moore &
Atkeson, 1993; Sutton and Barto, 1998). However, current model-based reinforcement
learning is based on one-step models that cannot represent common-sense, higher-level
actions. Modeling such actions requires the ability to handle different, interrelated levels
of temporal abstraction.
A new approach to modeling at multiple time scales was introduced by Sutton (1995) based
on prior work by Singh, Dayan, and Sutton and Pinette. This approach enables models
of the environment at different temporal scales to be intermixed, producing temporally
abstract models. However, that work was concerned only with predicting the environment.
This paper summarizes an extension of the approach including actions and control of the
environment [Precup & Sutton, 1997]. In particular, we generalize the usual notion of a
primitive, one-step action to an abstract action, an arbitrary, closed-loop policy. Whereas
prior work modeled the behavior of the agent-environment system under a single, given
policy, here we learn different models for a set of different policies. For each possible way
of behaving, the agent learns a separate model of what will happen. Then, in planning, it
can choose between these overall policies as well as between primitive actions.
To illustrate the kind of advance we are trying to make, consider the example shown in
Figure 1. This is a standard grid world in which the primitive actions are to move from one
grid cell to a neighboring cell. Imagine the learning agent is repeatedly given new tasks
in the form of new goal locations to travel to as rapidly as possible. If the agent plans at
the level of primitive actions, then its plans will be many actions long and take a relatively
long time to compute. Planning could be much faster if abstract actions could be used to
plan for moving from room to room rather than from cell to cell. For each room, the agent
learns two models for two abstract actions, one for traveling efficiently to each adjacent
room. We do not address in this paper the question of how such abstract actions could be
discovered without help; instead we focus on the mathematical theory of abstract actions.
In particular, we define a very general semantics for them-a property that seems to be
required in order for them to be used in the general kind of planning typically used with
Markov decision processes. At the end of this paper we illustrate the theory in this example
problem, showing how room-to-room abstract actions can substantially speed planning.
[Figure 1 diagram: a grid world with 4 unreliable primitive actions (up, down, left, right; fail 33% of the time) and 8 abstract actions (to each room's 2 hallways).]
Figure 1: Example Task. The natural abstract actions are to move from room to room.
1 Reinforcement Learning (MDP) Framework
In reinforcement learning, a learning agent interacts with an environment at some discrete,
lowest-level time scale t = 0,1,2, ... On each time step, the agent perceives the state of
the environment, s_t, and on that basis chooses a primitive action, a_t. In response to each
primitive action, a_t, the environment produces one step later a numerical reward, r_{t+1},
and a next state, s_{t+1}. The agent's objective is to learn a policy, a mapping from states to
probabilities of taking each action, that maximizes the expected discounted future reward
from each state s:
v^π(s) = E_π{ Σ_{t=0}^∞ γ^t r_{t+1} | s_0 = s },
where γ ∈ [0, 1) is a discount-rate parameter, and E_π{} denotes an expectation implicitly
conditional on the policy π being followed. The quantity v^π(s) is called the value of state s
under policy π, and v^π is called the value function for policy π. The value under the optimal
policy is denoted:

v*(s) = max_π v^π(s).
Planning in reinforcement learning refers to the use of models of the effects of actions to
compute value functions, particularly v*.
D. Precup and R. S. Sutton
]052
We assume that the states are discrete and form a finite set, s_t ∈ {1, 2, ..., m}. This
is viewed as a temporary theoretical convenience; it is not a limitation of the ideas we
present. This assumption allows us to alternatively denote the value functions, v^π and v*,
as column vectors, v^π and v*, each having m components that contain the values of the
m states. In general, for any m-vector, x, we will use the notation x(s) to refer to its sth
component.
The model of an action, a, whether primitive or abstract, has two components. One is an
m × m matrix, P_a, predicting the state that will result from executing the action in each
state. The other is a vector, g_a, predicting the cumulative reward that will be received along
the way. In the case of a primitive action, P_a is the matrix of 1-step transition probabilities
of the environment, times γ:

P_a(s) = γ E{ s_{t+1} | s_t = s, a_t = a },   ∀s

where P_a(s) denotes the sth column of P_a (these are the predictions corresponding to
state s) and s_{t+1} denotes the unit basis m-vector corresponding to s_{t+1}. The reward prediction,
g_a, for a primitive action contains the expected immediate rewards:

g_a(s) = E{ r_{t+1} | s_t = s, a_t = a },   ∀s

For any stochastic policy, π, we can similarly define its 1-step model, g^π, P^π as:

g^π(s) = Σ_a π(s, a) g_a(s)   and   P^π(s) = Σ_a π(s, a) P_a(s),   ∀s    (1)
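A small Python sketch (ours, not from the paper) of how the 1-step model of a policy can be assembled from the primitive-action models according to (1); the array layout is an assumption for illustration.

import numpy as np

def one_step_policy_model(pi, g, P):
    """pi[s, a]: action probabilities; g[a]: m-vector; P[a]: m x m matrix (already times gamma).
    Returns (g_pi, P_pi) with g_pi(s) = sum_a pi(s, a) g_a(s) and P_pi(s) = sum_a pi(s, a) P_a(s)."""
    m, n_actions = pi.shape
    g_pi = np.zeros(m)
    P_pi = np.zeros((m, m))
    for a in range(n_actions):
        g_pi += pi[:, a] * g[a]
        # Column s of P[a] is the prediction for state s; weight it by pi(s, a).
        P_pi += P[a] * pi[:, a][np.newaxis, :]
    return g_pi, P_pi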
2 Suitability for Planning
In conventional planning, one-step models are used to compute value functions via the
Bellman equations for prediction and control. In vector notation, the prediction and control
Bellman equations are

v^π = g^π + P^π v^π    and    v* = max_a { g_a + P_a v* },    (2)
respectively, where the max function is applied component-wise in the control equation.
In planning, these equalities are turned into updates, e.g., v_{k+1} ← g^π + P^π v_k, which
converge to the value functions. Thus, the Bellman equations are usually used to define
and compute value functions given models of actions. Following Sutton (1995), here we
reverse the roles: we take the value functions as given and use the Bellman equations to
define and compute models of new, abstract actions.
In particular, a model can be used in planning only if it is stable and consistent with the
Bellman equations. It is useful to define special terms for consistency with each Bellman
equation. Let g, P denote an arbitrary model (an m-vector and an m × m matrix). Then
this model is said to be valid for policy π [Sutton, 1995] if and only if lim_{k→∞} P^k = 0
and

v^π = g + P v^π.    (3)

Any valid model can be used to compute v^π via the iteration algorithm v_{k+1} ← g + P v_k.
This is a direct sense in which the validity of a model implies that it is suitable for planning.
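As an illustration (a hypothetical two-state example, not from the paper), the fixed-point iteration implied by (3); it assumes the model matrix is oriented so that (Pv)(s) performs the one-step backup for state s.

import numpy as np

def evaluate_valid_model(g, P, tol=1e-10, max_iter=10_000):
    """Iterate v <- g + P v; converges to v_pi for any valid model (g, P) with P^k -> 0."""
    v = np.zeros_like(g)
    for _ in range(max_iter):
        v_new = g + P @ v
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Tiny example: gamma = 0.9, deterministic two-state cycle with rewards (1, 0).
gamma = 0.9
P = gamma * np.array([[0.0, 1.0],
                      [1.0, 0.0]])
g = np.array([1.0, 0.0])
print(evaluate_valid_model(g, P))  # approx [5.26, 4.74]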
We introduce here a parallel definition that expresses consistency with the control Bellman
equation. The model g, P is said to be non-overpromising (NOP) if and only if P has only
positive elements, lim_{k→∞} P^k = 0, and

v* ≥ g + P v*,    (4)

where the ≥ relation holds component-wise. If a NOP model is added inside the max operator in the control Bellman equation (2), this condition ensures that the true value, v*,
will not be exceeded for any state. Thus, any model that does not promise more than it
is achievable (is not overpromising) can serve as an option for planning purposes. The
one-step models of primitive actions are obviously NOP, due to (2). It is similarly straightforward to show that the one-step model of any policy is also NOP.
For some purposes, it is more convenient to write a model g, P as a single (m+1) × (m+1)
matrix:

M = [ 1  0ᵀ ]
    [ g  P  ]
We say that the model M has been put in homogeneous coordinates. The vectors corresponding to the value functions can also be put into homogeneous coordinates, by adding
an initial element that is always 1.
Using this notation, new models can be combined using two basic operations: composition
and averaging. Two models M_1 and M_2 can be composed by matrix multiplication, yielding a new model M = M_1 M_2. A set of models M_i can be averaged, weighted by a set of
diagonal matrices D_i, such that Σ_i D_i = I, to yield a new model M = Σ_i D_i M_i. Sutton
(1995) showed that the set of models that are valid for a policy π is closed under composition and averaging. This enables models acting at different time scales to be mixed together, and the resulting model can still be used to compute v^π. We have proven that the set
of NOP models is also closed under composition and averaging [Precup & Sutton, 1997].
These operations permit a richer variety of combinations for NOP models than they do for
valid models because the NOP models that are combined need not correspond to a particular policy.
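A sketch of the two combination operations in this homogeneous form (our code; the diagonal weight matrices are assumed to be given in the same (m+1)-dimensional coordinates).

import numpy as np

def homogeneous(g, P):
    """Pack a model (g, P) into the (m+1) x (m+1) homogeneous form."""
    m = len(g)
    M = np.zeros((m + 1, m + 1))
    M[0, 0] = 1.0
    M[1:, 0] = g
    M[1:, 1:] = P
    return M

def compose(M1, M2):
    """Composition of two models is matrix multiplication."""
    return M1 @ M2

def average(models, weights):
    """Average models M_i with diagonal weight matrices D_i such that sum_i D_i = I."""
    D_sum = sum(weights)
    assert np.allclose(D_sum, np.eye(D_sum.shape[0]))
    return sum(D @ M for D, M in zip(weights, models))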
3 Multi-time models
The validity and NOP-ness of a model do not imply each other [Precup & Sutton, 1997] .
Nevertheless, we believe a good model should be both valid and NOP. We would like to
describe a class of models that, in some sense, includes all the "interesting" models that are
valid and non-overpromising, and which is expressive enough to include common-sense
notions of abstract action. These goals have led us to the notion of a multi-time model.
The simplest example of a multi-step model, called the n-step model for policy π, predicts
the n-step truncated return and the state n steps into the future (times γ^n). If different n-step models of the same policy are averaged, the result is called a mixture model. Mixtures
are valid and non-overpromising due to the closure properties established in the previous
section. One kind of mixture suggested in [Sutton, 1995] allows an exponential decay of
the weights over time, controlled by a parameter β.
Figure 2: Two hypothetical Markov environments
Are mixture models expressive enough for capturing the properties of the environment?
In order to get some intuition about the expressive power that a model should have, let
us consider the example in figure 2. If we are only interested if state G is attained, then
the two environments presented should be characterized by significantly different models.
However, n-step models, or any linear mixture of n-step models, cannot achieve this goal.
In order to remediate this problem, models should average differently over all the different
trajectories that are possible through the state space. A full β-model [Sutton, 1995] can
distinguish between these two situations. A β-model is a more general form of mixture
model, in which a different β parameter is associated with each state. For a state i, β_i
can be viewed as the probability that the trajectory through the state space ends in state
i. Although β-models seem to have more expressive power, they cannot describe n-step
models. We would like to have a more general form of model, that unifies both classes.
This goal is achieved by accurate multi-time models.
Multi-time models are defined with respect to a policy. Just as the one-step model for a
policy is defined by (1), we define g, P to be an accurate multi-time model if and only if
P(s) = E_π{ Σ_{t=1}^∞ w_t γ^t s_t | s_0 = s },

g(s) = E_π{ Σ_{t=1}^∞ w_t (r_1 + γ r_2 + ... + γ^{t-1} r_t) | s_0 = s }
for some π, for all s, and for some sequence of random weights, w_1, w_2, ..., such that
w_t >= 0 and Σ_{t=1}^∞ w_t = 1. The weights are random variables chosen according to a
distribution that depends only on states visited at or before time t. The weight w_t is a
measure of the importance given to the t-th state of the trajectory. In particular, if w_t = 0,
then state t has no weight associated with it. If w_t = 1 - Σ_{i=1}^{t-1} w_i, all the remaining weight
along the trajectory is given to state t. The effect is that state s_t is the "outcome" state for
the trajectory.
The random weights along each trajectory make this a very general form of model. The
only necessary constraint is that the weights depend only on previously visited states. In
particular, we can choose weighting sequences that generate the types of multi-step models
described in [Sutton, 1995]. If the weighting variables are such that w_n = 1 and w_t = 0, ∀t ≠ n, we obtain n-step models. A weighting sequence of the form w_t = β_t ∏_{i=1}^{t-1} (1 - β_i),
where β_i is the parameter associated to the state visited on time step i, describes a full
β-model.
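As an illustration of the definition (our sketch, with hypothetical env_step/policy interfaces and integer-coded states), the defining expectations can be estimated by Monte Carlo roll-outs of the policy, accumulating γ^t-weighted state indicators and truncated returns under a chosen weighting sequence:

import numpy as np

def estimate_multi_time_model(env_step, policy, weight_fn, m, gamma, s0,
                              n_rollouts=1000, horizon=100):
    """Monte Carlo estimate of the column P(s0) and the entry g(s0) of a multi-time model.
    env_step(s, a) -> (s_next, r); policy(s) -> a; states are integers 0..m-1;
    weight_fn(visited_states) -> weights w_1..w_T summing to at most 1."""
    P_col = np.zeros(m)
    g_val = 0.0
    for _ in range(n_rollouts):
        s, ret, states, returns = s0, 0.0, [], []
        for t in range(1, horizon + 1):
            a = policy(s)
            s, r = env_step(s, a)
            ret += gamma ** (t - 1) * r          # truncated return r_1 + ... + gamma^(t-1) r_t
            states.append((t, s))
            returns.append(ret)
        w = weight_fn([s for _, s in states])    # one weight per visited state
        for (t, s_t), ret_t, w_t in zip(states, returns, w):
            P_col[s_t] += w_t * gamma ** t / n_rollouts
            g_val += w_t * ret_t / n_rollouts
    return g_val, P_col

# Example weighting: an n-step model puts all weight on the n-th state.
def n_step_weights(n):
    def weight_fn(visited_states):
        w = np.zeros(len(visited_states))
        w[n - 1] = 1.0
        return w
    return weight_fn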
The main result for multi-time models is that they satisfy the two criteria defined in the
previous section. Any accurate multi-time model is also NOP and valid for π. The proofs
of these results are too long to include here.
4 Illustrative Example
In order to illustrate the way in which multi-time models can be used in practice, let us
return to the grid world example (Figure 1). The cells of the grid correspond to the states of
the environment. From any state the agent can perform one of four primitive actions, up,
down, left or right. With probability 2/3, the actions cause the agent to move one cell
in the corresponding direction (unless this would take the agent into a wall, in which case
it stays in the same state). With probability 1/3, the agent instead moves in one of the other
three directions (unless this takes it into a wall of course). There is no penalty for bumping
into walls.
In each room, we also defined two abstract actions, for going to each of the adjacent hallways. Each abstract action has a set of input states (the states in the room) and two outcome
states: the target hallway, which corresponds to a successful outcome, and the state adjacent to the other hallway, which corresponds to failure (the agent has wandered out of the
room). Each abstract action is given by its complete model g^π, P^π, where π is the optimal
policy for getting into the target hallway, and the weighting variables w along any trajectory
have the value 1 for the outcome states and 0 everywhere else.
[Figure 3 panels: value-function snapshots for Iteration #1 through Iteration #6.]
Figure 3: Value iteration using primitive and abstract actions
The goal state can have an arbitrary position in any of the rooms, but for this illustration let
us suppose that the goal is two steps down from the right hallway. The value of the goal
state is 1, there are no rewards along the way, and the discounting factor is γ = 0.9. We
performed planning according to the standard value iteration method:

v_{k+1} = max_a { g_a + P_a v_k },

where v_0(s) = 0 for all the states except the goal state (which starts at 1). In one experiment, a ranged only over the primitive actions, in the other it ranged over the set including
both the primitive and the abstract actions.
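A schematic version of this planning loop (hypothetical code, not the authors'; it assumes each primitive or abstract action is available as a model (g_a, P_a) oriented so that (P_a v)(s) is the backup for state s, and it clamps the goal value as in the experiment described above).

import numpy as np

def value_iteration(models, n_states, n_sweeps, goal, goal_value=1.0):
    """models: list of (g, P) pairs, one per primitive or abstract action."""
    v = np.zeros(n_states)
    v[goal] = goal_value
    for _ in range(n_sweeps):
        backups = np.stack([g + P @ v for g, P in models])  # one row per action model
        v = backups.max(axis=0)
        v[goal] = goal_value                                 # keep the goal value clamped
    return v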
When using only primitive actions, the values are propagated one step away on each iteration. After six iterations, for instance, only the states that are at most six steps away from
the goal will be attributed non-zero values. The models of abstract actions produce a significant speed-up in the propagation of values at each step. Figure 3 shows the value function
after each iteration, using both primitive and abstract actions for planning. The area of the
circle drawn in each state is proportional to the value attributed to the state. The first three
iterations are identical with the case when only primitive actions are used. However, once
the values are propagated to the first hallway, all the states in the rooms adjacent to that
hallway will receive values as well. For the states in the room containing the goal, these
values correspond to perfonning the abstract action of getting into the right hallway, and
then following the optimal primitive actions to get to the goal. At this point, a path to the
goal is known from each state in the right half of the environment, even if the path is not
optimal for all states. After six iterations, an optimal policy is known for all the states in
the environment.
The models of the abstract actions do not need to be given a priori, they can be learned
from experience. In fact, the abstract models that were used in this experiment have been
learned during a 1,000,000-step random walk in the environment. The starting point for
learning was represented by the outcome states of each abstract action, along with the
hypothetical utilities U associated with these states. We used Q-learning [Watkins, 1989]
to learn the optimal state-action value function Q_{U,β} associated with each abstract action.
The greedy policy with respect to Q_{U,β} is the policy associated with the abstract action.
At the same time, we used the β-model learning algorithm presented in [Sutton, 1995]
to compute the model corresponding to the policy. The learning algorithm is completely
online and incremental, and its complexity is comparable to that of regular 1-step TD-learning.
Models of abstract actions can be built while an agent is acting in the environment without
any additional effort. Such models can then be used in the planning process as if they would
represent primitive actions, ensuring more efficient learning and planning, especially if the
goal is changing over time.
Acknowledgments
The authors thank Amy McGovern and Andy Fagg for helpful discussions and comments contributing
to this paper. This research was supported in part by NSF grant ECS-9511805 to Andrew G. Barto
and Richard S. Sutton, and by AFOSR grant AFOSR-F49620-96-1-0254 to Andrew G. Barto and
Richard S. Sutton. Doina Precup also acknowledges the support of the Fulbright foundation.
References
Dayan, P. (1993). Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5, 613-624.
Dayan, P. & Hinton, G. E. (1993). Feudal reinforcement learning. In Advances in Neural Information
Processing Systems, volume 5, (pp. 271-278)., San Mateo, CA. Morgan Kaufmann.
Kaelbling, L. P. (1993). Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the Tenth International Conference on Machine Learning ICML'93, (pp. 167-173)., San
Mateo, CA. Morgan Kaufmann.
Korf, R. E. (1985). Learning to Solve Problems by Searching for Macro-Operators. London: Pitman
Publishing Ltd.
Laird, J. E., Rosenbloom, P. S., & Newell, A. (1986). Chunking in SOAR: The anatomy of a general
learning mechanism. Machine Learning, I, 11-46.
Moore, A. W. & Atkeson, C. G. (1993). Prioritized sweeping: Reinforcement learning with less data
and less real time. Machine Learning, 13, 103-130.
Peng, J. & Williams, J. (1993). Efficient learning and planning within the Dyna framework. Adaptive
Behavior, 4, 323-334.
Precup, D. & Sutton, R. S. (1997). Multi-Time models for reinforcement learning. In ICML'97
Workshop: The Role of Models in Reinforcement Learning.
Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. North-Holland, NY: Elsevier.
Singh, S. P. (1992). Scaling reinforcement learning by learning variable temporal resolution models.
In Proceedings of the Ninth International Conference on Machine Learning ICML'92, (pp. 202-207)., San Mateo, CA. Morgan Kaufmann.
Sutton, R. S. (1995). TD models: Modeling the world as a mixture of time scales. In Proceedings
of the Twelfth International Conference on Machine Learning ICML '95, (pp. 531-539)., San Mateo,
CA. Morgan Kaufmann.
Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning. An Introduction. Cambridge, MA:
MIT Press.
Sutton, R. S. & Pinette, B. (1985). The learning of world models by connectionist networks. In
Proceedings of the Seventh Annual Conference of the Cognitive Science Society, (pp. 54-64).
Watkins, C. J. C. H. (1989). Learning with Delayed Rewards. PhD thesis, Cambridge University.
398 | 1,363 | Active Data Clustering
Thomas Hofmann
Center for Biological and Computational Learning, MIT
Cambridge, MA 02139, USA, hofmann@ai.mit.edu
Joachim M. Buhmann
Institut fur Informatik III, Universitat Bonn
RomerstraBe 164, D-53117 Bonn, Germany, jb@cs.uni-bonn.de
Abstract
Active data clustering is a novel technique for clustering of proximity data which utilizes principles from sequential experiment design
in order to interleave data generation and data analysis. The proposed active data sampling strategy is based on the expected value
of information, a concept rooting in statistical decision theory. This
is considered to be an important step towards the analysis of large-scale data sets, because it offers a way to overcome the inherent
data sparseness of proximity data. We present applications to unsupervised texture segmentation in computer vision and information
retrieval in document databases.
1 Introduction
Data clustering is one of the core methods for numerous tasks in pattern recognition,
exploratory data analysis, computer vision, machine learning, data mining, and in
many other related fields. Concerning the data representation it is important to
distinguish between vectorial data and proximity data, cf. [Jain, Dubes, 1988]. In
vectorial data each measurement corresponds to a certain 'feature' evaluated at an
external scale. The elementary measurements of proximity data are, in contrast,
(dis-)similarity values obtained by comparing pairs of entities from a given data set.
Generating proximity data can be advantageous in cases where 'natural' similarity
functions exist, while extracting features and supplying a meaningful vector-space
metric may be difficult. We will illustrate the data generation process for two
exemplary applications: unsupervised segmentation of textured images and data
mining in a document database.
Textured image segmentation deals with the problem of partitioning an image into
regions of homogeneous texture. In the unsupervised case, this has to be achieved on
the basis of texture similarities without prior knowledge about the occurring textures.
Our approach follows the ideas of [Geman et al., 1990] to apply a statistical test to
empirical distributions of image features at different sites. Suppose we decided to
work with the gray-scale representation directly. At every image location P = (x, y)
we consider a local sample of gray-values, e.g., in a squared neighborhood around p.
Then, the dissimilarity between two sites Pi and Pj is measured by the significance of
rejecting the hypothesis that both samples were generated from the same probability
distribution. Given a suitable binning (t_k)_{1 <= k <= R} and histograms f_i, f_j, respectively,
we propose to apply a χ²-test, i.e.,

D_ij = Σ_{k=1}^{R} [ (f_i(t_k) - f̄(t_k))² + (f_j(t_k) - f̄(t_k))² ] / f̄(t_k),   f̄ = (f_i + f_j)/2.    (1)
In fact, our experiments are based on a multi-scale Gabor filter representation instead of the raw data, cf. [Hofmann et al. , 1997] for more details . The main advantage of the similarity-based approach is that it does not reduce the distributional
information, e.g., to some simple first and second order statistics, before comparing
textures. This preserves more information and also avoids the ad hoc specification of a suitable metric like a weighted Euclidean distance on vectors of extracted
moment statistics.
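A small Python sketch (ours) of this dissimilarity; the pooled-histogram form of the χ² statistic below is our reading of (1), not a verbatim transcription.

import numpy as np

def chi_square_dissimilarity(f_i, f_j, eps=1e-12):
    """Chi-square statistic between two normalized gray-value (or Gabor-coefficient) histograms."""
    f_i = np.asarray(f_i, dtype=float)
    f_j = np.asarray(f_j, dtype=float)
    f_mean = 0.5 * (f_i + f_j)                     # pooled estimate under the null hypothesis
    d = (f_i - f_mean) ** 2 / (f_mean + eps) + (f_j - f_mean) ** 2 / (f_mean + eps)
    return d.sum()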
As a second application we consider structuring a database of documents for improved information retrieval. Typical measures of association are based on the
number of shared index terms [Van Rijsbergen, 1979]. For example, a document
is represented by a (sparse) binary vector B, where each entry corresponds to the
occurrence of a certain index term. The dissimilarity can then be defined by the
cosine measure

D_ij = 1 - (B_i · B_j) / (||B_i|| ||B_j||).    (2)
Notice that this measure (like many others) may violate the triangle inequality.
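For the document case, a corresponding sketch (ours; the 1-minus-cosine form is our reading of (2)):

import numpy as np

def cosine_dissimilarity(B_i, B_j, eps=1e-12):
    """1 minus the cosine of the angle between two binary index-term vectors."""
    B_i = np.asarray(B_i, dtype=float)
    B_j = np.asarray(B_j, dtype=float)
    return 1.0 - B_i @ B_j / (np.linalg.norm(B_i) * np.linalg.norm(B_j) + eps)

# Example: two documents sharing 2 of their index terms.
print(cosine_dissimilarity([1, 1, 0, 1], [1, 1, 1, 0]))  # 1 - 2/3 = 0.333...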
2 Clustering Sparse Proximity Data
In spite of potential advantages of similarity-based methods, their major drawback
seems to be the scaling behavior with the number of data: given a dataset with N
entities, the number of potential pairwise comparisons scales with O(N²). Clearly,
it is prohibitive to exhaustively perform or store all dissimilarities for large datasets,
and the crucial problem is how to deal with this unavoidable data sparseness. More
fundamentally, it is already the data generation process which has to solve the
problem of experimental design, by selecting a subset of pairs (i, j) for evaluation.
Obviously, a meaningful selection strategy could greatly profit from any knowledge
about the grouping structure of the data. This observation leads to the concept of
performing a sequential experimental design which interleaves the data clustering
with the data acquisition process. We call this technique active data clustering,
because it actively selects new data, and uses tentative knowledge to estimate the
relevance of missing data. It amounts to inferring from the available data not
only a grouping structure, but also to learn which future data is most relevant for
the clustering problem. This fundamental concept may also be applied to other
unsupervised learning problems suffering from data sparseness.
The first step in deriving a clustering algorithm is the specification of a suitable
objective function . In the case of similarity-based clustering this is not at all a
trivial problem and we have systematically developed an axiomatic approach based
on invariance and robustness principles [Hofmann et al. , 1997] . Here, we can only
give some informal justifications for our choice. Let us introduce indicator functions to represent data partitionings, M_iν being the indicator function for entity o_i
belonging to cluster C_ν. For a given number K of clusters, all Boolean functions
are summarized in terms of an assignment matrix M ∈ {0, 1}^{N×K}. Each row of M
is required to sum to one in order to guarantee a unique cluster membership. To
distinguish between known and unknown dissimilarities, index sets or neighborhoods
N = (N_1, ..., N_N) are introduced. If j ∈ N_i, this means the value of D_ij is available,
otherwise it is not known. For simplicity we assume the dissimilarity measure (and
in turn the neighborhood relation) to be symmetric, although this is not a necessary
requirement. With the help of these definitions the proposed criterion to assess the
quality of a clustering configuration is given by
H(M; D, N) = Σ_{i=1}^{N} Σ_{ν=1}^{K} M_iν d_iν,    (3)
H additively combines contributions d_iν for each entity, where d_iν corresponds to
the average dissimilarity to entities belonging to cluster C_ν. In the sparse data case,
averages are restricted to the fraction of entities with known dissimilarities, i.e., the
subset of entities belonging to C_ν ∩ N_i.
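A direct, unoptimized sketch (ours) of this sparse cost function, with neighborhoods given as index lists:

import numpy as np

def clustering_cost(M, D, neighbors):
    """H(M; D, N) = sum_i sum_nu M[i, nu] * d[i, nu], with cluster averages restricted
    to the known dissimilarities, i.e. to entities in C_nu intersected with N_i."""
    N, K = M.shape
    cost = 0.0
    for i in range(N):
        for nu in range(K):
            if M[i, nu] == 0:
                continue
            members = [j for j in neighbors[i] if M[j, nu] == 1]
            if members:                      # average dissimilarity of o_i to cluster C_nu
                d_inu = np.mean([D[i, j] for j in members])
                cost += d_inu
    return cost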
3 Expected Value of Information
To motivate our active data selection criterion, consider the simplified sequential
problem of inserting a new entity (or object) o_N into a database of N - 1 entities
with a given fixed clustering structure. Thus we consider the decision problem of
optimally assigning the new object to one of the K clusters. If all dissimilarities
between objects o_i and object o_N are known, the optimal assignment only depends
on the average dissimilarities to objects in the different clusters, and hence is given
by

M_Nν = 1  for  ν = arg min_μ d_Nμ,   with   d_Nμ = ( Σ_i M_iμ D_iN ) / ( Σ_i M_iμ ),    (4)
For incomplete data, the total population averages d_Nν are replaced by point estimators d̂_Nν obtained by restricting the sums in (4) to N_N, the neighborhood of o_N.
Let us furthermore assume we want to compute a fixed number L of dissimilarities
before making the terminal decision. If the entities in each cluster are not further
distinguished, we can pick a member at random, once we have decided to sample
from a cluster C_ν. The selection problem hence becomes equivalent to the problem of optimally distributing L measurements among K populations, such that the
risk of making the wrong decision based on the resulting estimates d̂_Nν is minimal.
More formally, this risk is given by R = d_Nα - d_Nα*, where α is the decision based
on the subpopulation estimates {d̂_Nν} and α* is the true optimum.
To model the problem of selecting an optimal experiment we follow the Bayesian
approach developed by Raiffa & Schlaifer [Raiffa, Schlaifer, 1961] and compute the
so-called Expected Value of Sampling Information (EVSI). As a fundamental step
this involves the calculation of distributions for the quantities d_Nν. For reasons
of computational efficiency we are assuming that dissimilarities resulting from a
comparison with an object in cluster C_ν are normally distributed1 with mean d_Nν
and variance σ²_Nν. Since the variances are nuisance parameters the risk function R does not depend on, it suffices to calculate the marginal distribution of
lOther computationally more expensive choices to model within cluster dissimilarities
are skewed distributions like the Gamma-d.istribution.
Figure 1: (a) Gray-scale visualization of the generated proximity matrix (N = 800).
Dark/light gray values correspond to low/high dissimilarities respectively, Dij being
encoded by pixel (i, j). (b) Sampling snapshot for active data clustering after 60000
samples, queried values are depicted in white. (c) Costs evaluated on the complete
data for sequential active and random sampling.
d_Nv. For the class of statistical models we will consider in the sequel, the empirical mean d̂_Nv, the unbiased variance estimator σ̂²_Nv and the sample size m_Nv are a sufficient statistic. Depending on these empirical quantities, the marginal posterior distribution of d_Nv for uninformative priors is a Student t distribution with t = √m_Nv (d_Nv - d̂_Nv)/σ̂_Nv and m_Nv - 1 degrees of freedom. The corresponding density will be denoted by f_v(d_Nv | d̂_Nv, σ̂²_Nv, m_Nv). With the help of the posterior densities f_v we define the Expected Value of Perfect Information (EVPI) after having observed (d̂_Nv, σ̂²_Nv, m_Nv) by
EVPI = ∫_{-∞}^{+∞} ··· ∫_{-∞}^{+∞} max_v {d_Nα - d_Nv} ∏_{v=1}^{K} f_v(d_Nv | d̂_Nv, σ̂²_Nv, m_Nv) dd_N1 ··· dd_NK,    (5)
where α = arg min_v d̂_Nv. The EVPI is the loss one expects to incur by making the decision α based on the incomplete information {d̂_Nv} instead of the optimal decision α*, or, put the other way round, the expected gain we would obtain if α* was revealed to us.
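As a concrete illustration (a sketch of ours, not the authors' code, assuming every m_Nv ≥ 2), the marginal posterior can be sampled and the K-dimensional integral in (5) estimated by plain Monte Carlo:

    import numpy as np

    def sample_dnv(d_hat, var_hat, m, size, rng):
        # d_Nv has a Student t posterior with m - 1 degrees of freedom:
        # t = sqrt(m) * (d_Nv - d_hat) / sigma_hat
        return d_hat + np.sqrt(var_hat / m) * rng.standard_t(m - 1, size)

    def evpi(d_hat, var_hat, m, n_mc=20000, seed=0):
        """Monte Carlo estimate of (5): expected loss of choosing
        alpha = argmin_v d_hat[v] instead of the true minimizer."""
        rng = np.random.default_rng(seed)
        draws = np.column_stack([sample_dnv(d_hat[v], var_hat[v], m[v], n_mc, rng)
                                 for v in range(len(d_hat))])
        alpha = int(np.argmin(d_hat))
        return float(np.mean(draws[:, alpha] - draws.min(axis=1)))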
In the case of experimental design, the main quantity of interest is not the EVPI but the Expected Value of Sampling Information (EVSI). The EVSI quantifies how much gain we expect from additional data. The outcome of additional experiments can only be anticipated by making use of the information which is already available. This is known as preposterior analysis. The linearity of the utility measure implies that it suffices to calculate averages with respect to the preposterior distribution [Raiffa, Schlaifer, 1961, Chapter 5.3]. Drawing m*_v additional samples from the v-th population, and averaging possible outcomes with the (prior) distribution f_v(d_Nv | d̂_Nv, σ̂²_Nv, m_Nv), will not affect the unbiased estimates d̂_Nv, σ̂²_Nv, but only increase the number of samples m_Nv → m_Nv + m*_v. Thus, we can compute the EVSI from (5) by replacing the prior densities with their preposterior counterparts.
To evaluate the K-dimensional integral in (5) or its EVSI variant we apply Monte Carlo techniques, sampling from the Student t densities using Kinderman's
Figure 2: (a) Solution quality for active and random sampling on data generated
from a mixture image of 16 Brodatz textures (N = 1024). (b) Cost trajectories and
segmentation results for an active and random sampling example run (N = 4096).
rejection sampling scheme, to get an empirical estimate of the random variable ψ_α(d_N1, ..., d_NK) = max_v {d_Nα - d_Nv}. Though this enables us in principle to approximate the EVSI of any possible experiment, we cannot efficiently compute it for all possible ways of distributing the L samples among the K populations. In the large
sample limit, however, the EVSI becomes a concave function of the sampling sizes.
This motivates a greedy design procedure of drawing new samples incrementally
one by one.
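A sketch of the resulting greedy loop (ours, not the paper's implementation): following the preposterior argument above, the effect of one extra measurement in population v is approximated by incrementing m_v while leaving the estimates unchanged, and the population with the largest expected reduction of the loss is queried next.

    import numpy as np

    def expected_loss(d_hat, var_hat, m, n_mc=20000, seed=0):
        """Monte Carlo version of (5) for the decision alpha = argmin_v d_hat[v]."""
        rng = np.random.default_rng(seed)
        draws = np.column_stack([d_hat[v] + np.sqrt(var_hat[v] / m[v]) *
                                 rng.standard_t(m[v] - 1, n_mc)
                                 for v in range(len(d_hat))])
        return float(np.mean(draws[:, int(np.argmin(d_hat))] - draws.min(axis=1)))

    def pick_next_population(d_hat, var_hat, m):
        """Choose the population whose additional sample promises the largest gain."""
        base = expected_loss(d_hat, var_hat, m)
        gains = []
        for v in range(len(d_hat)):
            m_plus = m.copy()
            m_plus[v] += 1                     # preposterior: only the count grows
            gains.append(base - expected_loss(d_hat, var_hat, m_plus))
        return int(np.argmax(gains))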
4 Active Data Clustering
So far we have assumed the assignments of all but one entity o_N to be given in advance. This might be realistic in certain on-line applications, but more often we want to simultaneously find assignments for all entities in a dataset. The active data selection procedure hence has to be combined with a recalculation of clustering solutions, because additional data may help us not only to improve our terminal decision, but also with respect to our sampling strategy. A local optimization of H for assignments of a single object o_i can rely on the quantities
g_iv = Σ_{j∈N_i} [1/n_iv^+ + 1/n_jv] M_jv D_ij - Σ_{j∈N_i} 1/(n_jv n_jv^{-i}) Σ_{k∈N_j\{i}} M_jv M_kv D_jk,    (6)
where n_iv = Σ_{j∈N_i} M_jv, n_iv^- = n_iv - M_iv, and n_iv^+ = n_iv^- + 1, by setting
M_iα = 1 ⟺ α = arg min_v g_iv = arg min_v H(M | M_iv = 1), a claim which can
be proved by straightforward algebraic manipulations (cf. [Hofmann et al., 1997]).
This effectively amounts to a cluster readjustment by reclassification of objects. For
additional evidence arising from new dissimilarities, one thus performs local reassignments, e.g., by cycling through all objects in random order, until no assignment
is changing.
To avoid unfavorable local minima one may also introduce a computational temperature T and utilize the {g_iv} for simulated annealing based on the Gibbs sampler [Geman, Geman, 1984], P{M_iα = 1} = exp(-g_iα/T) / Σ_{v=1}^{K} exp(-g_iv/T). Alternatively, Eq. (6) may also serve as the starting point to derive mean-field equations in a deterministic annealing framework, cf. [Hofmann, Buhmann, 1997]. These local
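A compact sketch of this update (ours, not the authors' code): g_iv is simplified here to the average known dissimilarity of object o_i to cluster v instead of the full bookkeeping of Eq. (6), and every cluster is assumed to contain at least one known neighbour of o_i.

    import numpy as np

    def reassignment_sweep(assign, D, neighbors, K, T=None, seed=0):
        """One pass of local reassignment.  With T=None each object moves to its
        cost-minimizing cluster; with a temperature T the new label is drawn from
        the Gibbs distribution P(M_iv = 1) proportional to exp(-g_iv / T)."""
        rng = np.random.default_rng(seed)
        for i in rng.permutation(len(assign)):
            g = np.empty(K)
            for v in range(K):
                vals = [D[i, j] for j in neighbors[i] if assign[j] == v]
                g[v] = np.mean(vals)           # simplified stand-in for g_iv
            if T is None:
                assign[i] = int(np.argmin(g))
            else:
                w = np.exp(-(g - g.min()) / T)
                assign[i] = int(rng.choice(K, p=w / w.sum()))
        return assign

Sweeps are repeated, optionally while lowering T, until no assignment changes.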
[Figure 3 appears here: the 20 document clusters, each shown with its most topical and most typical index terms.]
Figure 3: Clustering solution with 20 clusters for 1584 documents on 'clustering'.
Clusters are characterized by their 5 most topical and 5 most typical index terms.
optimization algorithms are well-suited for an incremental update after new data
has been sampled, as they do not require a complete recalculation from scratch. The
probabilistic reformulation in an annealing framework has the further advantage to
provide assignment probabilities which can be utilized to improve the randomized
'partner' selection procedure. For any of these algorithms we sequentially update
data assignments until a convergence criterion is fulfilled.
5 Results
To illustrate the behavior of the active data selection criterion we have run a series
of repeated experiments on artificial data. For N = 800 the data has been divided into 8 groups of 100 entities. Intra-group dissimilarities have been set to zero,
while inter-group dissimilarities were defined hierarchically. All values have been
corrupted by Gaussian noise. The proximity matrix, the sampling performance,
and a sampling snapshot are depicted in Fig. 1. The sampling exactly performs as
expected: after a short initial phase the active clustering algorithm spends more
samples to disambiguate clusters which possess a higher mean similarity, while less
dissimilarities are queried for pairs of entities belonging to well separated clusters.
For this type of structured data the gain of active sampling increases with the depth
of the hierarchy. The final solution variance is due to local minima. Remarkably, the active sampling strategy not only shows a faster improvement, it also finds on average significantly better solutions. Notice that the sampling has been decomposed into stages, refining the clustering solution after sampling 1000 additional dissimilarities.
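A sketch of one way to generate such data (the exact inter-group values and noise level are not stated in the text, so the constants below are placeholders of ours):

    import numpy as np

    def hierarchical_proximity(n_per_group=100, n_groups=8, noise=0.1, seed=0):
        """Zero intra-group dissimilarity, inter-group dissimilarity growing with
        the distance of the groups in a binary hierarchy, plus Gaussian noise."""
        rng = np.random.default_rng(seed)
        group = np.repeat(np.arange(n_groups), n_per_group)
        level = np.array([[(a ^ b).bit_length() for b in range(n_groups)]
                          for a in range(n_groups)], dtype=float)
        D = level[np.ix_(group, group)] + noise * rng.standard_normal((len(group),) * 2)
        D = np.triu(D, 1)
        return D + D.T                         # symmetric, zero diagonal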
The results of an experiment on unsupervised texture segmentation are shown in Fig. 2. To obtain a close to optimal solution, the active sampling strategy roughly needs less than 50% of the sample size required by random sampling, for both a resolution of N = 1024 and N = 4096. At a 64 x 64 resolution, for L = 100K, 150K, 200K actively selected samples the random strategy needs on average L = 120K, 300K, 440K samples, respectively, to obtain a comparable solution quality. Obviously, active sampling can only be successful in an intermediate regime: if too little is known, we cannot infer additional information to improve our sampling; if the sample is large enough to reliably detect clusters, there is no need to sample any more. Yet, this intermediate regime significantly increases with K (and N).
Finally, we have clustered 1584 documents containing abstracts of papers with clustering as a title word. For K = 20 clusters² active clustering needed 120000 samples (≈ 10% of the data) to achieve a solution quality within 1% of the asymptotic solution. A random strategy on average required 230000 samples. Fig. 3 shows the achieved clustering solution, summarizing clusters by topical (most frequent) and typical (most characteristic) index terms. The found solution gives a good overview over areas dealing with clusters and clustering³.
6 Conclusion
As we have demonstrated, the concept of expected value of information fits nicely
into an optimization approach to clustering of proximity data, and establishes a
sound foundation of active data clustering in statistical decision theory. On the
medium size data sets used for validation, active clustering achieved a consistently
better performance as compared to random selection. This makes it a promising
technique for automated structure detection and data mining applications in large
data bases. Further work has to address stopping rules and speed-up techniques
to accelerate the evaluation of the selection criterion, as well as a unification with
annealing methods and hierarchical clustering.
Acknowledgments
This work was supported by the Federal Ministry of Education and Science (BMBF) under grant # 01 M 3021 Aj4 and by an M.I.T. Faculty Sponsor's Discretionary Fund.
References
[Geman et al., 1990] Geman, D., Geman, S., Graffigne, C., Dong, P. (1990). Boundary Detection by Constrained Optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7), 609-628.
[Geman, Geman, 1984] Geman, S., Geman, D. (1984). Stochastic Relaxation,
Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6), 721-741.
[Hofmann, Buhmann, 1997] Hofmann, Th., Buhmann, J. M. (1997). Pairwise Data
Clustering by Deterministic Annealing. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 19(1), 1-14.
[Hofmann et al., 1997] Hofmann, Th., Puzicha, J., Buhmann, J.M. 1997. Deterministic Annealing for Unsupervised Texture Segmentation. Pages 213-228 of:
Proceedings of the International Workshop on Energy Minimization Methods in
Computer Vision and Pattern Recognition. Lecture Notes in Computer Science,
vol. 1223.
[Jain, Dubes, 1988] Jain, A. K., Dubes, R. C. (1988). Algorithms for Clustering
Data. Englewood Cliffs, NJ 07632: Prentice Hall.
[Raiffa, Schlaifer, 1961] Raiffa, H., Schlaifer, R. (1961). Applied Statistical Decision
Theory. Cambridge MA: MIT Press.
[Van Rijsbergen, 1979] Van Rijsbergen, C. J. (1979). Information Retrieval. Butterworths, London, Boston.
² The number of clusters was determined by a criterion based on complexity costs.
³ Is it by chance that 'fuzzy' techniques are 'softly' distributed over two clusters?
399 | 1,364 | An Analog VLSI Neural Network for Phase-based Machine Vision
Bertram E. Shi
Department of Electrical and Electronic
Engineering
Hong Kong University of Science and
Technology
Clear Water Bay, Kowloon, Hong Kong
Kwok Fai Hui
Fujitsu Microelectronics Pacific Asia Ltd.
Suite 1015-20, Tower 1
Grand Century Place
193 Prince Edward Road West
Mongkok, Kowloon, Hong Kong.
Abstract
We describe the design, fabrication and test results of an analog CMOS
VLSI neural network prototype chip intended for phase-based machine
vision algorithms. The chip implements an image filtering operation
similar to Gabor-filtering. Because a Gabor filter's output is complex
valued, it can be used to define a phase at every pixel in an image. This
phase can be used in robust algorithms for disparity estimation and binocular stereo vergence control in stereo vision and for image motion
analysis. The chip reported here takes an input image and generates two
outputs at every pixel corresponding to the real and imaginary parts of
the output.
1 INTRODUCTION
Gabor filters are used as preprocessing stages for different tasks in machine vision and
image processing. Their use has been partially motivated by findings that two dimensional
Gabor filters can be used to model receptive fields of orientation selective neurons in the
visual cortex (Daugman, 1980) and three dimensional spatio-temporal Gabor filters can be
used to model biological image motion analysis (Adelson, 1985).
A Gabor filter has a complex valued impulse response which is a complex exponential
modulated by a Gaussian function. In one dimension,
g(x) = (1/(√(2π) σ)) e^{-x²/(2σ²)} e^{jω_x0 x} = (1/(√(2π) σ)) e^{-x²/(2σ²)} (cos(ω_x0 x) + j sin(ω_x0 x)),
where ω_x0 and σ are real constants corresponding to the angular frequency of the complex exponential and the standard deviation of the Gaussian.
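For reference, a short numerical sketch of this impulse response (ours; the parameter values are arbitrary):

    import numpy as np

    def gabor_1d(x, sigma, omega_x0):
        """g(x) = exp(-x**2 / (2*sigma**2)) / (sqrt(2*pi)*sigma) * exp(1j*omega_x0*x)."""
        envelope = np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
        return envelope * np.exp(1j * omega_x0 * x)

    x = np.arange(-20, 21, dtype=float)
    g = gabor_1d(x, sigma=5.0, omega_x0=0.93)   # g.real and g.imag are the two parts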
The phase of the complex valued filter output at a given pixel is related to the location of
edges and other features in the input image near that pixel. Because translating the image
input results in a phase shift in the Gabor output, several authors have developed "phase-based" approaches to disparity estimation (Westelius, 1995) and binocular vergence control (Theimer, 1994) in stereo vision and image motion analysis (Fleet, 1992). Barron et al.'s comparison (Barron, 1992) of algorithms for optical flow estimation indicates that
Fleet's algorithm is the most accurate among those tested.
The remainder of this paper describes the design, fabrication and test results of a prototype
analog VLSI continuous time neural network which implements a complex valued filter
similar to the Gabor.
2 NETWORK AND CIRCUIT ARCHITECTURE
The prototype implements a Cellular Neural Network (CNN) architecture for Gabor-type
image filtering (Shi, 1996). It consists of an array of neurons, called "cells," each corresponding to one pixel in the image to be processed. Each cell has two outputs v_r(n) and v_i(n) which evolve over time according to the equation
d/dt [v_r(n); v_i(n)] = [cos ω_0  -sin ω_0; sin ω_0  cos ω_0] [v_r(n-1); v_i(n-1)] - (2 + λ²) [v_r(n); v_i(n)] + [cos ω_0  sin ω_0; -sin ω_0  cos ω_0] [v_r(n+1); v_i(n+1)] + [λ² u(n); 0],
where λ > 0 and ω_0 ∈ [0, 2π] are real constants and u(n) is the input image. The feedback from neighbouring cells' outputs enables information to be spread globally throughout the array. This network has a unique equilibrium point where the outputs correspond to
the real and imaginary parts of the result of filtering the image with a complex valued discrete space convolution kernel which can be approximated by
g(n) = (λ/2) e^{-λ|n|} e^{jω_0 n}.
The Gaussian function of the Gabor filter has been replaced by (λ/2) e^{-λ|x|}. The larger λ is, the narrower the impulse response and the larger the bandwidth. Figure 1 shows the real (a) and imaginary (b) parts of g(n) for λ = 0.3 and ω_x0 = 0.93. The dotted lines
show the function which modulates the complex exponential.
Figure 1: The Real and Imaginary Parts of the Impulse Response.
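The equilibrium can be checked numerically. The sketch below (ours, not the authors' simulation code) relaxes the state equation with forward-Euler steps on a short array with zero boundary conditions and compares the impulse response with the kernel given above:

    import numpy as np

    def cnn_equilibrium(u, lam, omega, dt=0.05, steps=20000):
        """Relax d/dt v(n) = e^{j*omega} v(n-1) - (2 + lam**2) v(n)
                           + e^{-j*omega} v(n+1) + lam**2 u(n),  with v = v_r + j*v_i."""
        v = np.zeros(len(u), dtype=complex)
        rot = np.exp(1j * omega)
        for _ in range(steps):
            left = np.concatenate(([0.0], v[:-1]))
            right = np.concatenate((v[1:], [0.0]))
            v = v + dt * (rot * left - (2 + lam**2) * v + np.conj(rot) * right + lam**2 * u)
        return v

    N, lam, omega = 31, 0.3, 0.93
    u = np.zeros(N); u[N // 2] = 1.0             # impulse input
    n = np.arange(N) - N // 2
    kernel = 0.5 * lam * np.exp(-lam * np.abs(n)) * np.exp(1j * omega * n)
    print(np.abs(cnn_equilibrium(u, lam, omega) - kernel).max())  # small: the kernel is approximate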
In the circuit implementation of this CNN, each output corresponds to the voltage across a
capacitor. We selected the circuit architecture in Figure 2 because it was the least sensitive
to the effects of random parameter variations among those we considered (Hui, 1996). In
the figure, resistor labels denote conductances and trapezoidal blocks represent transconductance amplifiers labelled by their gains.
Figure 2: Circuit Implementation of One Neuron.
The circuit implementation also gives good intuitive understanding of the CNN's operation. Assume that the input image is an impulse at pixel n. In the circuit, this corresponds to setting the current source λ²u(n) to λ² amperes and setting the remaining current sources to zero. If the gains and conductances were chosen so that λ = 0.3 and ω_x0 = 0.93, then the steady state voltages across the lower capacitors would follow the spatial distribution shown in Figure 1(a), where the center peak occurs at cell n, and the voltages across the upper capacitors would follow the distribution shown in Figure 1(b). To see how this would arise in the circuit, consider the current supplied by the source λ²u(n). Part of the current flows through the conductance G0, pushing the voltage v_r(n) positive. As this voltage increases, the two resistors with conductance G1 cause a smoothing effect which pulls the voltages v_r(n-1) and v_r(n+1) up towards v_r(n). Current also flows through the diagonal resistor with conductance G2, pulling v_i(n+1) positive as well. At the same time, the transconductance amplifier with input v_r(n) draws current from node v_i(n-1), pushing v_i(n-1) negative. The larger G2, the more the voltages at nodes v_i(n-1) and v_i(n+1) are pushed negative and positive. On the other hand, the larger G1, the greater the smoothing between nodes. Thus, the larger the ratio sin ω_x0 / cos ω_x0 = tan ω_x0, the higher the spatial frequency ω_x0 at which the impulse response oscillates.
3 DESIGN OF CMOS BUILDING BLOCKS
This section describes CMOS transistor circuits which implement the transconductance
amplifiers and resistors in Figure 2. It is not necessary to implement the capacitors explicitly. Since the equilibrium point of the CNN is unique, the parasitic capacitances of the circuit are sufficient to ensure the circuit operates correctly.
3.1 TRANSCONDUCTANCE AMPLIFIER
The transconductance amplifiers can be implemented using the circuit shown in Figure 3(a). For V_in ≈ V_GND, the output current is approximately I_out = √(β_n I_SS) V_in, where β_n = μ_n C_ox (W/L) and (W/L) is the width-to-length ratio of the differential pair. The transistors in the current mirrors are assumed to be matched. Using cascoded current mirrors
decreases static errors such as offsets caused by the finite output impedance of the MOS transistors in saturation.
MOS transistors in saturation.
(a)
(b)
Figure 3: The CMOS Circuits Implementing OTAs and Resistors
3.2
RESISTORS
Since the convolution kernels implemented are modulated sine and cosine functions, the
nodal voltages ve(n) and vo(n) can be both positive and negative with respect to the
ground potential. The resistors in the circuit must be floating and exhibit good linearity
and invariance to common mode offsets for voltages around the ground potential. Many
MOS resistor circuits require bias circuitry implemented at every resistor. Since for image
processing tasks, we are interested in maximizing the number of pixels processed, eliminating the need for bias circuitry at each cell will decrease its area and in turn increase the
number of cells implementable within a given area.
Figure 3(b) shows a resistor circuit which satisfies the requirements above. This circuit is
essentially a CMOS transmission gate with adjustable gate voltages. The global bias circuit which generates the gate voltages in the CMOS resistor is shown on the left. The gate
bias voltages V GI and VG2 are distributed to each resistor designed with the same value.
Both transistors Mn and Mp operate in the conduction region where (Enz, 1995)
I Dn =
nn~n(Vpn- VD;VS)(VD_VS)
andIDp =
-np~p(vpp- VD;VS)(VD_VS)
and V Pn and V P are nonlinear functions of the gate and threshold voltages. The sizing of
the NMOS and PMOS transistors can be chosen to decrease the effect of the nonlinearity
730
B. E. Shi and K. F. Hui
due to the (VD + Vs) 12 tenns. The conductance of the resistors can be adjusted using
[bias .
3.3
LIMITATIONS
Due to the physical constraints of the circuit realizations, not all values of A. and ooxo can
be realized. Because the conductance values are non-negative and the OTA gains are nonpositive both G 1 and G 2 must be non-negative. This implies that ooxo must lie between 0
and 1t/2. Because the conductance Go is non-negative, 1..2 ~ - 2 + 2cosooxo + sinooxo '
Figure 4 shows the range of center frequencies ooxo (nonnalized by 1t) and relative bandwidths (2A./oo xo ) achievable by this realization. Not all bandwidths are achievable for
ooxo ~ 2atanO.5 == 0.31t .
I
I
07
0.1
01
Figure 4: The filter parameters implementable by the circuit realization.
4
TEST RESULTS
The circuit architecture and CMOS building blocks described above were fabricated using
the Orbit 2Jlm n-well process available through MOSIS. In this prototype, a 13 cell one
dimensional array was fabricated on a 2.2mm square die. The value of ooxo is fixed at
2 atan 0.5 == 0.927 by transistor sizing. This is the smallest spatial frequency for which all
bandwidths can be obtained. In addition, Go = 1..2 for this value of ooxo . The width of the
impulse response is adjustable by changing the externally supplied bias current shown in
Figure 3(b) controlling Go .
The transconductance amplifiers and resistors are designed to operate between ?300m V .
The currents representing the input image are provided by transconductance amplifiers
internal to the chip which are controlled by externally applied voltages. Outputs are read
off the chip in analog fonn through two common read-out amplifiers: one for the real part
of the impulse response and one for the imaginary part. The outputs of the cells are connected in tum to the inputs of the read-out amplifier through transmission gates controlled
by a shift register. The chip requires ?4 V supplies and dissipates 35m W.
To measure the impulse response of the filters, we applied 150mV to the input corresponding to the middle cell of the array and OV to the remaining inputs. The output voltages
from one chip as a function of cell number are shown as solid lines in Figure 5(a, b). To
correct for DC offsets, we also measured the output voltages when all of the inputs were
grounded, as shown by the dashed lines in the figure. The DC offsets can be separated into
two components: a constant offset common to all cells in the array and a small offset
which varies from cell to cell. For the chip shown, the constant offset is approximately
An Analog VISI Neural Networkfor Phase-based Machine VISion
731
(a)
(b)
(c)
(d)
Figure 5: DC Measurements from the Prototype
l00mV and the small variations have a standard deviation of 2OmY. These results are consistent with the other chips. The constant offset is primarily due to the offset voltage in the
read-out amplifier. The small variations from cell to cell are the result of both parameter
variations from cell to cell and offsets in the transconductance amplifiers of each cell.
By subtracting the DC zero-input offsets at each cell from the outputs, we can observe that
the impulse response closely matches that predicted by the theory. The dotted lines in
Figure 5(c, d) show the offset corrected outputs for the same chip as shown in Figure 5(a,
b). The solid lines shows the theoretical output of the chip using parameters A. and O)xo
chosen to minimize the mean squared error between the theory and the data. The chip was
designed for A. = 0.210 and O)xo = 0.927 . The parameters for the best fit are A. = 0.175
and O)xo = 0.941 . The signal to noise ratio, as defined by the energy in the theoretical
output divided by the energy in the error between theory and data, is 19.3dB. Similar
measurements from two other chips gave SIgnal to noise ratios of 29.OdB (A. = 0.265,
O)xo = 0.928) and 30.6dB (A. = 0.200, O)xo = 0.938).
To measure the speed of the chips, we grounded all of the inputs except that of the middle
cell to which we attached a function generator generating a square wave switching
between ?200mV. The rise times (10% to 90%) at the output of the chip for each cell
were measured and ranged between 340 and 528 nanoseconds. The settling times will not
increase if the number of cells increases since the outputs are computed in parallel. The
settling time is primarily determined by the width of the impUlse response. The wider the
impulse response, the farther information must propagate through the array and the slower
the settling time.
B. E Shi and K. F. Hui
732
5
CONCLUSION
We have described the architecture, design and test results from an analog VLSI prototype
of a neural network which filters images with convolution kernels similar to those of the
Gabor filter. Our future work on chip design includes fabricating chips with larger numbers of cells, two dimensional arrays and chips with integrated photosensors which
acquire and process images simultaneously. We are also investigating the use of these neural network chips in binocular vergence control of an active stereo vision system.
Acknowledgements
This work was supported by the Hong Kong Research Grants Council (RGC) under grant
number HKUST675/95E.
References
E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of motion," J. Optical Society of America A, vol. 2, pp. 284-299, Feb. 1985.
J. Barron, D. S. Fleet, S. S. Beauchemin, and T. A. Burkitt, "Performance of optical flow
techniques," in Proc. ofCVPR, (Champaign, IL), pp. 236-242, IEEE, 1992.
J. G. Daugman, "Two-dimensional spectral analysis of cortical receptive field profiles,"
Vision Research, vol. 20, pp. 847-856, 1980.
C. C. Enz, F. Krummenacher, and E. A. Vittoz, "An analytical MOS transistor model valid in all regions of operation and dedicated to low-voltage and low-current applications," Analog Integrated Circuits and Signal Processing, vol. 8, no. 1, pp. 83-114, Jul. 1995.
D. J. Fleet, Measurement of Image Velocity, Boston, MA: Kluwer Academic Publishers,
1992.
K. F. Hui and B. E. Shi, "Robustness of CNN Implementations for Gabor-type Filtering,"
Proc. of Asia Pacific Conference on Circuits and Systems, pp. 105-108, Nov. 1996.
B. E. Shi, "Gabor-type image filtering with cellular neural networks," Proceedings of the
1996 IEEE International Symposium on Circuits and Systems, vol. 3, pp. 558-561, May
1996.
W. M. Theimer and H. A. Mallot, "Phase-based binocular vergence control and depth
reconstruction using active vision," CVGIP: Image Understanding, vol. 60, no. 3, pp.
343-358, Nov. 1994.
C.-J. Westelius, H. Knutsson, J. Wiklund and C.-F. Westin, "Phase-based disparity estimation," in J. L. Crowley and H. I. Christensen, eds., Vision as Process, chap. 11, pp. 157-178, Springer-Verlag, Berlin, 1995.