Optimal sizes of dendritic and axonal arbors

Dmitri B. Chklovskii
Sloan Center for Theoretical Neurobiology, The Salk Institute, La Jolla, CA 92037
mitya@salk.edu

Abstract

I consider a topographic projection between two neuronal layers with different densities of neurons. Given the number of output neurons connected to each input neuron (divergence or fan-out) and the number of input neurons synapsing on each output neuron (convergence or fan-in), I determine the widths of axonal and dendritic arbors which minimize the total volume of axons and dendrites. My analytical results can be summarized qualitatively in the following rule: neurons of the sparser layer should have arbors wider than those of the denser layer. This agrees with the anatomical data from retinal and cerebellar neurons whose morphology and connectivity are known. The rule may be used to infer the connectivity of neurons from their morphology.

1 Introduction

Understanding brain function requires knowing the connections between neurons. However, experimental studies of inter-neuronal connectivity are difficult and connectivity data are scarce. At the same time, neuroanatomists possess much data on cellular morphology and have powerful techniques to image neuronal shapes. This suggests using morphological data to infer inter-neuronal connections. Such inference must rely on rules which relate the shapes of neurons to their connectivity.

The purpose of this paper is to derive such a rule for a frequently encountered feature of brain organization: a topographic projection. Two layers of neurons are said to form a topographic projection if adjacent neurons of the input layer connect to adjacent neurons of the output layer, Figure 1. As a result, output neurons form an orderly map of the input layer.

I characterize inter-neuronal connectivity for a topographic projection by divergence and convergence factors defined as follows, Figure 1. Divergence, D, of the projection is the number of output neurons which receive connections from an input neuron. Convergence, C, of the projection is the number of input neurons which connect with an output neuron. I assume that these numbers are the same for each neuron in a given layer. Furthermore, each neuron makes the required connections with the nearest neurons of the other layer. In most cases, this completely specifies the wiring diagram.

Figure 1: Wiring diagram of a topographic projection between input (circles) and output (squares) layers of neurons. Divergence, D, is the number of outgoing connections (here, D = 2) from an input neuron (wavy lines). Convergence, C, is the number of incoming connections (here, C = 4) to an output neuron (bold lines). The arrow shows the direction of signal propagation.

Figure 2: Two different arrangements implement the same wiring diagram. (a) Topographic wiring diagram with C = 6 and D = 1. (b) Arrangement with wide dendritic arbors and no axonal arbors (Type I). (c) Arrangement with wide axonal arbors and no dendritic arbors (Type II). Because convergence exceeds divergence, type I has shorter wiring than type II.

A typical topographic wiring diagram, as shown in Figure 1, misses an important biological detail. In real brains, connections between cell bodies are implemented by neuronal processes: axons, which carry nerve pulses away from the cell bodies, and dendrites, which carry signals
towards cell bodies [1]. Therefore, each connection is interrupted by a synapse, which separates an axon of one neuron from a dendrite of another. Both axons and dendrites branch away from cell bodies, forming arbors. In general, a topographic projection with given divergence and convergence may be implemented by axonal and dendritic arbors of different sizes, which depend on the locations of synapses. For example, consider a wiring diagram with D = 1 and C = 6, Figure 2a. Narrow axonal arbors may synapse onto wide dendritic arbors, Figure 2b, or wide axonal arbors may synapse onto narrow dendritic arbors, Figure 2c. I call these arrangements type I and type II, respectively. The question is: which arbor sizes are preferred?

I propose a rule which specifies the sizes of axonal arbors of input neurons and dendritic arbors of output neurons in a topographic projection: a high divergence/convergence ratio favors wide axonal and narrow dendritic arbors, while a low divergence/convergence ratio favors narrow axonal arbors and wide dendritic arbors. Alternatively, this rule may be formulated in terms of the neuronal densities in the two layers: the sparser layer has wider arbors. In the above example, the divergence/convergence (and neuronal density) ratio is 1/6 and, according to the rule, the type I arrangement, Figure 2b, is preferred.

In this paper I derive a quantitative version of this rule from the principle of wiring economy [2, 3, 4, 5, 6], which can be summarized as follows. Space constraints require keeping the brain volume to a minimum. Because wiring (axons and dendrites) takes up a significant fraction of the volume, evolution has probably designed axonal and dendritic arbors in a way that minimizes their total volume. Therefore, we may understand the existing arbor sizes as a result of wiring optimization.

To obtain the rule I formulate and solve a wiring optimization problem. The goal is to find the sizes of axons and dendrites which minimize the total volume of wiring in a topographic wiring diagram for fixed locations of neurons. I specify the wiring diagram with divergence and convergence factors. Throughout most of the paper I assume that the cross-sectional areas of dendrites and axons are constant and equal. Therefore, the problem reduces to wire length minimization. An extension to unequal fiber diameters is given below.

2 Topographic projection in two dimensions

Consider two parallel layers of neurons with densities n1 and n2. The topographic wiring diagram has divergence and convergence factors, D and C, requiring each input neuron to connect with the D nearest output neurons and each output neuron with the C nearest input neurons. Again, the problem is to find the arrangement of arbors which minimizes the total length of axons and dendrites. For different arrangements I compare the wire length per unit area, L. I assume that the two layers are close to each other and include only those parts of the wiring which are parallel to the layers.

I start with a special case where each input neuron connects with only one output neuron (D = 1). Consider an example with C = 16 and neurons arranged on a square grid in each layer, Figure 3a. Two extreme arrangements satisfy the wiring diagram: type I has wide dendritic arbors and no axonal arbors, Figure 3b; type II has wide axonal arbors and no dendritic arbors, Figure 3c. I take the branching angles equal to 120°, an optimal value for constant cross-sectional area [4].
Assuming "point" neurons, the ratio of wire lengths for the type I and type II arrangements is

$L_I / L_{II} \approx 0.57$.  (1)

Thus, the type I arrangement, with wide dendritic arbors, has shorter wire length. This conclusion holds for other convergence values much greater than one, provided D = 1. However, there are other arrangements with non-zero axonal arbors that give the same wire length. One of them is shown in Figure 3d. Degenerate arrangements have axonal arbor width $0 < s_a < 1/\sqrt{n_1}$, where the upper bound is given by the approximate inter-neuronal distance. This means that the optimal arbor size ratio for D = 1 satisfies

$s_d / s_a > \sqrt{n_1 / n_2}$.  (2)

By using the symmetry with respect to the direction of signal propagation, I adapt this result to the C = 1 case. For D > 1, arrangements with wide axonal arbors and narrow dendritic arbors ($0 < s_d < 1/\sqrt{n_2}$) have minimal wire length. The arbor size ratio is

$s_a / s_d > \sqrt{n_2 / n_1}$.  (3)

Next, I consider the case when both divergence and convergence are greater than one. Due to the complexity of the problem I study the limit of large divergence and convergence ($D, C \gg 1$). I find analytically the optimal layout which minimizes the total length of axons and dendrites. Notice that two neurons may form a synapse only if the axonal arbor of the input neuron overlaps with the dendritic arbor of the output neuron in a two-dimensional projection, Figure 4. Thus the goal is to design optimal dendritic and axonal arbors so that each dendritic arbor intersects C axonal arbors and each axonal arbor intersects D dendritic arbors.

To be specific, I consider a wiring diagram with convergence exceeding divergence, C > D (the argument can be readily adapted to the opposite case). I make an assumption, to be verified later, that the dendritic arbor diameter $s_d$ is greater than the axonal one, $s_a$.

Figure 3: Different arrangements implement the same wiring diagram in two dimensions. (a) Topographic wiring diagram with D = 1 and C = 16. (b) Arrangement with wide dendritic arbors and no axonal arbors, Type I. (c) Arrangement with wide axonal arbors and no dendritic arbors, Type II. Because convergence exceeds divergence, type I has shorter wiring than type II. (d) Intermediate arrangement which has the same wire length as type I.

Figure 4: Topographic projection between the layers of input (circles) and output (squares) neurons. For clarity, only a few of the many input and output neurons with overlapping arbors are shown. The number of input neurons is greater than the number of output neurons (C/D > 1). Input neurons have narrow axonal arbors of width $s_a$ connected to the wide but sparse dendritic arbors of width $s_d$. The sparseness of the dendritic arbor is given by $s_a$ because all the input neurons spanned by the dendritic arbor have to be connected.

In this regime each output neuron's dendritic arbor forms a sparse mesh covering the area from which signals are collected, Figure 4. Each axonal arbor in that area must intersect the dendritic arbor mesh to satisfy the wiring diagram. This requires setting the mesh size equal to the axonal arbor diameter. Using this requirement, I express the total length of axonal and dendritic arbors as a function of only the axonal arbor size, $s_a$. Then I find the axonal arbor size which minimizes the total wire length. Details of the calculation will be published elsewhere. Here, I give an intuitive argument for why, in the optimal layout, both the axonal and dendritic sizes are non-zero.
Consider two extreme layouts. In the first one, dendritic arbors have zero width (type II). In this arrangement axons have to reach out to every output neuron. For large convergence, $C \gg 1$, this is a redundant arrangement because of the many parallel axonal wires whose signals are eventually merged. In the second layout, axonal arbors are absent and dendrites have to reach out to every input neuron. Again, because each input neuron connects to many output neurons (large divergence, $D \gg 1$), many dendrites run in parallel, inefficiently carrying the same signal. A non-zero axonal arbor rectifies this inefficiency by carrying signals to several dendrites along one wire.

I find that the optimal ratio of dendritic and axonal arbor diameters equals the square root of the convergence/divergence ratio or, alternatively, the square root of the neuronal density ratio:

$s_d / s_a = \sqrt{C / D} = \sqrt{n_1 / n_2}$.  (4)

Since I considered the case with C > D, this result also justifies the assumption that axonal arbors are smaller than dendritic ones.

For arbitrary axonal and dendritic cross-sectional areas, $h_a$ and $h_d$, the expressions of this section are modified. The wiring economy principle requires minimizing the total volume occupied by axons and dendrites, resulting in the following relation for the optimal arrangement:

$s_d / s_a = \sqrt{(C h_a) / (D h_d)}$.  (5)

Notice that in the optimal arrangement the total axonal volume of the input neurons is equal to the total dendritic volume of the output neurons.

3 Discussion

3.1 Comparison of the theory with anatomical data

This theory predicts a relationship between the con-/divergence ratio and the sizes of axonal and dendritic arbors. I test these predictions on several cases of topographic projection in two dimensions. The predictions depend on whether divergence and convergence are both greater than one or not. Therefore, I consider the two regimes separately.

First, I focus on topographic projections of retinal neurons whose divergence factor is equal or close to one. Because retinal neurons use mostly graded potentials, the difference between axons and dendrites is small and I assume that their cross-sectional areas are equal. The theory predicts that the ratio of dendritic and axonal arbor sizes must be greater than the square root of the input/output neuronal density ratio, $s_d / s_a > (n_1/n_2)^{1/2}$ (Eq. 2). I represent the data on a plot of the relative arbor diameter, $s_d / s_a$, vs. the square root of the relative densities, $(n_1/n_2)^{1/2}$ (Figure 5). Because neurons located in the same layer may belong to different classes, each having different arbor size and connectivity, I plot data from different classes separately. All the data points lie above the $s_d / s_a = (n_1/n_2)^{1/2}$ line, in agreement with the prediction.

Figure 5: Anatomical data for several pairs of retinal cell classes which form topographic projections with D = 1. All the data points fall in the triangle above the $s_d / s_a = (n_1/n_2)^{1/2}$ line, in agreement with the theoretical prediction, Eq. 2. The following data have been used: midget bipolar → midget ganglion [7, 8, 11]; diffuse bipolar → parasol ganglion [7, 9]; rods → rod bipolar [10]; cones → H1 horizontal cells [12]; rods → telodendritic arbors of H1 horizontal cells [13].

Second, I apply the theory to cerebellar neurons whose divergence and convergence are both greater than one. I consider a projection from granule cell axons (parallel fibers) onto Purkinje cells.
The ratio of granule cells to Purkinje cells is 3300 [14], indicating a high convergence/divergence ratio. This predicts a ratio of dendritic and axonal arbor sizes of about 58. This is qualitatively in agreement with the wide dendritic arbors of Purkinje cells and the absence of axonal arbors on parallel fibers. Quantitative comparison is complicated because the projection is not strictly two-dimensional: Purkinje dendrites stacked next to each other add up to a significant third dimension. Naively, given that the dendritic arbor size is about 400 μm, Eq. 4 predicts an axonal arbor of about 7 μm. This is close to the distance between two adjacent Purkinje cell arbors of about 9 μm. Because the length of parallel fibers is much greater than 7 μm, the absence of axonal arbors comes as no surprise. (A short numeric check of these figures appears after the reference list below.)

3.2 Other factors affecting arbor sizes

One may argue that dendrites and axons have functions other than linking cell bodies to synapses and that, therefore, the size of the arbors may be dictated by other considerations. Although I cannot rule out this possibility, the primary function of axons and dendrites is to connect cell bodies to synapses in order to conduct nerve pulses between them. Indeed, if neurons were not connected, more sophisticated effects such as non-linear interactions between different dendritic inputs could not take place. Hence the most basic parameters of axonal and dendritic arbors, such as their size, should follow from considerations of connectivity.

Another possibility is that the size of dendritic arbors is dictated by the surface area needed to arrange all the synapses. This argument does not specify the arbor size, however: a compact dendrite of elaborate shape can have the same surface area as a wide dendritic arbor.

Finally, the agreement of the predictions with the existing anatomical data suggests that the rule is based on correct principles. Further extensive testing of the rule is desirable. Violation of the rule in some system would suggest the presence of other overriding considerations in the design of that system, which would also be interesting.

Acknowledgements

I benefited from helpful discussions with E.M. Callaway, E.J. Chichilnisky, H.J. Karten, C.F. Stevens and T.J. Sejnowski, and especially with A.A. Koulakov. I thank G.D. Brown for suggesting that the size of axonal and dendritic arbors may be related to con-/divergence.

References

[1] Cajal, S.R.y. (1995). Histology of the nervous system, p. 95 (Oxford University Press, New York).
[2] Cajal, S.R.y. ibid., p. 116.
[3] Mitchison, G. (1991). Neuronal branching patterns and the economy of cortical wiring. Proc R Soc Lond B Biol Sci 245, 151-8.
[4] Cherniak, C. (1992). Local optimization of neuron arbors. Biol Cybern 66, 503-510.
[5] Young, M.P. (1992). Objective analysis of the topological organization of the primate cortical visual system. Nature 358, 152-5.
[6] Chklovskii, D.B. & Stevens, C.F. (1999). Wiring the brain optimally, submitted to Nature Neuroscience.
[7] Watanabe, M. & Rodieck, R.W. (1989). Parasol and midget ganglion cells of the primate retina. J Comp Neurol 289, 434-54.
[8] Milam, A.H., Dacey, D.M. & Dizhoor, A.M. (1993). Recoverin immunoreactivity in mammalian cone bipolar cells. Vis Neurosci 10, 1-12.
[9] Grunert, U., Martin, P.R. & Wassle, H. (1994). Immunocytochemical analysis of bipolar cells in the macaque monkey retina. J Comp Neurol 348, 607-27.
[10] Grunert, U. & Martin, P.R. (1991). Rod bipolar cells in the macaque monkey retina: immunoreactivity and connectivity. J Neurosci 11, 2742-58.
[11] Dacey, D.M. (1993). The mosaic of midget ganglion cells in the human retina. J Neurosci 13, 5334-55.
[12] Wassle, H., Boycott, B.B. & Rohrenbeck, J. (1989). Horizontal cells in the monkey retina: cone connections and dendritic network. Eur J Neurosci 1, 421-435.
[13] Rodieck, R.W. (1989). The First Steps in Seeing (Sinauer Associates, Sunderland, MA).
[14] Andersen, B.B., Korbo, L. & Pakkenberg, B. (1992). A quantitative study of the human cerebellum with unbiased stereological techniques. J Comp Neurol 326, 549-60.
[15] Peters, A., Payne, B.R. & Budd, J. (1994). A numerical analysis of the geniculocortical input to striate cortex in the monkey. Cereb Cortex 4, 215-229.
[16] Blasdel, G.G. & Lund, J.S. (1983). Termination of afferent axons in macaque striate cortex. J Neurosci 3, 1389-1413.
[17] Wiser, A.K. & Callaway, E.M. (1996). Contributions of individual layer 6 pyramidal neurons to local circuitry in macaque primary visual cortex. J Neurosci 16, 2724-2739.
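The cerebellar numbers quoted in Section 3.1 can be reproduced in a few lines. The following is a minimal sketch of the arithmetic behind Eq. (4) only; the density ratio and arbor sizes are the values cited in the text, not new data.

```python
import math

# Eq. (4): s_d / s_a = sqrt(C / D) = sqrt(n1 / n2).
# Granule-to-Purkinje cell ratio ~ 3300 [14].
density_ratio = 3300.0
predicted_arbor_ratio = math.sqrt(density_ratio)
print(f"predicted s_d/s_a = {predicted_arbor_ratio:.0f}")   # ~57-58

# Given a Purkinje dendritic arbor of ~400 um, the predicted
# parallel-fiber axonal arbor width follows directly.
s_d_um = 400.0
s_a_um = s_d_um / predicted_arbor_ratio
print(f"predicted axonal arbor ~ {s_a_um:.1f} um")
# ~7 um, comparable to the ~9 um spacing of adjacent Purkinje arbors.
```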
Maximum entropy discrimination

Tommi Jaakkola
MIT AI Lab, 545 Technology Sq., Cambridge, MA 02139
tommi@ai.mit.edu

Marina Meila
MIT AI Lab, 545 Technology Sq., Cambridge, MA 02139
mmp@ai.mit.edu

Tony Jebara
MIT Media Lab, 20 Ames St., Cambridge, MA 02139
jebara@media.mit.edu

Abstract

We present a general framework for discriminative estimation based on the maximum entropy principle and its extensions. All calculations involve distributions over structures and/or parameters rather than specific settings, and reduce to relative entropy projections. This holds even when the data is not separable within the chosen parametric class, in the context of anomaly detection rather than classification, or when the labels in the training set are uncertain or incomplete. Support vector machines are naturally subsumed under this class and we provide several extensions. We are also able to estimate exactly and efficiently discriminative distributions over tree structures of class-conditional models within this framework. Preliminary experimental results are indicative of the potential in these techniques.

1 Introduction

Effective discrimination is essential in many application areas. Employing generative probability models such as mixture models in this context is attractive, but the criterion (e.g., maximum likelihood) used for parameter/structure estimation is suboptimal. Support vector machines (SVMs) are, for example, more robust techniques, as they are specifically designed for discrimination [9]. Our approach towards general discriminative training is based on the well-known maximum entropy principle (e.g., [3]). This enables an appropriate training of both ordinary and structural parameters of the model (cf. [5, 7]). The approach is not limited to probability models and extends, e.g., SVMs.

2 Maximum entropy classification

Consider a two-class classification problem (the extension to multi-class problems is straightforward [4]; the formulation also admits an easy extension to regression problems, analogously to SVMs) where labels $y \in \{-1, 1\}$ are assigned to examples $X \in \mathcal{X}$. Given two generative probability distributions $P(X|\theta_y)$ with parameters $\theta_y$, one for each class, the corresponding decision rule follows the sign of the discriminant function

$\mathcal{L}(X|\Theta) = \log \frac{P(X|\theta_1)}{P(X|\theta_{-1})} + b$,  (1)

where $\Theta = \{\theta_1, \theta_{-1}, b\}$ and $b$ is a bias term, usually expressed as a log-ratio $b = \log \frac{p}{1-p}$. The class-conditional distributions may come from different families of distributions, or the parametric discriminant function could be specified directly without any reference to models. The parameters $\theta_y$ may also include the model structure (see later sections).

The parameters $\Theta = \{\theta_1, \theta_{-1}, b\}$ should be chosen to maximize classification accuracy. We consider here the more general problem of finding a distribution $P(\Theta)$ over parameters and using a convex combination of discriminant functions, i.e., $\int P(\Theta)\,\mathcal{L}(X|\Theta)\,d\Theta$, in the decision rule. The search for the optimal $P(\Theta)$ can be formalized as a maximum entropy (ME) estimation problem. Given a set of training examples $\{X_1, \ldots, X_T\}$ and corresponding labels $\{y_1, \ldots, y_T\}$, we find a distribution $P(\Theta)$ that maximizes the entropy $H(P)$ subject to the classification constraints $\int P(\Theta)\,[\,y_t\,\mathcal{L}(X_t|\Theta)\,]\,d\Theta \ge \gamma$ for all $t$. Here, $\gamma > 0$ specifies a desired classification margin. The solution is unique (if it exists) since $H(P)$ is concave and the linear constraints specify a convex region.
Note that the preference towards high entropy distributions (fewer assumptions) applies only within the admissible set of distributions $\mathcal{P}_\gamma$ consistent with the constraints. See [2] for related work.

We will extend this basic idea in a number of ways. The ME formulation assumes, for example, that the training examples can be separated with the specified margin. We may also have a reason to prefer some parameter values over others and would therefore like to incorporate a prior distribution $P_0(\Theta)$. Other extensions and generalizations will be discussed later in the paper. A more complete formulation is based on the following minimum relative entropy principle:

Definition 1 Let $\{X_t, y_t\}$ be the training examples and labels, $\mathcal{L}(X|\Theta)$ a parametric discriminant function, and $\gamma = [\gamma_1, \ldots, \gamma_T]$ a set of margin variables. Assuming a prior distribution $P_0(\Theta, \gamma)$, we find the discriminative minimum relative entropy (MRE) distribution $P(\Theta, \gamma)$ by minimizing $D(P \| P_0)$ subject to

$\int P(\Theta, \gamma)\,[\,y_t\,\mathcal{L}(X_t|\Theta) - \gamma_t\,]\,d\Theta\,d\gamma \ge 0$  (2)

for all $t$. Here $\hat{y} = \mathrm{sign}\left(\int P(\Theta)\,\mathcal{L}(X|\Theta)\,d\Theta\right)$ specifies the decision rule for any new example $X$.

The margin constraints and the preference towards large margin solutions are encoded in the prior $P_0(\gamma)$. Allowing negative margin values with non-zero probabilities also guarantees that the admissible set $\mathcal{P}$, consisting of distributions $P(\Theta, \gamma)$ consistent with the constraints, is never empty. Even when the examples cannot be separated by any discriminant function in the parametric class (e.g., linear), we get a valid solution. The miss-classification penalties follow from $P_0(\gamma)$ as well.

Figure 1: a) Minimum relative entropy (MRE) projection from the prior distribution to the admissible set. b) The margin prior $P_0(\gamma_t)$. c) The potential terms in the MRE formulation (solid line) and in SVMs (dashed line); $c = 5$ in this case.

Suppose $P_0(\Theta, \gamma) = P_0(\Theta)\,P_0(\gamma)$ and $P_0(\gamma) = \prod_t P_0(\gamma_t)$, where

$P_0(\gamma_t) = c\, e^{-c(1-\gamma_t)}$ for $\gamma_t \le 1$.  (3)

This is shown in Figure 1b. The penalty for margins smaller than $1 - 1/c$ (the prior mean of $\gamma_t$) is given by the relative entropy distance between $P(\gamma)$ and $P_0(\gamma)$. This is similar but not identical to the use of slack variables in support vector machines. Other choices of the prior are discussed in [4]. The MRE solution can be viewed as a relative entropy projection from the prior distribution $P_0(\Theta, \gamma)$ to the admissible set $\mathcal{P}$; Figure 1a illustrates this view. From the point of view of regularization theory, the prior probability $P_0$ specifies the entropic regularization used in this approach.

Theorem 1 The solution to the MRE problem has the following general form [1]:

$P(\Theta, \gamma) = \frac{1}{Z(\lambda)}\, P_0(\Theta, \gamma)\; e^{\sum_t \lambda_t [\,y_t \mathcal{L}(X_t|\Theta) - \gamma_t\,]}$  (4)

where $Z(\lambda)$ is the normalization constant (partition function) and $\lambda = \{\lambda_1, \ldots, \lambda_T\}$ defines a set of non-negative Lagrange multipliers, one for each classification constraint. The multipliers $\lambda$ are set by finding the unique maximum of the following jointly concave objective function: $J(\lambda) = -\log Z(\lambda)$.

The solution is sparse, i.e., only a few Lagrange multipliers will be non-zero. This arises because many of the classification constraints become irrelevant once the constraints are enforced for a small subset of examples. Sparsity leads to immediate but weak generalization guarantees expressed in terms of the number of non-zero Lagrange multipliers [4]. Practical leave-one-out cross-validation estimates can also be derived.

2.1 Practical realization of the MRE solution

We now turn to finding the MRE solution.
To begin with, we note that any disjoint factorization of the prior $P_0(\Theta, \gamma)$, where the corresponding parameters appear in distinct additive components of $y_t \mathcal{L}(X_t|\Theta) - \gamma_t$, leads to a disjoint factorization of the MRE solution $P(\Theta, \gamma)$. For example, $\{\Theta \setminus b,\ b,\ \gamma\}$ provides such a factorization. As a result of this factorization, the bias term can be eliminated by imposing additional constraints on the Lagrange multipliers [4]. This is analogous to the handling of the bias term in support vector machines [9]. We now consider a few specific realizations, such as support vector machines and a class of graphical models.

2.1.1 Support vector machines

It is well known that the log-likelihood ratio of two Gaussian distributions with equal covariance matrices yields a linear decision rule. With a few additional assumptions, the MRE formulation gives support vector machines:

Theorem 2 Assume $\mathcal{L}(X|\Theta) = \theta^T X - b$ and $P_0(\Theta, \gamma) = P_0(\theta) P_0(b) P_0(\gamma)$, where $P_0(\theta)$ is $N(0, I)$, $P_0(b)$ approaches a non-informative prior, and $P_0(\gamma)$ is given by Eq. (3). Then the Lagrange multipliers $\lambda$ are obtained by maximizing $J(\lambda)$ subject to $0 \le \lambda_t \le c$ and $\sum_t \lambda_t y_t = 0$, where

$J(\lambda) = \sum_t \left[\, \lambda_t + \log(1 - \lambda_t / c) \,\right] - \frac{1}{2} \sum_{t,t'} \lambda_t \lambda_{t'}\, y_t y_{t'}\, X_t^T X_{t'}$.  (5)

The only difference between our $J(\lambda)$ and the (dual) optimization problem for SVMs is the additional potential term $\log(1 - \lambda_t/c)$. This highlights the effect of the different miss-classification penalties, which in our case come from the MRE projection. Figure 1c shows, however, that the additional potential term does not always carry a huge effect (for $c = 5$). Moreover, in the separable case, letting $c \to \infty$, the two methods coincide. The decision rules are formally identical. (A small numerical sketch of this dual appears at the end of this subsection.)

We now consider the case where the discriminant function $\mathcal{L}(X|\Theta)$ corresponds to the log-likelihood ratio of two Gaussians with different (and adjustable) covariance matrices. The parameters $\Theta$ in this case are both the means and the covariances. The prior $P_0(\Theta)$ must be the conjugate Normal–Wishart to obtain closed-form integrals for the partition function $Z$ (this can be done more generally for conjugate priors in the exponential family). Here, $P(\Theta_1, \Theta_{-1})$ is $P(m_1, V_1)\,P(m_{-1}, V_{-1})$, a density over means and covariances. The prior distribution has the form $P_0(\Theta_1) = \mathcal{N}(m_1;\, m_0, V_1/k)\; \mathcal{IW}(V_1;\, kV_0, k)$ with parameters $(k, m_0, V_0)$ that can be specified manually, or one may let $k \to 0$ to obtain a non-informative prior.

Integrating over the parameters and the margins gives $Z = Z_\gamma \times Z_1 \times Z_{-1}$ (Eq. 6), where $Z_1$ depends on the data only through the weighted statistics $N_1 = \sum_t w_t$, $\bar{X}_1 = \frac{1}{N_1} \sum_t w_t X_t$, and $S_1 = \sum_t w_t X_t X_t^T - N_1 \bar{X}_1 \bar{X}_1^T$. Here, $w_t$ is a scalar weight given by $w_t = u(y_t) + y_t \lambda_t$; for $Z_{-1}$ the weights are set to $w_t = u(-y_t) - y_t \lambda_t$, with $u(\cdot)$ the step function. Given $Z$, updating $\lambda$ is done by maximizing $J(\lambda)$. The resulting marginal MRE distribution over the parameters (normalized by $Z_1 \times Z_{-1}$) is a Normal–Wishart distribution itself, $P(\Theta_1) = \mathcal{N}(m_1;\, \bar{X}_1, V_1/N_1)\; \mathcal{IW}(V_1;\, S_1, N_1)$, with the final $\lambda$ values. Predicting the label for a new example $X$ involves taking expectations of the discriminant function under a Normal–Wishart distribution. We thus obtain discriminative quadratic decision boundaries. These extend the linear boundaries without (explicitly) resorting to kernels. More generally, the covariance estimation in this framework adaptively modifies the kernel.
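The dual in Eq. (5) is easy to experiment with. The snippet below is a minimal illustration, not the authors' code: it maximizes Eq. (5) with an off-the-shelf bounded optimizer on synthetic 2-D data, and it omits the bias term so that the equality constraint $\sum_t \lambda_t y_t = 0$ drops out. The data, $c = 5$, and solver settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
c = 5.0
Q = (y[:, None] * y[None, :]) * (X @ X.T)   # y_t y_t' X_t^T X_t'

def neg_J(lam):
    # Negative of Eq. (5); with the bias omitted, only the box
    # constraints 0 <= lambda_t < c remain.
    return -(np.sum(lam + np.log(1.0 - lam / c)) - 0.5 * lam @ Q @ lam)

res = minimize(neg_J, x0=np.full(40, 0.1), bounds=[(0.0, c - 1e-9)] * 40)
lam = res.x
w = (lam * y) @ X            # posterior mean of theta under the MRE solution
print(f"non-zero multipliers: {(lam > 1e-4).sum()}, w = {w}")
```

As in SVMs, only the examples near the decision boundary end up with non-negligible multipliers; the log-barrier term keeps every $\lambda_t$ strictly below $c$.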
2.1.2 Graphical models

We consider here graphical models with no hidden variables. The ME (or MRE) distribution is in this case a distribution over both structures and parameters. Finding the distribution over parameters can be done in closed form for conjugate priors when the observations are complete. The distribution over structures is, in general, intractable. A notable exception is the tree model that we discuss in the following.

A tree graphical model is a graphical model whose structure is a tree. This model has the property that its log-likelihood can be expressed as a sum of local terms [8]:

$\log P(X, E|\theta) = \sum_u h_u(X, \theta) + \sum_{uv \in E} w_{uv}(X, \theta)$.  (8)

The discriminant function consisting of the log-likelihood ratio of a pair of tree models (depending on the edge sets $E_1$, $E_{-1}$ and parameters $\theta_1$, $\theta_{-1}$) can also be expressed in this form. We consider here the ME distribution over tree structures for fixed parameters (each tree relies on a different set of $n-1$ pairwise node marginals; in our experiments the class-conditional pairwise marginals were obtained directly from data). The treatment of the general case (i.e., including the parameters) is a direct extension of this result. The ME distribution over the edge sets $E_1$ and $E_{-1}$ factorizes, with components

$P(E_{\pm 1}) = \frac{1}{Z_{\pm 1}}\, e^{\pm \sum_t \lambda_t y_t \left[ \sum_{uv \in E_{\pm 1}} w_{uv}(X_t, \theta_{\pm 1}) + \sum_u h_u(X_t, \theta_{\pm 1}) \right]} = \frac{h_{\pm 1}}{Z_{\pm 1}} \prod_{uv \in E_{\pm 1}} W^{\pm 1}_{uv}$,  (9)

where $Z_{\pm 1}$, $h_{\pm 1}$, and $W^{\pm 1}$ are functions of the same Lagrange multipliers $\lambda$. To completely define the distribution we need to find the $\lambda$ that optimize $J(\lambda)$ in Theorem 1; for classification we also need to compute averages with respect to $P(E_{\pm 1})$. For these, it suffices to obtain an expression for the partition function(s) $Z_{\pm 1}$. $P$ is a discrete distribution over all possible tree structures for $n$ variables (there are $n^{n-2}$ trees). However, a remarkable graph theory result, called the Matrix Tree Theorem [10], enables us to perform all the necessary summations in closed form in polynomial time. On the basis of this result, we find:

Theorem 3 The normalization constant $Z$ of a distribution of the form (9) is

$Z = h \sum_E \prod_{uv \in E} W_{uv} = h\, |Q(W)|$,  (10)

where, for $u, v = 2, \ldots, n$,

$Q_{uv}(W) = \begin{cases} -W_{uv}, & u \ne v \\ \sum_{v'=1}^{n} W_{v'v}, & u = v. \end{cases}$  (11)

This shows that summing over the distribution of all trees, when this distribution factors according to the trees' edges, can be done in closed form by computing the value of a determinant in time $O(n^3)$ (a numerical check of this identity is given after Figure 2 below). Since we obtain a closed-form expression, optimizing the Lagrange multipliers and evaluating the resulting classification rule are also tractable. Figure 2a provides a comparison of the discriminative tree approach and a maximum likelihood tree estimation method on a DNA splice junction problem.

Figure 2: ROC curves based on independent test sets. a) Tree estimation: discriminative (solid) and ML (dashed) trees. b) Anomaly detection: MRE (solid) and Bayes (dashed). c) Partially labeled case: 100% labeled (solid), 10% labeled + 90% unlabeled (dashed), and 10% labeled + 0% unlabeled training examples (dotted).
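Theorem 3 is easy to verify numerically. The sketch below is an illustration rather than the paper's code: it sums $\prod_{uv \in E} W_{uv}$ over all spanning trees of a small random symmetric weight matrix by brute force and checks the result against the determinant of the reduced Laplacian; the edge weights are arbitrary positive numbers.

```python
import itertools
import numpy as np

def tree_partition_function(W):
    """Sum of prod_{uv in E} W[u,v] over all spanning trees, via the
    Matrix Tree Theorem (Eqs. 10-11): reduced-Laplacian determinant."""
    L = np.diag(W.sum(axis=0)) - W        # weighted graph Laplacian
    return np.linalg.det(L[1:, 1:])       # delete one row and column

def brute_force(W):
    n = W.shape[0]
    edges = list(itertools.combinations(range(n), 2))
    total = 0.0
    for tree in itertools.combinations(edges, n - 1):
        # n-1 edges form a spanning tree iff they never close a cycle
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]; i = parent[i]
            return i
        ok = True
        for u, v in tree:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False; break
            parent[ru] = rv
        if ok:
            total += np.prod([W[u, v] for u, v in tree])
    return total

rng = np.random.default_rng(1)
W = rng.uniform(0.5, 2.0, (5, 5)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
print(tree_partition_function(W), brute_force(W))   # the two values agree
```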
3 Extensions

Anomaly detection: In anomaly detection we are given a set of training examples representing only one class, the "typical" examples. We attempt to capture regularities among the examples in order to be able to recognize unlikely members of this class. Estimating a probability distribution $P(X|\theta)$ on the basis of the training set $\{X_1, \ldots, X_T\}$ via the ML (or an analogous) criterion is not appropriate; there is no reason to further increase the probability of those examples that are already well captured by the model. A more relevant measure involves the level sets $\mathcal{X}_\gamma = \{X \in \mathcal{X} : \log P(X|\theta) \ge \gamma\}$, which are used in deciding class membership in any case. We estimate the parameters $\theta$ so as to optimize an appropriate level set.

Definition 2 Given a probability model $P(X|\theta)$, $\theta \in \Theta$, a set of training examples $\{X_1, \ldots, X_T\}$, a set of margin variables $\gamma = [\gamma_1, \ldots, \gamma_T]$, and a prior distribution $P_0(\theta, \gamma)$, we find the MRE distribution $P(\theta, \gamma)$ that minimizes $D(P \| P_0)$ subject to the constraints $\int P(\theta, \gamma)\, [\,\log P(X_t|\theta) - \gamma_t\,]\, d\theta\, d\gamma \ge 0$ for all $t = 1, \ldots, T$.

Note that this is again an MRE projection whose solution can be obtained as before. The choice of $P_0(\gamma)$ in $P_0(\theta, \gamma) = P_0(\theta) P_0(\gamma)$ is not as straightforward as before, since each margin $\gamma_t$ needs to be close to achievable log-probabilities. We can nevertheless find a reasonable choice by relating the prior mean of $\gamma_t$ to some $\alpha$-percentile of the training-set log-probabilities generated through ML or another estimation criterion. Denoting the resulting value by $l_\alpha$, we define the prior $P_0(\gamma_t)$ as $P_0(\gamma_t) = c\, e^{-c(l_\alpha - \gamma_t)}$ for $\gamma_t \le l_\alpha$; in this case the prior mean of $\gamma_t$ is $l_\alpha - 1/c$ (a short sketch of this percentile choice follows below). Figure 2b shows, in the context of a simple product distribution, that this choice of prior together with the MRE framework leads to a real improvement over the standard (Bayesian) approach. We believe, however, that the effect will be more striking for sophisticated models such as HMMs that may otherwise easily capture spurious regularities in the data. An extension of this formalism to latent variable models is provided in [4].
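To make the choice of $l_\alpha$ concrete, here is a minimal sketch under assumed specifics: a product-of-Bernoullis model fit by ML on toy binary data, with the $\alpha$-percentile of the training log-probabilities taken as the anchor of the margin prior. The data, $\alpha = 10$, and $c = 5$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((500, 10)) < 0.7).astype(float)   # toy binary "typical" examples

# ML product-of-Bernoullis model of the typical class
p = X.mean(axis=0).clip(1e-3, 1 - 1e-3)
logp = X @ np.log(p) + (1 - X) @ np.log1p(-p)     # log P(X_t | theta) per example

# Anchor the margin prior at the alpha-percentile of training log-probabilities
alpha = 10.0
l_alpha = np.percentile(logp, alpha)
c = 5.0
print(f"l_alpha = {l_alpha:.2f}, prior mean margin = {l_alpha - 1.0 / c:.2f}")
```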
Uncertain or incompletely labeled examples: Examples with uncertain labels are hard to deal with in any (probabilistic or not) discriminative classification method. Uncertain labels can, however, be handled within the maximum entropy formalism: let $y = \{y_1, \ldots, y_T\}$ be a set of binary variables corresponding to the labels of the training examples. We can define a prior uncertainty over the labels by specifying $P_0(y)$; for simplicity, we can take this to be a product distribution $P_0(y) = \prod_t P_{t,0}(y_t)$, where a different level of uncertainty can be assigned to each example. Consequently, we find the minimum relative entropy projection from the prior distribution $P_0(\Theta, \gamma, y) = P_0(\Theta) P_0(\gamma) P_0(y)$ to the admissible set of distributions (no longer a function of the labels) that are consistent with the constraints:

$\sum_y \int_{\Theta, \gamma} P(\Theta, \gamma, y)\, [\,y_t\, \mathcal{L}(X_t|\Theta) - \gamma_t\,]\, d\Theta\, d\gamma \ge 0$ for all $t = 1, \ldots, T$.

The MRE principle differs from transduction [9]: it provides a soft rather than hard assignment of unlabeled examples and is fundamentally driven by large margin classification. The MRE solution is not, however, often feasible to obtain in practice. We can nevertheless formulate an efficient mean field approach in this context [4]. Figure 2c demonstrates that even the approximate method is able to reap most of the benefit from unlabeled examples (compare, e.g., [6]). The results are for a DNA splice junction classification problem. For more details see [4].

4 Discussion

We have presented a general approach to discriminative training of model parameters, structures, or parametric discriminant functions. The formalism is based on the minimum relative entropy principle, reducing all calculations to relative entropy projections. The idea naturally extends beyond standard classification and covers anomaly detection, classification with partially labeled examples, and feature selection.

References

[1] Cover, T. and Thomas, J. (1991). Elements of Information Theory. John Wiley & Sons.
[2] Kivinen, J. and Warmuth, M. (1999). Boosting as entropy projection. Proceedings of the 12th Annual Conference on Computational Learning Theory.
[3] Levin and Tribus (eds.) (1978). The Maximum Entropy Formalism. Proceedings of the Maximum Entropy Formalism Conference, MIT.
[4] Jaakkola, T., Meila, M. and Jebara, T. (1999). Maximum entropy discrimination. MIT AITR-1668, http://www.ai.mit.edu/~tommi/papers.html.
[5] Jaakkola, T. and Haussler, D. (1998). Exploiting generative models in discriminative classifiers. NIPS 11.
[6] Joachims, T. (1999). Transductive inference for text classification using support vector machines. International Conference on Machine Learning.
[7] Jebara, T. and Pentland, A. (1998). Maximum conditional likelihood via bound maximization and the CEM algorithm. NIPS 11.
[8] Meila, M. and Jordan, M. (1998). Estimating dependency structure as a hidden variable. NIPS 11.
[9] Vapnik, V. (1998). Statistical Learning Theory. John Wiley & Sons.
[10] West, D. (1996). Introduction to Graph Theory. Prentice Hall.
Neural System Model of Human Sound Localization

Craig T. Jin
Department of Physiology and Department of Electrical Engineering, Univ. of Sydney, NSW 2006, Australia

Simon Carlile
Department of Physiology and Institute of Biomedical Research, Univ. of Sydney, NSW 2006, Australia

Abstract

This paper examines the role of biological constraints in the human auditory localization process. A psychophysical and neural system modeling approach was undertaken in which performance comparisons between competing models and a human subject explore the relevant biologically plausible "realism constraints". The directional acoustical cues, upon which sound localization is based, were derived from the human subject's head-related transfer functions (HRTFs). Sound stimuli were generated by convolving bandpass noise with the HRTFs and were presented to both the subject and the model. The input stimuli to the model were processed using the Auditory Image Model of cochlear processing. The cochlear data were then analyzed by a time-delay neural network which integrated temporal and spectral information to determine the spatial location of the sound source. The combined cochlear model and neural network provided a system model of the sound localization process. Human-like localization performance was qualitatively achieved for broadband and bandpass stimuli when the model architecture incorporated frequency division (or tonotopicity) and was trained using sounds of variable bandwidth and center frequency.

1 Introduction

The ability to accurately estimate the location of a sound source has obvious evolutionary advantages in terms of avoiding predators and finding prey. Indeed, humans are very accurate in their ability to localize broadband sounds. There has been a considerable amount of psychoacoustical research into the auditory processes involved in human sound localization (for a recent review, see [1]). Furthermore, numerous models of the human and animal sound localization process have been proposed (recent reviews [2, 3]). However, there still remains a large gap between the psychophysical and the model explanations. Principal congruence between the two approaches exists for localization performance under restricted conditions, such as for narrowband sounds where spectral integration is not required, or for restricted regions of space. Unfortunately, there is no existing computational model that accounts well for human sound localization performance for a wide range of sounds (e.g., varying in bandwidth and center frequency). Furthermore, the biological constraints pertinent to sound localization have generally not been explored by these models. These include the spectral resolution of the auditory system, in terms of the number and bandwidth of frequency channels, and the role of tonotopic processing. In addition, the performance requirements of such a system are substantial and involve, for example, the accommodation of spectrally complex sounds, robustness to irregularity in the sound source spectrum, and the channel-based structure of spatial coding, as evidenced by auditory spatial after-effects [4]. The crux of the matter is the notion that "biologically-likely realism", if built into a model, provides for a better understanding of the underlying processes. This work attempts to bridge part of this gap between the modeling and the psychophysics.
It describes the development and use (for the first time, to the authors' knowledge) of a time-delay neural network model that integrates both spectral and temporal cues for auditory sound localization, and compares the performance of such a model with the corresponding human psychophysical evidence.

2 Sound Localization

The sound localization performance of a normal-hearing human subject was tested using stimuli consisting of three different band-passed sounds: (1) a low-passed sound (300–2000 Hz), (2) a high-passed sound (2000–14000 Hz), and (3) a broadband sound (300–14000 Hz). These frequency bands respectively cover conditions in which either temporal cues, spectral cues, or both dominate the localization process (see [1]). The subject performed five localization trials for each sound condition, each with 76 test locations evenly distributed about the subject's head. The detailed methods used in free-field sound localization can be found in [5]. A short summary is presented below.

2.1 Sound Localization Task

Human sound localization experiments were carried out in a darkened anechoic chamber. Free-field sound stimuli were presented from a loudspeaker carried on a semicircular robotic arm. These stimuli consisted of "fresh" white Gaussian noise appropriately bandpassed for each trial. The robotic arm allowed for placement of the speaker at almost any location on the surface of an imaginary sphere, one meter in radius, centered on the subject's head. The subject indicated the location of the sound source by pointing his nose in the perceived direction of the sound. The subject's head orientation was monitored using an electromagnetic sensor system (Polhemus, Inc.).

2.2 Measurement and Validation of Outer Ear Acoustical Filtering

The cues for sound localization depend not only upon the spectral and temporal properties of the sound stimulus, but also on the acoustical properties of the individual's outer ears. It is generally accepted that the relevant acoustical cues (i.e., the interaural time difference, ITD; the interaural level difference, ILD; and the spectral cues) to a sound's location in the free field are described by the head-related transfer function (HRTF), which is typically represented by a finite-length impulse response (FIR) filter [1]. Sounds filtered with the HRTF should be localizable when played over earphones, which bypass the acoustical filtering of the outer ear. The illusion of free-field sounds using headphones is known as virtual auditory space (VAS). Thus, in order to incorporate outer ear filtering into the modelling process, measurements of the subject's HRTFs were carried out in the anechoic chamber. The measurements were made for both ears simultaneously using a "blocked ear" technique [1]. 393 measurements were made at locations evenly distributed on the sphere. In order to establish that the HRTFs appropriately indicated the direction of a sound source, the subject repeated the localization task as above with the stimulus presented in VAS.
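The VAS stimulus generation described above is straightforward to sketch. The snippet below is an illustration under assumed specifics, not the authors' pipeline: it makes band-passed Gaussian noise and convolves it with a pair of head-related impulse responses. The sample rate, filter order, and the random placeholder HRIRs are assumptions; in practice the HRIRs would come from the 393-direction blocked-ear recordings.

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

fs = 44100                          # sample rate (assumed)

def bandpass_noise(lo_hz, hi_hz, dur_s, fs=fs):
    """Fresh white Gaussian noise, band-passed as in the localization task."""
    noise = np.random.randn(int(dur_s * fs))
    b, a = butter(4, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    return lfilter(b, a, noise)

def vas_stimulus(noise, hrir_left, hrir_right):
    """Render a free-field direction by convolving with the left/right HRIRs."""
    return np.stack([fftconvolve(noise, hrir_left),
                     fftconvolve(noise, hrir_right)], axis=0)

# Placeholder 256-tap HRIRs standing in for one measured direction.
hrir_l, hrir_r = np.random.randn(256) * 0.05, np.random.randn(256) * 0.05
stim = vas_stimulus(bandpass_noise(300, 2000, 0.5), hrir_l, hrir_r)
print(stim.shape)                   # (2, n_samples) binaural signal
```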
Localization data across all three sound conditions (single trial VAS data shown in Fig. la) shows that the subject performed well in both the broadband and high-pass sound conditions and rather poorly in the low-pass condition, which is consistent with other studies [6]. The data is illustrated using spherical localization plots which well demonstrates the global distribution of localization responses. Given the large qualitative differences in the data sets presented below, this visual method of analysis was sufficient for evaluating the competing models. For each condition, the target and response locations are shown for both the left (L) and right (R) hemispheres of space. It is clear that in the low-pass condition, the subject demonstrated gross mislocalizations with the responses clustering toward the lower and frontal hemispheres. The gross mislocalizations correspond mainly to the traditional cone of confusion errors [6]. 3 Localization Model The sound localization model consisted of two basic system components: (1) a modified version of the physiological Auditory Image Model [7] which simulates the spectrotemporal characteristics of peripheral auditory processing, and (2) the computational architecture of a time-delay neural network. The sounds presented to the model were filtered using the sUbject's HRTFs in exactly the same manner as was used in producing VAS. Therefore, the modeling results can be compared with human localization performance on an individual basis. The modeling process can be broken down into four stages. In the first stage a sound stimulus was generated with specific band-pass characteristics. The sound stimulus was then filtered with the subject's right and left ear HRTFs to render an auditory stimulus originating from a particular location in space. The auditory stimulus was then processed by the Auditory Image Model (AIM) to generate a neural activity profile that simulates the output of the inner hair cells in the organ of Corti and indicates the spiking probability of auditory nerve fibers. Finally, in the fourth and last stage, a time-delay neural network (TDNN) computed the spatial direction of the sound input based on the distribution of neural activity calculated by AIM. A detailed presentation of the modeling process can be found in [3], although a brief summary is presented here. The distribution of cochlear filters across frequency in AIM was chosen such that the minimum center frequency was 300 Hz and the maximum center frequency was 14 kHz with 31 filters essentially equally spaced on a logarithmic scale. In order to fully describe a computational layer of the TDNN, four characteristic numbers must be specified: (l) the number of neurons; (2) the kernel length, a number which determines the size of the current layer's time-window in terms of the number of time-steps of the previous layer; (3) the kernel width, a number which specifies how many neurons in the previous layer with which there are actual connections; and (4) the undersampling factor, a number describing the multiplicative factor by which the current layer's time-step interval is increased from the previous layer's. Using this nomenclature, the architecture of the different layers of one TDNN is summarized in Table 1, with the smallest time-step being 0.15 ms. The exact connection arrangement of the network is described in the next section. 764 C. T. Jin and S. Carlile Layer Input Hidden I Hidden 2 Output Table I: The Architecture of the TDNN. 
Table 1: The architecture of the TDNN.

Layer     Neurons   Kernel Length   Kernel Width   Undersampling
Input     62        -               -              -
Hidden 1  50        15              6              2
Hidden 2  28        10              4, 5, 6        2
Output    393       4               28             1

The spatial location of a sound source was encoded by the network as a distributed response, with the peak occurring at the output neuron representing the target location of the input sound. The output response would then decay away in the form of a two-dimensional Gaussian as one moves to neurons further away from the target location. This derives from the well-established paradigm that the nervous system uses overlapping receptive fields to encode properties of the physical world.

3.1 Networks with Frequency Division and Tonotopicity

The major auditory brainstem nuclei demonstrate substantial frequency division within their structure. The tonotopic organization of the primary auditory nerve fibers that innervate the cochlea carries forward to the brainstem's auditory nuclei. This arrangement is described as a tonotopic organization. Despite this fact and to our knowledge, no previous network model of sound localization incorporates such frequency division within its architecture. Typically (e.g., [8]) all of the neurons in the first computational layer are fully connected to all of the input cochlear frequency channels. In this work, different architectures were examined with varying amounts of frequency division imposed upon the network structure. The network with the architecture described above had its network connections constrained by frequency in a tonotopic-like arrangement. The 31 input cochlear frequency channels for each ear were split into ten overlapping groups consisting generally of six contiguous frequency channels. There were five neurons in the first hidden layer for each group of input channels. The kernel widths of these neurons were set, not to the total number of frequency channels in the input layer, but only to the six contiguous frequency channels defining the group. Information across the different groups of frequency channels was progressively integrated in the higher layers of the network (a minimal code sketch of this grouped connectivity appears at the end of this section).

3.2 Network Training

Sounds with different center frequencies and bandwidths were used for training the networks. In one particular training paradigm, the center frequency and bandwidth of the noise were chosen randomly. The center frequency was chosen using a uniform probability distribution on a logarithmic scale that was similar to the physiological distribution of output frequency channels from AIM. In this manner, each frequency region was trained equally, based on the density of neurons in that frequency region. During training, the error backpropagation algorithm was used with a summed squared error measure. It is a natural feature of the learning rule that a given neuron's weights are only updated when there is activity in its respective cochlear channels. So, for example, a training sound containing only low frequencies will not train the high-frequency neurons, and vice versa. All modeling results correspond to a single tonotopically organized TDNN trained using random sounds (unless explicitly stated otherwise).
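The following is a minimal sketch of the tonotopic grouping idea in NumPy, under assumed simplifications; it is not the authors' implementation. One ear's 31 cochlear channels are split into ten overlapping groups of six, each group driving its own small bank of time-delay units with the Hidden 1 parameters of Table 1, and the distributed Gaussian output code is shown in a 1-D stand-in form.

```python
import numpy as np

def group_indices(n_channels=31, n_groups=10, width=6):
    """Overlapping groups of ~6 contiguous cochlear channels (assumed layout)."""
    starts = np.linspace(0, n_channels - width, n_groups).astype(int)
    return [np.arange(s, s + width) for s in starts]

def tdnn_group_layer(x, weights, kernel_len=15, undersample=2):
    """One tonotopic hidden bank: each of the 10 groups sees only its own
    6 channels over a 15-step time window (Table 1, Hidden 1)."""
    outs = []
    for g, W in zip(group_indices(), weights):   # W: (5 units, 6 chans, 15 taps)
        seg = x[g]                               # (6, T) slice of the cochleagram
        t_out = (seg.shape[1] - kernel_len) // undersample + 1
        act = np.empty((W.shape[0], t_out))
        for i in range(t_out):
            win = seg[:, i * undersample : i * undersample + kernel_len]
            act[:, i] = np.tanh(np.tensordot(W, win, axes=([1, 2], [0, 1])))
        outs.append(act)
    return np.vstack(outs)                       # (50, T'), no cross-group mixing

def gaussian_target(n_out=393, peak=100, sigma=2.0):
    """Distributed location code: Gaussian bump over the output neurons."""
    idx = np.arange(n_out)
    return np.exp(-0.5 * ((idx - peak) / sigma) ** 2)

rng = np.random.default_rng(0)
cochleagram = rng.random((31, 200))              # one ear, 200 time steps
weights = [rng.normal(0, 0.1, (5, 6, 15)) for _ in range(10)]
h1 = tdnn_group_layer(cochleagram, weights)
print(h1.shape, gaussian_target().shape)         # (50, 93) (393,)
```

Cross-frequency integration would then happen in the Hidden 2 and output layers, mirroring the progressive convergence described above.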
In this case, frequency division was essential to producing a reasonable neural system model that would localize similarly to the human subject across all of the different band-pass conditions. For any single band-pass condition, it was found that the TDNN did not require frequency division within its architecture to produce quality solutions when trained only on these band-passed sounds. As mentioned above, it was observed that a tonotopic network, one that divides the input frequency channels into different groups and then progressively interconnects the neurons in the higher layers across frequency, was more robust in its localization performance across sounds with variable center-frequency and bandwidth than a simple fully connected network. There are two likely explanations for this observation. One line of reasoning argues that it was easier for the tonotopic network to prevent a narrow band of frequency channels from dominating the localization computation across the entire set of sound stimuli. Or, expressed slightly differently, it may have been easier for it to incorporate the relevant information across the different frequency channels. A second line of reasoning argues that the tonotopic network structure (along with the training with variable sounds) encouraged the network to develop meaningful connections for all frequencies.

Figure 1: Comparison of the subject's VAS localization performance and the model's localization performance both with and without frequency division: (a) subject VAS; (b) tonotopic network; (c) network without frequency division. The viewpoint is from an outside observer, with the target location shown by a cross and the response location shown by a black dot.

5 Matched Filtering and Sound Localization

A number of previous sound localization models have used a relatively straightforward matched-filter or template-matching analysis [9]. In such cases, the ITD and spectrum of a given input sound is commonly cross-correlated with the ITD and spectrum of an entire database of sounds for which the location is known. The location with the highest correlation is then chosen as the optimal source location. Matched filtering analysis is compared with the localization performance of both the human subject and the neural system model using a band-pass sound with restricted high frequencies (Figure 2). The matched filtering localizes the sounds much better than the subject or the TDNN model. The matched filtering model used the same number of cochlear channels as the TDNNs and therefore contained the same inherent spectral resolution. This spectral resolution (31 cochlear channels) is certainly less than the spectral resolution of the human cochlea. This shows that although there was sufficient information to localize the sounds from the point of view of matched filtering, neither the human nor the TDNN demonstrated such ability in their performance. In order for the TDNN to localize similarly to the matched filtering model, the network weights corresponding to a given location need to assume the form of the filter template for that location. As all of the training sounds were flat-spectrum, the TDNN received no ambiguity as far as the source spectrum was concerned.
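As an aside, the template-matching analysis of this section reduces to a cross-correlation against a database of stored location signatures; a minimal sketch, which correlates log-magnitude cochlear spectra only (the paper correlates both ITD and spectrum, and the database layout here is assumed):

    import numpy as np

    def matched_filter_localize(spectrum, templates):
        """Return the index of the stored location whose template is most
        highly correlated with the input spectrum.

        spectrum  : (n_channels,) log-magnitude cochlear spectrum.
        templates : (n_locations, n_channels) spectra of known locations.
        """
        s = spectrum - spectrum.mean()
        t = templates - templates.mean(axis=1, keepdims=True)
        corr = t @ s / (np.linalg.norm(t, axis=1) * np.linalg.norm(s) + 1e-12)
        return int(np.argmax(corr))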
Thus it is likely that the difference in the distribution of localization responses in Figure 2b, as compared with that in Figure 2c, has been encouraged by using training sounds with random center-frequency and bandwidth, providing a partial explanation as to why the human localization performance is not optimal from a matched filtering standpoint.

Figure 2: Comparison of the localization performances of the subject, the TDNN model, and a matched filtering model. Details as in Figure 1.

6 Varying Sound Levels and the ILD Cue

The training of the TDNNs was performed in such a fashion that, for any particular location in space, the sound level (67 dB SPL) did not vary by more than 1 dB SPL during repeated presentations of the sound. The localization performance of the neural system model was then examined, using a broadband sound source, across a range of sound levels varying from 60 dB SPL to 80 dB SPL. The spherical correlation coefficient between the target and response locations ([10]; values above 0.8 indicate "high" correlation) remained above 0.8 between 60 and 75 dB SPL, demonstrating that there was a graceful degradation in localization performance over a range in sound level of 15 dB. The network was also tested on broadband sounds 10 dB louder in one ear than the other. The results of these tests are shown in Figure 3 and clearly illustrate that the localization responses were pulled toward the side with the louder sound. While the magnitude of this effect is certainly not human-like, such behaviour suggests that interaural level difference cues were a prominent and constant feature of the data that conferred a measure of robustness to sound level variations.

Figure 3: Model's localization performance with a 10 dB increase in sound level: (a, b) monaurally, (c) binaurally.

7 Conclusions

A neural system model was developed in which physiological constraints were imposed upon the modeling process: (1) a TDNN model was used to incorporate the important role of spectral-temporal processing in the auditory nervous system, (2) a tonotopic structure was added to the network, (3) the training sounds contained randomly varying center-frequencies and bandwidths. This biologically plausible model provided increased understanding of the role that these constraints play in determining localization performance.

Acknowledgments

The authors thank Markus Schenkel and Andre van Schaik for valuable comments. This research was supported by the NHMRC, ARC, and a Dora Lush Scholarship to CJ.

References

[1] S. Carlile, Virtual Auditory Space: Generation and Applications. New York: Chapman and Hall, 1996.
[2] R. H. Gilkey and T. R. Anderson, Binaural and Spatial Hearing in Real and Virtual Environments. Mahwah, New Jersey: Lawrence Erlbaum Associates, 1997.
[3] C. Jin, M. Schenkel, and S. Carlile, "Neural system identification model of human sound localisation," submitted to J. Acoust. Soc. Am., 1999.
[4] S. Hyams and S. Carlile, "After-effects in auditory localization: evidence for channel based processing," submitted to J. Acoust. Soc. Am., 2000.
[5] S. Carlile, P. Leong, and S. Hyams, "The nature and distribution of errors in the localization of sounds by humans," Hearing Research, vol. 114, pp. 179-196, 1997.
[6] S. Carlile, S. Delaney, and A. Corderoy, "The localization of spectrally restricted sounds by human listeners," Hearing Research, vol. 128, pp. 175-189, 1999.
[7] C. Giguere and P. C. Woodland, "A computational model of the auditory periphery for speech and hearing research. I. Ascending path," J. Acoust. Soc. Am., vol. 95, pp. 331-342, 1994.
[8] C. Neti, E. Young, and M. Schneider, "Neural network models of sound localization based on directional filtering by the pinna," J. Acoust. Soc. Am., vol. 92, no. 6, pp. 3140-3156, 1992.
[9] J. Middlebrooks, "Narrow-band sound localization related to external ear acoustics," J. Acoust. Soc. Am., vol. 92, no. 5, pp. 2607-2624, 1992.
[10] N. I. Fisher, T. Lewis, and B. J. J. Embleton, Statistical Analysis of Spherical Data. Cambridge: Cambridge University Press, 1987.
Uniqueness of the SVM Solution

Christopher J. C. Burges
Advanced Technologies, Bell Laboratories, Lucent Technologies, Holmdel, New Jersey
burges@lucent.com

David J. Crisp
Centre for Sensor Signal and Information Processing, Department of Electrical Engineering, University of Adelaide, South Australia
dcrisp@eleceng.adelaide.edu.au

Abstract

We give necessary and sufficient conditions for uniqueness of the support vector solution for the problems of pattern recognition and regression estimation, for a general class of cost functions. We show that if the solution is not unique, all support vectors are necessarily at bound, and we give some simple examples of non-unique solutions. We note that uniqueness of the primal (dual) solution does not necessarily imply uniqueness of the dual (primal) solution. We show how to compute the threshold b when the solution is unique, but when all support vectors are at bound, in which case the usual method for determining b does not work.

1 Introduction

Support vector machines (SVMs) have attracted wide interest as a means to implement structural risk minimization for the problems of classification and regression estimation. The fact that training an SVM amounts to solving a convex quadratic programming problem means that the solution found is global, and that if it is not unique, then the set of global solutions is itself convex; furthermore, if the objective function is strictly convex, the solution is guaranteed to be unique [1].(1) For quadratic programming problems, convexity of the objective function is equivalent to positive semi-definiteness of the Hessian, and strict convexity to positive definiteness [1]. For reference, we summarize the basic uniqueness result in the following theorem, the proof of which can be found in [1]:

Theorem 1: The solution to a convex programming problem, for which the objective function is strictly convex, is unique. Positive definiteness of the Hessian implies strict convexity of the objective function.

(1) This is in contrast with the case of neural nets, where local minima of the objective function can occur.

Note that in general strict convexity of the objective function does not necessarily imply positive definiteness of the Hessian. Furthermore, the solution can still be unique even if the objective function is loosely convex (we will use the term "loosely convex" to mean convex but not strictly convex). Thus the question of uniqueness
The algorithms then amount to constructing an optimal separating hyperplane in F, in the pattern recognition case, or fitting the data to a linear regression tube (with a suitable choice of loss function [4]) in the regression estimation case. Below, without loss of generality, we will work in the space F, whose dimension we denote by dF. The conditions we will find for non-uniqueness of the solution will not depend explicitly on F or ~. Most approaches to solving the support vector training problem employ the Wolfe dual, which we describe below. By uniqueness of the primal (dual) solution, we mean uniqueness of the set of primal (dual) variables at the solution. Notice that strict convexity of the primal objective function does not imply strict convexity of the dual objective function. For example, for the optimal hyperplane problem (the problem of finding the maximal separating hyperplane in input space, for the case of separable data), the primal objective function is strictly convex, but the dual objective function will be loosely convex whenever the number of training points exceeds the dimension of the data in input space. In that case, the dual Hessian H will necessarily be positive semidefinite, since H (or a submatrix of H, for the cases in which the cost function also contributes to the (block-diagonal) Hessian) is a Gram matrix of the training data, and some rows of the matrix will then necessarily be linearly dependent [5]2. In the cases of support vector pattern recognition and regression estimation studied below, one of four cases can occur: (1) both primal and dual solutions are unique; (2) the primal solution is unique while the dual solution is not; (3) the dual is unique but the primal is not; (4) both solutions are not unique. Case (2) occurs when the unique primal solution has more than one expansion in terms of the dual variables. We will give an example of case (3) below. It is easy to construct trivial examples where case (1) holds, and based on the discussion below, it will be clear how to construct examples of (4). However, since the geometrical motivation and interpretation of SVMs rests on the primal variables, the theorems given below address uniqueness of the primal solution3 ? 2 The Case of Pattern Recognition We consider a slightly generalized form of the problem given in [6], namely to minimize the objective function F = (1/2) IIwl12 + L Ci~f (1) 2Recall that a Gram matrix is a matrix whose ij'th element has the form (Xi,Xj) for some inner product (,), where Xi is an element of a vector space, and that the rank of a Gram matrix is the maximum number of linearly independent vectors Xi that appear in it [6]. 3Due to space constraints some proofs and other details will be omitted. Complete details will be given elsewhere. 225 Uniqueness of the SVM Solution > 0, subject to constraints: . Xi + b) > 1 - ~i' i = 1,,,,,1 C; > 0 i = 1" ... 1 ,:>. -' with constants p E [1,00), Gi (2) (3) where W is the vector of weights, b a scalar threshold, ~i are positive slack variables which are introduced to handle the case of nonseparable data, the Yi are the polarities of the training samples (Yi E {? I} ), Xi are the images of training samples in the space F by the mapping ~, the Gi determine how much errors are penalized (here we have allowed each pattern to have its own penalty), and the index i labels the 1 training patterns. The goal is then to find the values of the primal variables {w, b, ~i} that solve this problem. 
Most workers choose p = 1, since this results in a particularly simple dual formulation, but the problem is convex for any p 2: 1. We will not go into further details on support vector classification algorithms themselves here, but refer the interested reader to [3], [7] . Note that, at the solution, b is determined from w and ~i by the Karush Kuhn Tucker (KKT) conditions (see below), but we include it in the definition of a solution for convenience. Yi(W Note that Theorem 1 gives an immediate proof that the solution to the optimal hyperplane problem is unique, since there the objective function is just (1/2)lIwI1 2 , which is strictly convex, and the constraints (Eq. (2) with the ~ variables removed) are linear inequality constraints which therefore define a convex set4. For the discussion below we will need the dual formulation of this problem, for the case p = 1. It takes the following form: minimize ~ L-ijG:iG:jYiYj(Xi,Xj) - L-iG:i subject to constraints: (4) TJi > 0, G:i 2: 0 (5) Gi G:i + TJi LG:iYi (6) 0 and where the solution takes the form w = L-i G:iYiXi, and the KKT conditions, which are satisfied at the solution, are TJi~i = 0, G:i (Yi (w . Xi + b) - 1 + ~i) = 0, where TJi are Lagrange multipliers to enforce positivity of the ~i' and G:i are Lagrange multipliers to enforce the constraint (2). The TJi can be implicitly encapsulated in the condition 0 ~ ai :::; Gi , but we retain them to emphasize that the above equations imply that whenever ~i =/; 0, we must have ai = Gi . Note that, for a given solution, a support vector is defined to be any point Xi for which G:i > O. Now suppose we have some solution to the problem (1), (2), (3). Let Nl denote the set {i : Yi = 1, W ? Xi + b < I}, N2 the set {i : Yi = -1, W? Xi + b > -I}, N3 the set {i : Yi = 1, W? Xi + b = I}, N4 the set {i : Yi = -1, W? Xi + b = -I}, Ns the set {i : Yi = 1, W? Xi + b > I}, and N6 the set {i : Yi = -1, W? Xi + b < -I}. Then we have the following theorem: Theorem 2: The solution to the soft-margin problem, (1), (2) and (3), is unique for p > 1. For p = 1, the solution is not unique if and only if at least one of the following two conditions holds: (7) (8) iENl UN3 iEN2 Furthermore, whenever the solution is not unique, all solutions share the same w, and any support vector Xi has Lagrange multiplier satisfying ai = Gi , and when (7) 4This is of course not a new result: see for example [3]. 226 C. 1. C. Burges and D. 1. Crisp holds, then N3 contains no support vectors, and when (8) holds, then no support vectors. N4 contains Proof: For the case p > 1, the objective function F is strictly convex, since a sum of strictly convex functions is a strictly convex function, and since the function g( v) = v P , v E lR+ is strictly convex for p > 1. FUrthermore the constraints define a convex set, since any set of simultaneous linear inequality constraints defines a convex set. Hence by Theorem 1 the solution is unique. For the case p = 1, define Z to be that dF + i-component vector with Zi = Wi, i = 1, ... ,dF, and Zi = ~i' i = dF + 1", . ,dF + t. In terms of the variables z, the problem is still a convex programming problem, and hence has the property that any solution is a global solution. Suppose that we have two solutions, Zl and Z2' Then we can form the family of solutions Zt, where Zt == (1 - t)ZI + tZ2, and since the solutions are global, we have F(zd = F(Z2) = F(zt). By expanding F(zt) - F(zt} = 0 in terms of Zl and Z2 and differentiating twice with respect to t we find that WI = W2. 
Now given wand b, the ~i are completely determined by the KKT conditions. Thus the solution is not unique if and only if b is not unique. Define 0 == min {miniENl ~i' miniEN6 (-1 - W ? Xi - b)}, and suppose that condition (7) holds. Then a different solution {w', b', e} is given by w' = w, b' = b + 0, and ~~ = ~i - 0, Vi E N 1 , ~~ = ~i + 0, Vi E N2 uN4 , all other ~i = 0, since by construction F then remains the same, and the constraints (2), (3) are satisfied by the primed variables. Similarly, suppose that condition (8) holds. Define 0 == min{miniEN2~i,miniEN5(w?xi+b-l)}. Then a different solution {w',b',e} is given by w' = w, b' = b - 0, and ~~ = ~i - 0, Vi E N 2 , ~: = ~i + 0, Vi E NI U N 3 , all other ~i = 0, since again by construction F is unchanged and the constraints are still met. Thus the given conditions are sufficient for the solution to be nonunique. To show necessity, assume that the solution is not unique: then by the above argument, the solutions must differ by their values of b. Given a particular solution b, suppose that b + 0, 0 > 0 is also a solution. Since the set of solutions is itself convex, then b + 0' will also correspond to a solution for all 0' : 0 ~ 0' ~ O. Given some b' = b + 0', we can use the KKT conditions to compute all the and we can choose 0' sufficiently small so that no ~i' i E N6 that was previously zero becomes nonzero. Then we find that in order that F remain the same, condition (7) must hold. If b - 0, 0 > 0 is a solution, similar reasoning shows that condition (8) must hold. To show the final statement of the theorem, we use the equality constraint (6), together with the fact that, from the KKT conditions, all support vectors Xi with indices in NI uN2 satisfy (Xi = Ci ? Substituting (6) in (7) then gives L: N3 (Xi + L:N4 (Ci - (Xi) = 0 which implies the result, since all (Xi are non-negative. Similarly, substituting (6) in (8) gives L:,M (Ci - (Xi) + L:.Af. (Xi = 0 which again . l'les t h e resu1t. 0 3 4 Imp ei, Corollary: For any solution which is not unique, letting S denote the set of indices of the corresponding set of support vectors, then we must have L:iES CiYi = O. FUrthermore, if the number of data points is finite, then for at least one of the family of solutions, all support vectors have corresponding ~i i= O. Note that it follows from the corollary that if the Ci are chosen such that there exists no subset r of the train data such that L:iET CiYi = 0, then the solution is guaranteed to be unique, even if p = 1. FUrthermore this can be done by choosing all the Ci very close to some central value C, although the resulting solution can depend sensitively on the values chosen (see the example immediately below). Finally, note that if all Ci are equal, the theorem shows that a necessary condition for the solution to be non-unique is that the negative and positive polarity support vectors be equal in number. 227 Uniqueness of the SVM Solution A simple example of a non-unique solution, for the case p = 1, is given by a train set in one dimension with just two examples, {Xl = 1, YI = I} and {xz = -1, Yz = with GI = C z == C. It is straightforward to show analytically that for G 2: 2' the solution is unique, with w = 1, 6 = 6 = b = 0, and marginS equal to 2, while for C < there is a family of solutions, with -1 + 2C ::; b ::; 1 - 2C and 6 = 1- b - 2C, 6 = 1 + b - 2G, and margin l/C . The case G < corresponds to Case (3) in Section (1) (dual unique but primal not), since the dual variables are uniquely specified by a = C. 
Note also that this family of solutions also satisfies the condition that any solution is smoothly deformable into another solution [7J. If GI > Cz , the solution becomes unique, and is quite different from the unique solution found when G z > CI . When the G's are not equal, one can interpret what happens in terms of the mechanical analogy [8J, with the central separating hyperplane sliding away from the point that exerts the higher force, until that point lies on the edge of the margin region. -11' ! ! Note that if the solution is not unique, the possible values of b fall on an interval of the real line: in this case a suitable choice would be one that minimizes an estimate of the Bayes error, where the SVM output densities are modeled using a validation set 6 . Alternatively, requiring continuity with the cases p > 1, so that one would choose that value of b that would result by considering the family of solutions generated by different choices of p, and taking the limit from above of p -t 1, would again result in a unique solution. 3 The Case of Regression Estimation7 Here one has a set of l pairs {xI,Yd,{xz,yz},???,{XI,YI}, {Xi E :F,Yi E R}, and the goal is to estimate the unknown functional dependence j of the Y on the X, where the function j is assumed to be related to the measurements {Xi,Yi} by Yi = j(Xi) +ni, and where ni represents noise. For details we refer the reader to [3], [9]. Again we generalize the original formulation [10], as follows: for some choice of positive error penalties Gi , and for positive ?i, minimize I F = ~ Ilwllz + 2)Gi~f + C;(~np) (9) i=l with constant p E [1 , 00), subject to constraints Yi - w . Xi - b < ?i W ? Xi + b - Yi < ?i ~;*) + ~i + ~; > 0 (10) (11) (12) where we have adopted the notation ~;*) == {~i ' ~;} [9J. This formulation results in an "? insensitive" loss function, that is, there is no penalty (~}*) = 0) associated with point Xi if IYi - w . Xi - bl ::; ?i. Now let {3, {3* be the Lagrange multipliers introduced to enforce the constraints (10), (11). The dual then gives 2: {3i = 2: {3;, 0::; {3i ::; Gi , 0::; {3; ::; G;, (13) 5The margin is defined to be the distance between the two hyperplanes corresponding to equality in Eq. (2), namely 2/lIwll, and the margin region is defined to be the set of points between the two hyperplanes. 6This method was used to estimate b under similar circumstances in [8]. 7The notation in this section only coincides with that used in section 2 where convenient. 228 C. J. C. Burges and D. J. Crisp which we will need below. For this formulation, we have the following Theorem 3: For a given solution, define !(Xi, Yi) == Yi - W ? Xi - b, and define Nl to be the set of indices {i : !(Xi, Yi) > fi}, N2 the set {i : !(Xi, Yi) = fd, N3 the set {i : !(Xi,Yi) = -fi}, and N4 the set {i : !(Xi,Yi) < -fi}. Then the solution to (9) - (12) is unique for p > 1, and for p = 1 it is not unique if and only if at least one of the following two conditions holds: Ci L iENIUN2 LC; iEN4 (14) C'!, (15) L Ci L iEN3UN4 iENl Furthermore, whenever the solution is not unique, all solutions share the same w, and all support vectors are at bound (that iss, either f3i = Ci or f3i = Cn, and when (14) holds, then N3 contains no support vectors, and when (15) holds, then N2 contains no support vectors. The theorem shows that in the non-unique case one will only be able to move the tube (and get another solution) if one does not change its normal w. 
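The two-point example of Section 2 makes the degeneracy easy to verify numerically: for C < 1/2 the whole closed-form family quoted above attains the same primal objective. A quick check (C = 0.2 and the grid size are arbitrary choices):

    import numpy as np

    C = 0.2                                   # any C < 1/2 exhibits the degeneracy
    w = 2 * C                                 # the shared, unique normal (margin 1/C)
    for b in np.linspace(-1 + 2 * C, 1 - 2 * C, 5):
        xi1 = 1 - b - w                       # slack of (x_1, y_1) = (+1, +1)
        xi2 = 1 + b - w                       # slack of (x_2, y_2) = (-1, -1)
        F = 0.5 * w ** 2 + C * (xi1 + xi2)    # objective (1) with p = 1
        print(f"b = {b:+.2f}   F = {F:.4f}")  # F = 2C - 2C^2 = 0.32 for every b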
A trivial example of a non-unique solution is when all the data fits inside the ε-tube with room to spare, in which case, for all the solutions, the normal to the ε-tubes always lies along the y direction. Another example is when all C_i are equal, all data falls outside the tube, and there are the same number of points above the tube as below it.

4 Computing b when all SVs are at Bound

The threshold b in Eqs. (2), (10) and (11) is usually determined from that subset of the constraint equations which become equalities at the solution and for which the corresponding Lagrange multipliers are not at bound. However, it may be that at the solution this subset is empty. In this section we consider the situation where the solution is unique, where we have solved the optimization problem and therefore know the values of all Lagrange multipliers, and hence know also w, and where we wish to find the unique value of b for this solution. Since the ξ_i^(*) are known once b is fixed, we can find b by finding that value which both minimizes the cost term in the primal Lagrangian and satisfies all the constraint equations.

Let us consider the pattern recognition case first. Let S_+ (S_-) denote the set of indices of positive (negative) polarity support vectors. Also let V_+ (V_-) denote the set of indices of positive (negative) vectors which are not support vectors. It is straightforward to show that if Σ_{i ∈ S_-} C_i > Σ_{i ∈ S_+} C_i, then

b = max {max_{i ∈ S_-} (-1 - w · x_i), max_{i ∈ V_+} (1 - w · x_i)},

while if Σ_{i ∈ S_-} C_i < Σ_{i ∈ S_+} C_i, then

b = min {min_{i ∈ S_+} (1 - w · x_i), min_{i ∈ V_-} (-1 - w · x_i)}.

Furthermore, if Σ_{i ∈ S_-} C_i = Σ_{i ∈ S_+} C_i, and if the solution is unique, then these two values coincide.

In the regression case, let us denote by S the set of indices of all support vectors, S̄ its complement, S_1 the set of indices for which β_i = C_i, and S_2 the set of indices for which β*_i = C*_i, so that S = S_1 ∪ S_2 (note S_1 ∩ S_2 = ∅). Then if Σ_{i ∈ S_2} C*_i > Σ_{i ∈ S_1} C_i, the desired value of b is

b = max {max_{i ∈ S̄} (y_i - w · x_i + ε_i), max_{i ∈ S} (y_i - w · x_i - ε_i)},

while if Σ_{i ∈ S_2} C*_i < Σ_{i ∈ S_1} C_i, then

b = min {min_{i ∈ S̄} (y_i - w · x_i - ε_i), min_{i ∈ S} (y_i - w · x_i + ε_i)}.

Again, if the solution is unique, and if also Σ_{i ∈ S_2} C*_i = Σ_{i ∈ S_1} C_i, these two values coincide.

5 Discussion

We have shown that non-uniqueness of the SVM solution will be the exception rather than the rule: it will occur only when one can rigidly parallel transport the margin region without changing the total cost. If non-unique solutions are encountered, other techniques for finding the threshold, such as minimizing the Bayes error arising from a model of the SVM posteriors [8], will be needed. The method of proof in the above theorems is straightforward, and should be extendable to similar algorithms, for example Mangasarian's Generalized SVM [11]. In fact one can extend this result to any problem whose objective function consists of a sum of strictly convex and loosely convex functions: for example, it follows immediately that for the case of the ν-SVM pattern recognition and regression estimation algorithms [12], with arbitrary convex costs, the value of the normal w will always be unique.

Acknowledgments

C. Burges wishes to thank W. Keasler, V. Lawrence and C. Nohl of Lucent Technologies for their support.

References

[1] R. Fletcher. Practical Methods of Optimization. John Wiley and Sons, Inc., 2nd edition, 1987.
[2] B. E. Boser, I. M. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, 1992. ACM.
[3] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, Inc., New York, 1998.
[4] A. J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, 22:211-231, 1998.
[5] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[6] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[7] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[8] C. J. C. Burges and B. Schölkopf. Improving the accuracy and speed of support vector learning machines. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 375-381, Cambridge, MA, 1997. MIT Press.
[9] A. Smola and B. Schölkopf. A tutorial on support vector regression. Statistics and Computing, 1998. In press; also COLT Technical Report TR-1998-030.
[10] V. Vapnik, S. Golowich, and A. Smola. Support vector method for function approximation, regression estimation, and signal processing. Advances in Neural Information Processing Systems, 9:281-287, 1996.
[11] O. L. Mangasarian. Generalized support vector machines. Mathematical Programming Technical Report 98-14, University of Wisconsin, October 1998.
[12] B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett. New support vector algorithms. NeuroCOLT2 NC2-TR-1998-031, 1998.
Nonlinear Discriminant Analysis using Kernel Functions

Volker Roth & Volker Steinhage
University of Bonn, Institute of Computer Science III
Romerstrasse 164, D-53117 Bonn, Germany
{roth, steinhag}@cs.uni-bonn.de

Abstract

Fisher's linear discriminant analysis (LDA) is a classical multivariate technique both for dimension reduction and classification. The data vectors are transformed into a low dimensional subspace such that the class centroids are spread out as much as possible. In this subspace LDA works as a simple prototype classifier with linear decision boundaries. However, in many applications the linear boundaries do not adequately separate the classes. We present a nonlinear generalization of discriminant analysis that uses the kernel trick of representing dot products by kernel functions. The presented algorithm allows a simple formulation of the EM-algorithm in terms of kernel functions, which leads to a unified concept for unsupervised mixture analysis, supervised discriminant analysis and semi-supervised discriminant analysis with partially unlabelled observations in feature spaces.

1 Introduction

Classical linear discriminant analysis (LDA) projects N data vectors that belong to c different classes into a (c-1)-dimensional space in such a way that the ratio of between-group scatter S_B and within-group scatter S_W is maximized [1]. LDA formally consists of an eigenvalue decomposition of S_W^{-1} S_B, leading to the so-called canonical variates which contain the whole class-specific information in a (c-1)-dimensional subspace. The canonical variates can be ordered by decreasing eigenvalue size, indicating that the first variates contain the major part of the information. As a consequence, this procedure allows low dimensional representations and therefore a visualization of the data. Besides interpreting LDA only as a technique for dimensionality reduction, it can also be seen as a multi-class classification method: the set of linear discriminant functions defines a partition of the projected space into regions that are identified with class membership. A new observation x is assigned to the class with centroid closest to x in the projected space.

To overcome the limitation of only linear decision functions, some attempts have been made to incorporate nonlinearity into the classical algorithm. Hastie et al. [2] introduced the so-called model of Flexible Discriminant Analysis: LDA is reformulated in the framework of linear regression estimation, and a generalization of this method is given by using nonlinear regression techniques. The proposed regression techniques implement the idea of using nonlinear mappings to transform the input data into a new space in which again a linear regression is performed. In real world applications this approach has to deal with numerical problems due to the dimensional explosion resulting from nonlinear mappings. In recent years, approaches that avoid such explicit mappings by using kernel functions have become popular. The main idea is to construct algorithms that only afford dot products of pattern vectors, which can be computed efficiently in high-dimensional spaces. Examples of this type of algorithm are the Support Vector Machine [3] and Kernel Principal Component Analysis [4]. In this paper we show that it is possible to formulate classical linear regression, and therefore also linear discriminant analysis, exclusively in terms of dot products.
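The kernel identity on which everything below rests can be checked directly in a small case: for the homogeneous quadratic kernel k(z, z') = (z·z')^2, an explicit map into the space of degree-2 monomials reproduces the kernel as an ordinary dot product (a standard textbook example, not specific to this paper):

    import numpy as np

    def phi(z):
        """Degree-2 monomial map for 2-D input: (z1^2, z2^2, sqrt(2) z1 z2)."""
        return np.array([z[0] ** 2, z[1] ** 2, np.sqrt(2.0) * z[0] * z[1]])

    z, w = np.array([1.0, 2.0]), np.array([3.0, -1.0])
    assert np.isclose(np.dot(phi(z), phi(w)), np.dot(z, w) ** 2)  # k(z,w) = (z.w)^2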
Therefore, kernel methods can be used to construct a nonlinear variant of discriminant analysis. We call this technique Kernel Discriminant Analysis (KDA). Contrary to a similar approach that has been published recently [5], our algorithm is a real multi-class classifier and inherits from classical LDA the convenient property of data visualization.

2 Review of Linear Discriminant Analysis

Under the assumption of the data being centered (i.e. Σ_i x_i = 0), the scatter matrices S_B and S_W are defined by

S_B = Σ_{j=1}^{c} (1/n_j) Σ_{l,m=1}^{n_j} x_l^(j) (x_m^(j))^T    (1)

S_W = Σ_{j=1}^{c} Σ_{l=1}^{n_j} (x_l^(j) - x̄^(j)) (x_l^(j) - x̄^(j))^T,  with  x̄^(j) = (1/n_j) Σ_{m=1}^{n_j} x_m^(j)    (2)

where n_j is the number of patterns x_l^(j) that belong to class j. LDA chooses a transformation matrix V that maximizes the objective function

J(V) = |V^T S_B V| / |V^T S_W V|.    (3)

The columns of an optimal V are the generalized eigenvectors that correspond to the nonzero eigenvalues in S_B v_i = λ_i S_W v_i. In [6] and [7] we have shown that the standard LDA algorithm can be restated exclusively in terms of dot products of input vectors. The final equation is an eigenvalue equation in terms of dot product matrices, which are of size N × N. Since the solution of high-dimensional generalized eigenvalue equations may cause numerical problems (N may be large in real world applications), we present an improved algorithm that reformulates discriminant analysis as a regression problem. Moreover, this version allows a simple implementation of the EM-algorithm in feature spaces.

3 Linear regression analysis

In this section we give a brief review of linear regression analysis, which we use as a "building block" for LDA. The task of linear regression analysis is to approximate the regression function by a linear function

r(x) = E(Y | X = x) ≈ c + x^T β    (4)

on the basis of a sample (y_1, x_1), …, (y_N, x_N). Let now y denote the vector (y_1, …, y_N)^T and X denote the data matrix whose rows are the input vectors. Using a quadratic loss function, the optimal parameters c and β are chosen to minimize the average squared residual

ASR = N^{-1} [ ||y - c 1_N - X β||^2 + β^T Ω β ].    (5)

Here 1_N denotes an N-vector of ones, and Ω denotes a ridge-type penalty matrix Ω = εI which penalizes the coefficients of β. Assuming the data to be centered, i.e. Σ_{i=1}^N x_i = 0, the parameters of the regression function are given by:

c = N^{-1} Σ_{i=1}^{N} y_i =: μ_y,    β = (X^T X + εI)^{-1} X^T y.    (6)
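Equations (5) and (6) translate directly into code. A small sketch of the penalized fit with Ω = εI (the data, the true coefficients and the value of ε are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 5))
    X -= X.mean(axis=0)                        # centered data, as assumed above
    y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

    eps = 1e-2
    beta = np.linalg.solve(X.T @ X + eps * np.eye(5), X.T @ y)   # Eq. (6)
    c = y.mean()                                                 # c = mu_y
    r = lambda x_new: c + x_new @ beta                           # Eq. (4)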
According to (6), for a given score 9 the minimizing {3 is given by {3os = (XT X + 0)-1 xT Z(J, (8) and the partially minimized criteri9n becomes: minASR(9,{3) = 1- N-19TZ?(0)Z9, (9) (3 where M(O) = X(XTX +O)-IX T denotes the regularized hat or smoother matrix. Minimizing of (9) under the constraint ~ IIZ9W = 1 can be performed by the following procedure: 1. Choose an initial matrix 8 0 satisfying the constraint N- 1 8'{; ZT Z8 0 = I and set 8 0 = Z8 0 ~ 2. Run a multi-response regression of 8 0 onto X: 8 0 = M(0)80 = XB, where B is the ma.1rix of regression coefficients. 3. Eigenanalyze 8 0T 8 0to obtain the optimal scores, and update the matrix of regression coefficients: B* = BW, with W being the matrix of eigenvectors. It can be shown, that the final matrix B* is , up to a diagonal scale matrix, equivalent to the matrix of LDA-vectors, see [8]. 5 Ridge regression using only dot products The penalty matrix 0 in (5) assures that the penalized d x d covariance matrix i: = XT X + d is a symmetric nonsingular matrix. Therefore, it has d eigenvectors ei with accomplished positive eigenvalues Ii such that the following equations hold: - -1 ~ d 1 T =" 6 -eie ? i=1 Ii (10) t The first equation implies that the first 1 leading eigenvectors ei with eigenvalues > ? have an expansion in terms of the input vectors. Note that 1 is the number of nonzero eigenvalues of the unpenalized covariance matrix X T X. Together with (6), it follows for the general case, when the dimensionality d may extend l, that {3 can be written as the sum of two terms: an expansion in terms of the vectors Xi with coefficients ai and a similar expansion in terms of the remaining eigenvectors: Ii {3 = I:N . aixi + I:d)=1+1 . ~jej t=1 = XT a + I:d. )=1+1 ~jej , (11) with a = (a1 ... an) T. However, the last term can be dropped, since every eigenvector ej, j = 1 + 1, ... ,d is orthogonal to every vector Xi and does not influence the value of the regression function (4). The problem of penalized linear regression can therefore be stated as minimizing Nonlinear Discriminant Analysis Using Kernel Functions ASR(a) = N- 1 [Ily - XXT al1 2 + aTXOXTaJ. A stationary vector a is determined by a = (XXT + O)-ly. 571 (12) (13) Let now the dot product matrix K be defined by Kij = xT Xj and let for a given test point (Xl) the dot product vector kl be defined by kl = XXI . With this notation the regression function of a test point (xL) reads r(Xl) = /-Ly + klT( K + ?I )-1 y . (14) This equation requires only dot products and we can apply the kernel trick. The final equation (14), up to the constant term /-Ly , has also been found by SAUNDERS et al., [9J . They restated ridge regression in dual variables and optimized the resulting criterion function with a lagrange multiplier technique. Note that our derivation, which is a direct generalization of the standard linear regression formalism, leads in a natural way to a class of more general regression functions including the constant term. 6 LDA using only dot products Setting f3 = XT a as in (11) and using the notation of section 5, for a given score is given by: aas = (XXT + 0)-1 ZO . (15) o the optimal vector a Analogous to (9), the partially minimized criterion becomes: min ASR(O, a) = 1 - N- 1 0T ZT M(O)ZO, ex with M(O) (16) = XXT(XXT + 0)-1 = K(K + ?I)-l. To minimize (16) under the constraint tv IIZOW = 1 the procedure described in section 4 can be used when M(O) is substituted by M(O). 
The matrix Y which rows are the input vectors projected onto the column vectors of B* is given by: Y = XB* = K(K + ?I)-l Z8 oW. (17) Note that again the dot product matrix K is all that is needed to calculate Y. 7 The kernel trick The main idea of constructing nonlinear algorithms is to apply the linear methods not in the space of observations but in a feature space F that is related to the former by a nonlinear mapping ? : RN ---+ F, X ---+ ?(x) . Assuming that the mapped data are centered in F, i.e. L~=l ?(Xi) = 0, the presented algorithms remain formally unchanged if the dot product matrix K is computed in F: Kij = (?(Xi) . ?(Xj)). As shown in [4], this assumption can be dropped by ?(Xi ) := ?(Xi) - ~ L~=l ?(Xi). writing ? instead of the mapping ?: Computation of dot products in feature spaces can be done efficiently by using kernel functions k(xi, Xj) [3]: For some choices of k there exists a mapping ? into some feature space F such that k acts as a dot product in F. Among possible kernel functions there are e.g. Radial Basis Function (RBF) kernels of the form k(x,y) = exp(-llx - YW/c). 8 The EM-algorithm in feature spaces LDA can be derived as the maximum likelihood method for normal populations with different means and common covariance matrix ~ (see [11]) . Coding the class membership of the observations in the matrix Z as in section 4, LDA maximizes the (complete data) log-likelihood function V. Roth and V. Steinhage 572 This concept can be generalized for the case that only the group membership of Nc < N observations is known ([14], p.679): the EM-algorithm provides a convenient method for maximizing the likelihood function with missing data: E-step: set Pki = Prob(xi E class k) Zik' Pki = { if the class membership of Xi has been observed ? '" ( ) [1/2( Xi - ILk )T~-l ( ot h erWlse, 'f'k Xi ex exp L.; Xi 1Tk ?>" (z.) Lk=l 1Tk?>"(Z;) ' - ILk )] M-step: set 1 N 'irk = N LPki' i=l The idea behind this approach is that even an unclassified observation can be used for estimation if it is given a proper weight according to its posterior probability for class membership. The M-step can be seen as weighted mean and covariance maximum likelihood estimates in a weighted and augmented problem: we augment the data by replicating the N observations c times, with the l-th such replication having observation weights Plio The maximization of the likelihood function can be achieved via a weighted and augmented LDA. It turns out that it is not necessary to explicitly replicate the observations and run a standard LDA: the optimal scoring version of LDA described in section 4 allows an implicit solution of the augmented problem that still uses only N observations. Instead of using a response indicator matrix Z, one uses a blurred response Matrix Z, whose rows consist of the current class probabilities for each observation. At each M-step this Z is used in a multiple linear regression followed by an eigen-decomposition. A detailed derivation is given in [11]. Since we have shown that the optimal scoring problem can be solved in feature spaces using kernel functions this is also the case for the whole EM-algorithm: the E-step requires only differences in Mahalonobis distances which are supplied by KDA. After iterated application of the E- and M-step an observation is classified to the class k with highest probability Pk. 
This leads to a unique framework for pure mixture analysis (Nc = 0), pure discriminant analysis (Nc = N) and the semisupervised models of discriminant analysis with partially unclassified observations (0 < Nc < N) in feature spaces. 9 Experiments Waveform data: We illustrate KDA on a popular simulated example, taken from [10], pA9-55 and used in [2, 11]. It is a three class problem with 21 variables. The learning set consisted of 100 observations per class. The test set was of size 1000. The results are given in table 1. Table 1: Results for waveform data. The values are averages over 10 simulations. The 4 entries above the line are taken from [11]. QDA: quadratic discriminant analysis, FDA: flexible discriminant analysis, MDA: mixture discriminant analysis. Technique Training Error [%] Test Error [%] LDA 12.1(0.6) 19.1(0.6) QDA 3.9(OA) 20.5(0.6) FDA (best model parameters) 10.0(0.6) 19.1(0.6) MDA (best model parameters) 13.9{0.5) 15.5(0.5) KDA (RBF kernel, (7 = 2, ? = 1.5) 10.7(0.6) 14.1(0.7) 573 Nonlinear Discriminant Analysis Using Kernel Functions The Bayes risk for the problem is about 14% [10]. KDA outperforms the other nonlinear versions of discriminant analysis and reaches the Bayes rate within the error bounds, indicating that one cannot expect significant further improvement using other classifiers. Figure 1 demonstrates the data visualization property of KDA. Since for a 3 class problem the dimensionality of the projected space equals 2, the data can be visualized without any loss of information. In the left plot one can see the projected learn data and the class centroids, the right plot shows the test data and again the class centroids of the learning set. Figure 1: Data visualization with KDA. Left: learn set, right: test set To demonstrate the effect of using unlabeled data for classification we repeated the experiment with waveform data using only 20 labeled observations per class. We compared the the classification results on a test set of size 300 using only the labeled data (error rate E 1 ) with the results of the EM-model which considers the test data as incomplete measurements during an iterative maximization of the likelihood function (error rate E2). Using a RBF kernel (0" = 250), we obtained the following mean error rates over 20 simulations: El = 30 .5(3.6)%, E2 = 17.1(2.7)%. The classification performance could be drastically improved when including the unlabelled data into the learning process. Object recognition: We tested KDA on the MPI Chair Database l . It consists of 89 regular spaced views form the upper viewing hemisphere of 25 different classes of chairs as a training set and 100 random views of each class as a test set. The available images are downscaled to 16 x 16 pixels. We did not use the additional 4 edge detection patterns for each view. Classification results for several classifiers are given in table 2. KDA poly. kernel 2.1 For a comparison of the computational performance we also trained the SVM-light implementation (V 2.0) on the data, [13]. In this experiment with 25 classes the KDA algorithm showed to be Significantly faster than the SVM: using the RBFkernel, KDA was 3 times faster, with the polynomial kernel KDA was 20 times faster than SVM-light. 10 Discussion In this paper we present a nonlinear version of classical linear discriminant analysis. The main idea is to map the input vectors into a high- or even infinite dimensional feature space and to apply LDA in this enlarged space. 
10 Discussion

In this paper we present a nonlinear version of classical linear discriminant analysis. The main idea is to map the input vectors into a high- or even infinite-dimensional feature space and to apply LDA in this enlarged space. Restating LDA in a way that requires only dot products of input vectors makes it possible to use kernel representations of those dot products. This overcomes numerical problems in high-dimensional feature spaces. We studied the classification performance of the KDA classifier on simulated waveform data and on the MPI chair database, which has been widely used for benchmarking in the literature. For medium-size problems, especially if the number of classes is high, the KDA algorithm proved to be significantly faster than an SVM while attaining the same classification performance. From classical LDA the presented algorithm inherits the convenient property of data visualization, since it allows low-dimensional views of the data vectors. This makes an intuitive interpretation possible, which is helpful in many practical applications. The presented KDA algorithm can be used as the maximization step in an EM algorithm in feature spaces. This allows unlabeled observations to be included in the learning process, which can improve classification results. Studying the performance of KDA on other classification problems, as well as a theoretical comparison of the optimization criteria used in the KDA and SVM algorithms, will be the subject of future work.

Acknowledgements

This work was supported by Deutsche Forschungsgemeinschaft, DFG. We profited heavily from discussions with Armin B. Cremers, John Held and Lothar Hermes.

References

[1] R. Duda and P. Hart. Pattern Classification and Scene Analysis. Wiley & Sons, 1973.
[2] T. Hastie, R. Tibshirani, and A. Buja. "Flexible discriminant analysis by optimal scoring," JASA, vol. 89, pp. 1255-1270, 1994.
[3] V. N. Vapnik. Statistical Learning Theory. Wiley & Sons, 1998.
[4] B. Schölkopf, A. Smola, and K.-R. Müller. "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, vol. 10, no. 5, pp. 1299-1319, 1998.
[5] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. "Fisher discriminant analysis with kernels," in Neural Networks for Signal Processing IX (Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, eds.), pp. 41-48, IEEE, 1999.
[6] V. Roth and V. Steinhage. "Nonlinear discriminant analysis using kernel functions," Tech. Rep. IAI-TR-99-7, Department of Computer Science III, Bonn University, 1999.
[7] V. Roth, A. Pogoda, V. Steinhage, and S. Schröder. "Pattern recognition combining feature- and pixel-based classification within a real world application," in Mustererkennung 1999 (W. Förstner, J. Buhmann, A. Faber, and P. Faber, eds.), Informatik aktuell, pp. 120-129, 21. DAGM Symposium, Bonn, Springer, 1999.
[8] T. Hastie, A. Buja, and R. Tibshirani. "Penalized discriminant analysis," Annals of Statistics, vol. 23, pp. 73-102, 1995.
[9] C. Saunders, A. Gammerman, and V. Vovk. "Ridge regression learning algorithm in dual variables," Tech. Rep., Royal Holloway, University of London, 1998.
[10] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Monterey, CA: Wadsworth and Brooks/Cole, 1984.
[11] T. Hastie and R. Tibshirani. "Discriminant analysis by Gaussian mixtures," JRSSB, vol. 58, pp. 158-176, 1996.
[12] B. Schölkopf. Support Vector Learning. PhD thesis, R. Oldenbourg Verlag, Munich, 1997.
[13] T. Joachims. "Making large-scale SVM learning practical," in Advances in Kernel Methods - Support Vector Learning (B. Schölkopf, C. Burges, and A. Smola, eds.), MIT Press, 1999.
[14] B. Flury. A First Course in Multivariate Statistics. Springer, 1997.
Potential Boosters?

Nigel Duffy
Department of Computer Science
University of California
Santa Cruz, CA 95064
nigeduff@cse.ucsc.edu

David Helmbold
Department of Computer Science
University of California
Santa Cruz, CA 95064
dph@cse.ucsc.edu

Abstract

Recent interpretations of the AdaBoost algorithm view it as performing a gradient descent on a potential function. Simply changing the potential function allows one to create new algorithms related to AdaBoost. However, these new algorithms are generally not known to have the formal boosting property. This paper examines the question of which potential functions lead to new algorithms that are boosters. The two main results are general sets of conditions on the potential; one set implies that the resulting algorithm is a booster, while the other implies that the algorithm is not. These conditions are applied to previously studied potential functions, such as those used by LogitBoost and Doom II.

1 Introduction

The first boosting algorithm appeared in Rob Schapire's thesis [1]. This algorithm was able to boost the performance of a weak PAC learner [2] so that the resulting algorithm satisfies the strong PAC learning [3] criteria. We will call any method that builds a strong PAC learning algorithm from a weak PAC learning algorithm a PAC boosting algorithm. Freund and Schapire later found an improved PAC boosting algorithm called AdaBoost [4], which also tends to improve the hypotheses generated by practical learning algorithms [5].

The AdaBoost algorithm takes a labeled training set and produces a master hypothesis by repeatedly calling a given learning method. The given learning method is used with different distributions on the training set to produce different base hypotheses. The master hypothesis returned by AdaBoost is a weighted vote of these base hypotheses. AdaBoost works iteratively, determining which examples are poorly classified by the current weighted vote and selecting a distribution on the training set to emphasize those examples.

Recently, several researchers [6, 7, 8, 9, 10] have noticed that AdaBoost is performing a constrained gradient descent on an exponential potential function of the margins of the examples. The margin of an example is yF(x), where y is the ±1-valued label of the example x and F(x) ∈ R is the net weighted vote of the master hypothesis F. Once AdaBoost is seen this way, it is clear that further algorithms may be derived by changing the potential function [6, 7, 9, 10].

The exponential potential used by AdaBoost has the property that the influence of a data point increases exponentially if it is repeatedly misclassified by the base hypotheses. This concentration on the "hard" examples allows AdaBoost to rapidly obtain a consistent hypothesis (assuming that the base hypotheses have certain properties). However, it also means that an incorrectly labeled or noisy example can quickly attract much of the distribution. It appears that this lack of noise-tolerance is one of AdaBoost's few drawbacks [11]. Several researchers [7, 8, 9, 10] have proposed potential functions which do not concentrate as much on these "hard" examples. However, they generally do not show that the derived algorithms have the formal boosting property. In this paper we return to the original motivation behind boosting algorithms and ask: "for which potential functions does gradient descent lead to PAC boosting algorithms?" (i.e. boosters that create strong PAC learning algorithms from arbitrary weak PAC learners).
We give necessary conditions that are met by some of the proposed potential functions (most notably the LogitBoost potential introduced by Friedman et al. [7]). Furthermore, we show that simple gradient descent on other proposed potential functions (such as the sigmoidal potential used by Mason et al. [10]) cannot convert arbitrary weak PAC learning algorithms into strong PAC learners. The aim of this work is to identify properties of potential functions required for PAC boosting, in order to guide the search for more effective potentials. Some potential functions have an additional tunable parameter [10] or change over time [12]. Our results do not yet apply to such dynamic potentials.

2 PAC Boosting

Here we define the notions of PAC learning¹ and boosting, and define the notation used throughout the paper. A concept C is a subset of the learning domain X. A random example of C is a pair (x ∈ X, y ∈ {−1, +1}) where x is drawn from some distribution on X and y = 1 if x ∈ C and −1 otherwise. A concept class is a set of concepts.

Definition 1 A (strong) PAC learner for concept class C has the property that for every distribution D on X, all concepts C ∈ C, and all 0 < ε, δ < 1/2: with probability at least 1 − δ the algorithm outputs a hypothesis h where P_D[h(x) ≠ C(x)] ≤ ε. The learning algorithm is given C, ε, δ, and the ability to draw random examples of C (w.r.t. distribution D), and must run in time bounded by poly(1/ε, 1/δ).

Definition 2 A weak PAC learner is similar to a strong PAC learner, except that it need only satisfy the conditions for a particular 0 < ε₀, δ₀ < 1/2 pair, rather than for all ε, δ pairs.

Definition 3 A PAC boosting algorithm is a generic algorithm which can leverage any weak PAC learner to meet the strong PAC learning criteria.

In the remainder of the paper we emphasize boosting the accuracy ε, as it is much easier to boost the confidence δ; see Haussler et al. [13] and Freund [14] for details. Furthermore, we emphasize boosting by re-sampling, where the strong PAC learner draws a large sample, and each iteration the weak learning algorithm is called with some distribution over this sample.

¹ To simplify the presentation we omit the instance space dimension and target representation length parameters.

Throughout the paper we use the following notation.

• m is the cardinality of the fixed sample {(x₁, y₁), ..., (x_m, y_m)}.
• h_t(x) is the ±1-valued weak hypothesis created at iteration t.
• a_t is the weight or vote of h_t in the master hypothesis; the a's may or may not be normalized so that Σ_{t'=1}^t a_{t'} = 1.
• F_t(x) = Σ_{t'=1}^t a_{t'} h_{t'}(x) / Σ_{t'=1}^t a_{t'} ∈ R is the master hypothesis² at iteration t.
• u_{i,t} = y_i Σ_{t'=1}^t a_{t'} h_{t'}(x_i) is the margin of x_i after iteration t; the t subscript is often omitted. Note that the margin is positive when the master hypothesis is correct, and the normalized margin is u_{i,t} / Σ_{t'=1}^t a_{t'}.
• p(u) is the potential of an instance with margin u, and the total potential is Σ_{i=1}^m p(u_i).
• P_D[·], P_S[·], and E_S[·] are the probability with respect to the unknown distribution over the domain, and the probability and expectation with respect to the uniform distribution over the sample, respectively.

Our results apply to total potential functions of the form Σ_{i=1}^m p(u_i) where p is positive and strictly decreasing.
3 Leveraging Learners by Gradient Descent

AdaBoost [4] has recently been interpreted as gradient descent independently by several groups [6, 7, 8, 9, 10]. Under this interpretation AdaBoost is seen as minimizing the total potential Σ_{i=1}^m p(u_i) = Σ_{i=1}^m exp(−u_i) via feasible-direction gradient descent. On each iteration t+1, AdaBoost chooses the direction of steepest descent as the distribution on the sample, and calls the weak learner to obtain a new base hypothesis h_{t+1}. The weight a_{t+1} of this new weak hypothesis is calculated to minimize³ the resulting potential Σ_{i=1}^m p(u_{i,t+1}) = Σ_{i=1}^m exp(−(u_{i,t} + a_{t+1} y_i h_{t+1}(x_i))). This gradient descent idea has been generalized to other potential functions [6, 7, 10]. Duffy et al. [9] prove bounds for a similar gradient descent technique using a non-componentwise, non-monotonic potential function.

Note that if the weak learner returns a good hypothesis h_t (with training error at most ε < 1/2), then Σ_{i=1}^m D_t(x_i) y_i h_t(x_i) > 1 − 2ε > 0. We set τ = 1 − 2ε, and assume that each base hypothesis produced satisfies Σ_{i=1}^m D_t(x_i) y_i h_t(x_i) ≥ τ.

In this paper we consider this general gradient descent approach applied to various potentials Σ_{i=1}^m p(u_i). Note that each potential function p has two corresponding gradient descent algorithms (see [6]). The un-normalized algorithms (like AdaBoost) continually add in new weak hypotheses while preserving the old a's. The normalized algorithms re-scale the a's so that they always sum to 1. In general, we call such algorithms "leveraging algorithms", reserving the term "boosting" for those that actually have the PAC boosting property.

² The prediction of the master hypothesis on instance x is the sign of F_t(x).
³ Our current proofs require that the actual a_t's be no greater than a constant (say 1). Therefore, this minimizing a may need to be reduced.
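To make the generic recipe concrete, here is a minimal numpy sketch of the un-normalized leveraging loop for an arbitrary potential. It is an illustration under my own naming, not the authors' code, and it uses a crude grid line search (capped at 1, cf. footnote 3) in place of an exact minimization.

```python
import numpy as np

def leverage(X, y, weak_learner, p, dp, T=50):
    """Un-normalized leveraging by gradient descent on sum_i p(u_i).
    p is the potential and dp its derivative p'; weak_learner(X, y, D)
    must return a function h with h(X) in {-1, +1}^m."""
    m = len(y)
    u = np.zeros(m)                     # margins u_{i,t}
    ensemble = []                       # pairs (a_t, h_t)
    for _ in range(T):
        D = -dp(u)                      # steepest-descent direction ...
        D = D / D.sum()                 # ... normalized to a distribution
        h = weak_learner(X, y, D)
        hx = h(X)
        # grid line search for a_t, kept no greater than 1
        grid = np.linspace(0.0, 1.0, 101)
        a = grid[np.argmin([p(u + g * y * hx).sum() for g in grid])]
        u = u + a * y * hx
        ensemble.append((a, h))
    return ensemble

# AdaBoost is recovered with p(u) = exp(-u); 'stumps' is a hypothetical
# weak learner:
# ensemble = leverage(X, y, stumps,
#                     lambda v: np.exp(-v), lambda v: -np.exp(-v))
```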
4 Potentials that Don't Boost

In this section we describe sufficient conditions on potential functions under which the corresponding leveraging algorithm does not have the PAC boosting property. We apply these conditions to show that two potentials from the literature do not lead to boosting algorithms.

Theorem 1 Let p(u) be a potential function for which: 1) the derivative p′(u) is increasing (−p′(u) decreasing) on R⁺, and 2) there exists β > 0 such that for all u > 0, −βp′(u) ≥ −p′(−2u). Then neither the normalized nor the un-normalized leveraging algorithm corresponding to potential p has the PAC boosting property.

This theorem is proven by an adversary argument. Whenever the concept class is sufficiently rich⁴, the adversary can keep a constant fraction of the sample from being correctly labeled by the master hypothesis. Thus, as the error tolerance ε goes to zero, the master hypotheses will not be sufficiently accurate.

We now apply this theorem to two potential functions from the literature. Friedman et al. [7] describe a potential they call "Squared Error(p)", where the potential at x_i is ((y_i + 1)/2 − e^{F(x_i)}/(e^{F(x_i)} + e^{−F(x_i)}))². This potential can be re-written as

p_SE(u_i) = (1/4) [1 + 2(e^{−u_i} − e^{u_i})/(e^{u_i} + e^{−u_i}) + ((e^{−u_i} − e^{u_i})/(e^{u_i} + e^{−u_i}))²].

Corollary 1 Potential "Squared Error(p)" does not lead to a boosting algorithm.

Proof: This potential satisfies the conditions of Theorem 1. It is strictly decreasing, and the second condition holds for β = 2.

Mason et al. [10] examine a normalized algorithm using the potential p_D(u) = 1 − tanh(λu). Their algorithm optimizes over choices of λ via cross-validation, and uses weak learners with slightly different properties. However, we can plug this potential directly into the gradient descent framework and examine the resulting algorithms.

Corollary 2 The DOOM II potential p_D does not lead to a boosting algorithm for any fixed λ.

Proof: The potential is strictly decreasing, and the second condition of Theorem 1 holds for β = 1.

Our techniques show that potentials that are sigmoidal in nature do not lead to algorithms with the PAC boosting property. Since sigmoidal potentials are generally better over-estimates of the 0-1 loss than the potential used by AdaBoost, our results imply that boosting algorithms must use a potential with more subtle properties than simply upper-bounding the 0-1 loss.

⁴ The VC-dimension-4 concept class consisting of pairs of intervals on the real line is sufficient for our adversary.
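Theorem 1's second condition can be checked numerically for the potentials above. The following sketch (names mine) shows that the ratio −p′(−2u)/−p′(u) stays bounded for the sigmoidal DOOM II potential, so some finite β exists, but diverges for AdaBoost's exponential potential, which is why the exponential potential escapes the theorem.

```python
import numpy as np

u = np.linspace(1e-3, 20.0, 4000)

def sup_ratio(neg_dp):
    # smallest beta satisfying -beta p'(u) >= -p'(-2u) on this grid
    return np.max(neg_dp(-2.0 * u) / neg_dp(u))

neg_dp_doom = lambda v: 1.0 / np.cosh(v) ** 2  # p(u) = 1 - tanh(u), lambda = 1
neg_dp_exp  = lambda v: np.exp(-v)             # p(u) = exp(-u)

print(sup_ratio(neg_dp_doom))  # ~1.0: beta = 1 works, Theorem 1 applies
print(sup_ratio(neg_dp_exp))   # ~e^60: unbounded, condition 2 fails
```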
5 Potential Functions That Boost

In this section we give sufficient conditions on a potential function for its corresponding un-normalized algorithm to have the PAC boosting property. This result implies that AdaBoost [4] and LogitBoost [7] have the PAC boosting property (although this was previously known for AdaBoost [4], we believe this is a new result for LogitBoost).

One set of conditions on the potential implies that it decreases roughly exponentially when the (un-normalized) margins are large. Once the margins are in this exponential region, ideas similar to those used in AdaBoost's analysis show that the minimum normalized margin quickly becomes bounded away from zero. This allows us to bound the generalization error using a theorem from Bartlett et al. [15]. A second set of conditions governs the behavior of the potential function before the un-normalized margins are large enough. These conditions imply that the total potential decreases by a constant factor each iteration. Therefore, too much time will not be spent before all the margins enter the exponential region. The margin value bounding the exponential region is U, and once Σ_{i=1}^m p(u_i) ≤ p(U), all margins will remain in the exponential region. The following theorem gives conditions on p ensuring that Σ_{i=1}^m p(u_i) quickly becomes less than p(U).

Theorem 2 If the following conditions hold for p(u) and U: 1) −p′(u) is strictly decreasing; 2) there exists q > 0 such that 0 < p″(u) ≤ −q p′(u) for all u > U; and 3) there exists B such that p(u)/(−p′(u)) ≤ B; then Σ_{i=1}^m p(u_i) ≤ p(U) after T₁ ≤ (4Bq²m² / (p(U)²τ²)) ln(m p(0)/p(U)) iterations.

The proof of this theorem approximates the new total potential by the old potential minus a times a linear term, plus an error. By bounding the error as a function of a and minimizing, we demonstrate that some values of a give a sufficient decrease in the total potential.

Theorem 3 If the following conditions hold for p(u), U, q, and iteration T₁: 1) there exists β ≥ √3 such that −p′(u+v) ≤ p(u+v) ≤ −q p′(u) β^{−v} whenever −1 ≤ v ≤ 1 and u > U; 2) Σ_{i=1}^m p(u_{i,T₁}) ≤ p(U); 3) −p′(u) is strictly decreasing; and 4) there exist C > 0 and γ > 1 such that C p(u) ≥ γ^{−u} for all u > U; then for all t ≥ T₁ the total potential is bounded by a quantity which decreases exponentially in t.

The proof of this theorem is a generalization of the AdaBoost proof. Combining these two theorems and the generalization bound from Theorem 2 of Bartlett et al. [15] gives the following result, where d is the VC dimension of the weak hypothesis class.

Theorem 4 If for all edges 0 < τ < 1/2 there exist T_{1,τ} ≤ poly(m, 1/τ), U_τ, and q_τ satisfying the conditions of Theorem 3, such that p(U_τ) ≥ poly(τ) and q_τ √(1 − τ²) = l(τ) < 1 − poly(τ), then in time poly(m, 1/τ) all examples have normalized margin at least θ = ln((l(τ) + 1)/(2 l(τ))) / ln(γ), and

P_D[y F_T(x) ≤ 0] ∈ O( (1/√m) [ ln²(γ) d log²(m/d) / (ln(l(τ) + 1) − ln(2 l(τ)))² + log(1/δ) ]^{1/2} ).

Choosing m appropriately makes the error rate sufficiently small, so that the algorithm corresponding to p has the PAC boosting property. We now apply Theorem 4 to show that the AdaBoost and LogitBoost potentials lead to boosting algorithms.

6 Some Boosting Potentials

In this section we show, as a direct consequence of our Theorem 4, that the potential functions for AdaBoost and LogitBoost lead to boosting algorithms. Note that the LogitBoost algorithm we analyze is not exactly the same as that described by Friedman et al. [7]; their "weak learner" optimizes a square loss, which appears to better fit the potential. First we re-derive the boosting property for AdaBoost.

Corollary 3 AdaBoost's [16] potential boosts.

Proof: To prove this we simply need to show that the potential p(u) = exp(−u) satisfies the conditions of Theorem 4. This is done by setting U_τ = −ln(m), q_τ = 1, γ = β = e, C = 1, and T₁ = 0.

Corollary 4 The log-likelihood potential (as used in LogitBoost [7]) boosts.

Proof: In this case p(u) = ln(1 + e^{−u}) and −p′(u) = e^{−u}/(1 + e^{−u}). We set γ = β = e, C = 2, U_τ = −ln(e^{τ²/2} − 1), and q_τ = 1 + exp(−U_τ) = e^{τ²/2}. Theorem 2 then shows that after T₁ ≤ poly(m, 1/τ) iterations the conditions of Theorem 4 are satisfied.

7 Conclusions

In this paper we have examined leveraging weak learners using a gradient descent approach [9]. This approach is a direct generalization of the AdaBoost [4, 16] algorithm, where AdaBoost's exponential potential function is replaced by alternative potentials. We demonstrated properties of potentials sufficient to show that the resulting algorithms are PAC boosters, and other properties that imply that the resulting algorithms are not PAC boosters. We applied these results to several potential functions from the literature [7, 10, 16].

New insight can be gained from examining our criteria carefully. The conditions that show boosting leave tremendous freedom in the choice of potential function for values less than some U; perhaps this freedom can be used to choose potential functions which do not overly concentrate on noisy examples. There is still a significant gap between these two sets of properties; we are still a long way from classifying arbitrary potential functions as to their boosting properties.

There are other classes of leveraging algorithms. One class looks at the distances between successive distributions [17, 18]. Another class changes its potential over time [6, 8, 12, 14]. The criteria for boosting may change significantly with these different approaches. For example, Freund recently presented a boosting algorithm [12] that uses a time-varying sigmoidal potential. It would be interesting to adapt our techniques to such dynamic potentials.
References

[1] Robert E. Schapire. The Design and Analysis of Efficient Learning Algorithms. MIT Press, 1992.
[2] Michael Kearns and Leslie Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM, 41(1):67-95, January 1994.
[3] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, November 1984.
[4] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997.
[5] Eric Bauer and Ron Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting and variants. Machine Learning, 36(1-2):105-139, 1999.
[6] Leo Breiman. Arcing the edge. Technical Report 486, Department of Statistics, University of California, Berkeley, 1997. Available at www.stat.berkeley.edu.
[7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Technical report, Stanford University, 1998.
[8] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 2000. To appear.
[9] Nigel Duffy and David P. Helmbold. A geometric approach to leveraging weak learners. In Paul Fischer and Hans Ulrich Simon, editors, Computational Learning Theory: 4th European Conference (EuroCOLT '99), pages 18-33. Springer-Verlag, March 1999.
[10] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Boosting algorithms as gradient descent. To appear in NIPS 2000.
[11] Thomas G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning. To appear.
[12] Yoav Freund. An adaptive version of the boost-by-majority algorithm. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 102-113. ACM, 1999.
[13] David Haussler, Michael Kearns, Nick Littlestone, and Manfred K. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95(2):129-161, December 1991.
[14] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256-285, September 1995.
[15] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, 1998.
[16] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, December 1999.
[17] Jyrki Kivinen and Manfred K. Warmuth. Boosting as entropy projection. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 134-144. ACM, 1999.
[18] John Lafferty. Additive models, boosting, and inference for generalized divergences. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 125-133. ACM, 1999.
Constrained Hidden Markov Models

Sam Roweis
roweis@gatsby.ucl.ac.uk
Gatsby Unit, University College London

Abstract

By thinking of each state in a hidden Markov model as corresponding to some spatial region of a fictitious topology space, it is possible to naturally define neighbouring states as those which are connected in that space. The transition matrix can then be constrained to allow transitions only between neighbours; this means that all valid state sequences correspond to connected paths in the topology space. I show how such constrained HMMs can learn to discover underlying structure in complex sequences of high-dimensional data, and apply them to the problem of recovering mouth movements from acoustics in continuous speech.

1 Latent variable models for structured sequence data

Structured time-series are generated by systems whose underlying state variables change in a continuous way but whose state-to-output mappings are highly nonlinear, many to one, and not smooth. Probabilistic unsupervised learning for such sequences requires models with two essential features: latent (hidden) variables and topology in those variables. Hidden Markov models (HMMs) can be thought of as dynamic generalizations of discrete state static data models such as Gaussian mixtures, or as discrete state versions of linear dynamical systems (LDSs) (which are themselves dynamic generalizations of continuous latent variable models such as factor analysis). While both HMMs and LDSs provide probabilistic latent variable models for time-series, both have important limitations.

Traditional HMMs have a very powerful model of the relationship between the underlying state and the associated observations, because each state stores a private distribution over the output variables. This means that any change in the hidden state can cause arbitrarily complex changes in the output distribution. However, it is extremely difficult to capture reasonable dynamics on the discrete latent variable, because in principle any state is reachable from any other state at any time step and the next state depends only on the current state. LDSs, on the other hand, have an extremely impoverished representation of the outputs as a function of the latent variables, since this transformation is restricted to be global and linear. But it is somewhat easier to capture state dynamics, since the state is a multidimensional vector of continuous variables on which a matrix "flow" is acting; this enforces some continuity of the latent variables across time.

Constrained hidden Markov models address the modeling of state dynamics by building some topology into the hidden state representation. The essential idea is to constrain the transition parameters of a conventional HMM so that the discrete-valued hidden state evolves in a structured way.¹ In particular, below I consider parameter restrictions which constrain the state to evolve as a discretized version of a continuous multivariate variable, i.e. so that it inscribes only connected paths in some space. This lends a physical interpretation to the discrete state trajectories in an HMM.

¹ A standard trick in traditional speech applications of HMMs is to use "left-to-right" transition matrices, which are a special case of the type of constraints investigated in this paper. However, left-to-right (Bakis) HMMs force state trajectories that are inherently one-dimensional and uni-directional, whereas here I also consider higher dimensional topology and free omni-directional motion.
2 An illustrative game

Consider playing the following game: divide a sheet of paper into several contiguous, non-overlapping regions which between them cover it entirely. In each region inscribe a symbol, allowing symbols to be repeated in different regions. Place a pencil on the sheet and move it around, reading out (in order) the symbols in the regions through which it passes. Add some noise to the observation process, so that some fraction of the time incorrect symbols are reported in the list instead of the correct ones. The game is to reconstruct the configuration of regions on the sheet from only such an ordered list (or lists) of noisy symbols. Of course, the absolute scale, rotation and reflection of the sheet can never be recovered, but learning the essential topology may be possible.² Figure 1 illustrates this setup.

Figure 1: (left) True map which generates symbol sequences by random movement between connected cells. (centre) An example noisy output sequence with noisy symbols circled. (right) Learned map after training on 3 sequences (with 15% noise probability), each 200 symbols long. Each cell actually contains an entire distribution over all observed symbols, though in this case only the upper right cell has significant probability mass on more than one symbol (see figure 3 for display details).

Without noise or repeated symbols, the game is easy (non-probabilistic methods can solve it), but in their presence it is not. One way of mitigating the noise problem is to do statistical averaging. For example, one could attempt to use the average separation in time of each pair of symbols to define a dissimilarity between them. It would then be possible to use methods like multi-dimensional scaling or a sort of Kohonen mapping through time³ to explicitly construct a configuration of points obeying those distance relations. However, such methods still cannot deal with many-to-one state-to-output mappings (repeated numbers in the sheet), because by their nature they assign a unique spatial location to each symbol.

Playing this game is analogous to doing unsupervised learning on structured sequences. (The game can also be played with continuous outputs, although often high-dimensional data can be effectively clustered around a manageable number of prototypes; thus a vector time-series can be converted into a sequence of symbols.) Constrained HMMs incorporate latent variables with topology, yet retain powerful nonlinear output mappings, and can deal with the difficulties of noise and many-to-one mappings mentioned above; so they can "win" our game (see figs. 1 & 3). The key insight is that the game generates sequences exactly according to a hidden Markov process whose transition matrix allows only transitions between neighbouring cells, and whose output distributions have most of their probability on a single symbol with a small amount on all other symbols to account for noise.

² The observed symbol sequence must be "informative enough" to reveal the map structure (this can be quantified using the idea of persistent excitation from control theory).

³ Consider a network of units which compete to explain input data points. Each unit has a position in the output space as well as a position in a lower dimensional topology space.
The winning unit has its position in output space updated towards the data point; but also the recent (in time) winners have their positions in topology space updated towards the topology-space location of the current winner. Such a rule works well, and yields topological maps in which nearby units code for data that typically occur close together in time. However, it cannot learn many-to-one maps in which more than one unit at different topology locations has the same (or very similar) outputs.

3 Model definition: state topologies from cell packings

Defining a constrained HMM involves identifying each state of the underlying (hidden) Markov chain with a spatial cell in a fictitious topology space. This requires selecting a dimensionality d for the topology space and choosing a packing (such as hexagonal or cubic) which fills the space. The number of cells in the packing is equal to the number of states M in the original Markov model. Cells are taken to be all of equal size and (since the scale of the topology space is completely arbitrary) of unit volume. Thus, the packing covers a volume M in topology space with a side length l of roughly l = M^{1/d}. The dimensionality and packing together define a vector-valued function x(m), m = 1 ... M, which gives the location of cell m in the packing. (For example, a cubic packing of d-dimensional space defines x(m+1) to be [m, ⌊m/l⌋, ⌊m/l²⌋, ..., ⌊m/l^{d−1}⌋] mod l.) State m in the Markov model is assigned to cell m in the packing, thus giving it a location x(m) in the topology space. Finally, we must choose a neighbourhood rule in the topology space which defines the neighbours of cell m; for example, all "connected" cells, all face neighbours, or all those within a certain radius. (For cubic packings, there are 3^d − 1 connected neighbours and 2d face neighbours in a d-dimensional topology space.) The neighbourhood rule also defines the boundary conditions of the space; e.g. periodic boundary conditions would make cells on opposite extreme faces of the space neighbours with each other.

The transition matrix of the HMM is now preprogrammed to allow only transitions between neighbours. All other transition probabilities are set to zero, making the transition matrix very sparse. (I have set all permitted transitions to be equally likely.) Now, all valid state sequences in the underlying Markov model represent connected ("city block") paths through the topology space. Figure 2 illustrates this for a three-dimensional model.

Figure 2: (left) Physical depiction of the topology space for a constrained HMM with d=3, l=4 and M=64, showing an example state trajectory. (right) Corresponding transition matrix structure for the 64-state HMM computed using face-centred cubic packing. The gaps in the inner bands are due to edge effects.
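As an illustration of this construction, here is a minimal numpy sketch that builds the cell coordinates x(m) and the uniform sparse transition matrix for a cubic packing. The function names are mine, the cell ordering differs from the formula above (the neighbourhood structure is the same), and I include self-transitions, which the text does not explicitly specify.

```python
import numpy as np
from itertools import product

def cubic_packing(d, l):
    """Coordinates x(m) of the M = l**d cells of a d-dimensional
    cubic packing with side length l."""
    return np.array(list(product(range(l), repeat=d)))      # M x d

def constrained_transitions(coords, face_only=False):
    """Transition matrix allowing only self-transitions and moves
    between neighbouring cells, all permitted moves equally likely."""
    diff = np.abs(coords[:, None, :] - coords[None, :, :])  # M x M x d
    if face_only:
        allowed = diff.sum(axis=2) <= 1   # 2d face neighbours (plus self)
    else:
        allowed = diff.max(axis=2) <= 1   # 3**d - 1 connected neighbours (plus self)
    A = allowed.astype(float)
    return A / A.sum(axis=1, keepdims=True)

# e.g. the d=3, l=4, M=64 model of Figure 2:
# coords = cubic_packing(3, 4); A = constrained_transitions(coords)
```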
4 State inference and learning

The constrained HMM has exactly the same inference procedures as a regular HMM: the forward-backward algorithm for computing state occupation probabilities, and the Viterbi decoder for finding the single best state sequence. Once these discrete state inferences have been performed, they can be transformed using the state position function x(m) to yield probability distributions over the topology space (in the case of forward-backward) or paths through the topology space (in the case of Viterbi decoding). This transformation makes the outputs of state decodings in constrained HMMs comparable to the outputs of inference procedures for continuous state dynamical systems, such as Kalman smoothing.

The learning procedure for constrained HMMs is also almost identical to that for HMMs. In particular, the EM algorithm (Baum-Welch) is used to update model parameters. The crucial difference is that the transition probabilities, which are precomputed by the topology and packing, are never updated during learning. In fact, this makes learning much easier in some cases. Not only do the transition probabilities not have to be learned, but their structure constrains the hidden state sequences in such a way as to make the learning of the output parameters much more efficient when the underlying data really does come from a spatially structured generative model. Figure 3 shows an example of parameter learning for the game discussed above. Notice that in this case, each part of state space had only a single output (except for noise), so the final learned output distributions became essentially minimum entropy. But constrained HMMs can in principle model stochastic or multimodal output processes, since each state stores an entire private distribution over outputs.

Figure 3: Snapshots of model parameters during constrained HMM learning for the game described in section 2. At every iteration each cell in the map has a complete distribution over all of the observed symbols. Only the top three symbols of each cell's histogram are shown, with font size proportional to the square root of probability (to make ink roughly proportional). The map was trained on 3 noisy sequences, each 200 symbols long, generated from the map on the left of figure 1 using 15% noise probability. The final map after convergence (30 iterations) is shown on the right of figure 1.

5 Recovery of mouth movements from speech audio

I have applied the constrained HMM approach described above to the problem of recovering mouth movements from the acoustic waveform in human speech. Data containing simultaneous audio and articulator movement information was obtained from the University of Wisconsin X-ray microbeam database [9]. Eight separate points (four on the tongue, one on each lip and two on the jaw) located in the midsagittal plane of the speaker's head were tracked while subjects read various words, sentences, paragraphs and lists of numbers. The x and y coordinates (to within about ±1mm) of each point were sampled at 146Hz by an X-ray system which located gold beads attached to the feature points on the mouth, producing a 16-dimensional vector every 6.9ms. The audio was sampled at 22kHz with roughly 14 bits of amplitude resolution, but in the presence of machine noise.

These data are well suited to the constrained HMM architecture. They come from a system whose state variables are known, because of physical constraints, to move in connected paths in a low degree-of-freedom space. In other words, the (normally hidden) articulators (movable structures of the mouth), whose positions represent the underlying state of the speech production system,⁴ move slowly and smoothly. The observed speech signal (the system's output) can be characterized by a sequence of short-time spectral feature vectors, often known as a spectrogram.
In the experiments reported here, I have characterized the audio signal using 12 line spectral frequencies (LSFs), measured every 6.9ms (to coincide with the articulatory sampling rate) over a 25ms window. These LSF vectors characterize only the spectral shape of the speech waveform over a short time, but not its energy. Average energy (also over a 25ms window every 6.9ms) was measured as a separate one-dimensional signal. Unlike the movements of the articulators, the audio spectrum/energy can exhibit quite abrupt changes, indicating that the mapping between articulator positions and spectral shape is not smooth. Furthermore, the mapping is many to one: different articulator configurations can produce very similar spectra (see below).

The unsupervised learning task, then, is to explain the complicated sequences of observed spectral features (LSFs) and energies as the outputs of a system with a low-dimensional state vector that changes slowly and smoothly. In other words, can we learn the parameters⁵ of a constrained HMM such that connected paths through the topology space (state space) generate the acoustic training data with high likelihood? Once this unsupervised learning task has been performed, we can (as I show below) relate the learned trajectories in the topology space to the true (measured) articulator movements.

⁴ Articulator positions do not provide complete state information. For example, the excitation signal (voiced or unvoiced) is not captured by the bead locations. They do, however, provide much important information; other state information is easily accessible directly from acoustics.

⁵ Model structure (dimensionality and number of states) is currently set using cross-validation.

While many models of the speech production process predict the many-to-one and non-smooth properties of the articulatory-to-acoustic mapping, it is useful to confirm these features by looking at real data. Figure 4 shows the experimentally observed distribution of articulator configurations used to produce similar sounds. It was computed as follows. All the acoustic and articulatory data for a single speaker are collected together. Starting with some sample called the key sample, I find the 1000 samples "nearest" to this key by two measures: articulatory distance, defined using the Mahalanobis norm between two position vectors under the global covariance of all positions for the appropriate speaker, and spectral shape distance, again defined using the Mahalanobis norm, but now between two line spectral frequency vectors using the global LSF covariance of the speaker's audio data. In other words, I find the 1000 samples that "look most like" the key sample in mouth shape and that "sound most like" the key sample in spectral shape. I then plot the tongue bead positions of the key sample (as a thick cross), and the 1000 nearest samples by mouth shape (as a thick ellipse) and spectral shape (as dots). The points of primary interest are the dots; they show the distribution of tongue positions used to generate very similar sounds. (The thick ellipses are shown only as a control, to ensure that many nearby points to the key sample do exist in the dataset.) Spread or multimodality in the dots indicates that many different articulatory configurations are used to generate the same sound.
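The nearest-sample computation just described is straightforward; a small numpy sketch follows (function and variable names are mine):

```python
import numpy as np

def nearest_mahalanobis(Z, key, n=1000):
    """Indices of the n samples nearest to Z[key] under the Mahalanobis
    norm given by the global covariance of the rows of Z (T x d)."""
    Sinv = np.linalg.inv(np.cov(Z, rowvar=False))
    diff = Z - Z[key]
    d2 = np.einsum('td,de,te->t', diff, Sinv, diff)
    return np.argsort(d2)[1:n + 1]   # drop the key sample itself

# dots of Figure 4: tongue positions of samples that *sound* alike
# idx = nearest_mahalanobis(lsf_vectors, key)   # acoustic neighbours
# plot the articulator configurations tongue_beads[idx]
```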
Figure 4: Inverse mapping from acoustics to articulation is ill-posed in real speech production data. Each group of four articulator-space plots shows the 1000 samples which are "nearest" to one key sample (thick cross). The dots are the 1000 nearest samples using an acoustic measure based on line spectral frequencies. Spread or multimodality in the dots indicates that many different articulatory configurations are used to generate very similar sounds. Only the positions of the four tongue beads have been plotted. Two examples (with different key samples) are shown, one in the left group of four panels and another in the right group. The thick ellipses (shown as a control) are the two-standard-deviation contour of the 1000 nearest samples using an articulatory position distance metric.

Why not do direct supervised learning from short-time spectral features (LSFs) to the articulator positions? The ill-posed nature of the inverse problem, as shown in figure 4, makes this impossible. To illustrate this difficulty, I have attempted to recover the articulator positions from the acoustic feature vectors using Kalman smoothing on an LDS. In this case, since we have access to both the hidden states (articulator positions) and the system outputs (LSFs), we can compute the optimal parameters of the model directly. (In particular, the state transition matrix is obtained by regression from articulator positions and velocities at time t onto positions at time t+1; the output matrix by regression from articulator positions and velocities onto LSF vectors; and the noise covariances from the residuals of these regressions.) Figure 5b shows the results of such smoothing; the recovery is quite poor.

Constrained HMMs can be applied to this recovery problem, as previously reported [6]. (My earlier results used a small subset of the same database that was not continuous speech, and did not provide the hard experimental verification (fig. 4) of the many-to-one problem.)

Figure 5: (A) Recovered articulator movements using state inference on a constrained HMM. A four-dimensional model with 4096 states was trained on data (all beads) from a single speaker, but not including the test utterance shown. Dots show the actual measured articulator movements for a single bead coordinate versus time; the thin lines are estimated movements from the corresponding acoustics. (B) Unsuccessful recovery of articulator movements using Kalman smoothing on a global LDS model. All the (speaker-dependent) parameters of the underlying linear dynamical system are known; they have been set to their optimal values using the true movement information from the training data. Furthermore, for this example, the test utterance shown was included in the training data used to estimate model parameters. (C) All 16 bead coordinates; all vertical axes are on the same scale. Bead names are shown on the left. Horizontal movements are plotted in the left-hand column and vertical movements in the right-hand column. The separation between the two horizontal lines near the centre of the right panel indicates the machine measurement error. The single-bead panels are titled "Recovery of tongue tip vertical motion from acoustics" (A) and "Kalman smoothing on optimal linear dynamical system" (B); their horizontal axes run over time in seconds.
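The direct parameter estimates behind the Kalman-smoothing control of Figure 5B can be written compactly. Below is a minimal numpy sketch following the description in the text (positions and velocities as the state); the names are mine, and details such as the velocity estimate are my own choices.

```python
import numpy as np

def fit_lds_by_regression(S, O):
    """Directly estimate LDS parameters when the hidden states are observed.
    S: T x d articulator positions; O: T x p acoustic features (LSFs)."""
    V = np.vstack([np.zeros((1, S.shape[1])), np.diff(S, axis=0)])  # velocities
    X = np.hstack([S, V])              # T x 2d state (positions, velocities)
    ls = lambda A, B: np.linalg.lstsq(A, B, rcond=None)[0]
    A = ls(X[:-1], S[1:])              # transition: state_t -> positions_{t+1}
    C = ls(X, O)                       # output: state_t -> LSF_t
    Q = np.cov((S[1:] - X[:-1] @ A).T) # state noise from regression residuals
    R = np.cov((O - X @ C).T)          # output noise from regression residuals
    return A, C, Q, R
```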
The basic idea is to train (unsupervised) on sequences of acoustic-spectral features and then map the topology-space state trajectories onto the measured articulatory movements. Figure 5 shows movement recovery using state inference in a four-dimensional model with 4096 states (d=4, l=8, M=4096) trained on data (all beads) from a single speaker. (Naive unsupervised learning runs into severe local minima problems. To avoid these, in the simulations shown above, models were trained by slowly annealing two learning parameters⁶: a term ε^β was used in place of the zeros in the sparse transition matrix, and γ_t^β was used in place of γ_t = p(m_t | observations) during inference of state occupation probabilities. The inverse temperature β was raised from 0 to 1.) To infer a continuous state trajectory from an utterance after learning, I first do Viterbi decoding on the acoustics to generate a discrete state sequence m_t, and then interpolate smoothly between the positions x(m_t) of each state.

⁶ An easier way (which I have used previously) to find good minima is to initialize the models using the articulatory data themselves. This does not provide as impressive "structure discovery" as annealing, but still yields a system capable of inverting acoustics into articulatory movements on previously unseen test data. First, a constrained HMM is trained on just the articulatory movements; this works easily because of the natural geometric (physical) constraints. Next, I take the distribution of acoustic features (LSFs) over all times (in the training data) when Viterbi decoding places the model in a particular state, and use those LSF distributions to initialize an equivalent acoustic constrained HMM. This new model is then retrained until convergence using Baum-Welch.
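The decoding step just described is a standard Viterbi pass over the sparse log-transition matrix; a self-contained sketch is below (names mine; a uniform initial state distribution is assumed). Its output path indexes into the cell positions x(m) of section 3, which can then be smoothly interpolated as in the text.

```python
import numpy as np

def viterbi(log_trans, log_obs):
    """Most likely state path for an HMM. log_trans: M x M log transition
    matrix (the zeros of the constrained model become -inf); log_obs:
    T x M log-likelihoods of each observation under each state."""
    T, M = log_obs.shape
    delta = log_obs[0].copy()          # uniform initial distribution assumed
    back = np.zeros((T, M), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans     # scores[i, j]: reach j via i
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(M)] + log_obs[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Continuous trajectory from the discrete path, e.g. with the earlier sketch:
# coords = cubic_packing(4, 8)              # d=4, l=8, M=4096 as in the text
# trajectory = coords[viterbi(logA, logB)]  # then interpolate smoothly
```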
6 Conclusions, extensions and other work

By enforcing a simple constraint on the transition parameters of a standard HMM, a link can be forged between discrete state dynamics and the motion of a real-valued state vector in a continuous space. For complex time-series generated by systems whose underlying latent variables do in fact change slowly and smoothly, such constrained HMMs provide a powerful unsupervised learning paradigm. They can model state-to-output mappings that are highly nonlinear, many-to-one and not smooth. Furthermore, they rely only on well understood learning and inference procedures that come with convergence guarantees. Results on synthetic and real data show that these models can successfully capture the low-dimensional structure present in complex vector time-series. In particular, I have shown that a speaker-dependent constrained HMM can accurately recover articulator movements from continuous speech to within the measurement error of the data.

This acoustic-to-articulatory inversion problem has a long history in speech processing (see e.g. [7] and references therein). Many previous approaches have attempted to exploit the smoothness of articulatory movements for inversion or modeling: Hogden et al. (e.g. [4]) provided early inspiration for my ideas, but do not address the many-to-one problem; Simon Blackburn [1] has investigated a forward mapping from articulation to acoustics but does not explicitly attempt inversion; early work at Waterloo [5] suggested similar constraints for improving speech recognition systems but did not look at real articulatory data; more recent work at Rutgers [2] developed a very similar system much further with good success. Carreira-Perpinan [3] considers a related problem in sequence learning using EPG speech data as an example.

While in this note I have described only "diffusion" type dynamics (transitions to all neighbours are equally likely), it is also possible to consider directed flows which give certain neighbours of a state lower (or zero) probability. The left-to-right HMMs mentioned earlier are an example of this for one-dimensional topologies. For higher dimensions, flows can be derived from discretization of matrix (linear) dynamics or from other physical/structural constraints. It is also possible to have many connected local flow regimes (either diffusive or directed) rather than one global regime as discussed above; this gives rise to mixtures of constrained HMMs which have block-structured rather than banded transition matrices. Smyth [8] has considered such models in the case of one-dimensional topologies and directed flows; I have applied these to learning character sequences from English text. Another application I have investigated is map learning from multiple sensor readings. An explorer (robot) navigates in an unknown environment and records at each time many local measurements such as altitude, pressure, temperature, humidity, etc. We wish to reconstruct from only these sequences of readings the topographic maps (in each sensor variable) of the area as well as the trajectory of the explorer. A final application is tracking (inferring movements) of articulated bodies using video measurements of feature positions.

References
[1] S. Blackburn & S. Young. ICSLP 1996, Philadelphia, v.2, pp.969-972.
[2] S. Chennoukh et al. Eurospeech 1997, Rhodes, Greece, v.1, pp.429-432.
[3] M. Carreira-Perpinan. NIPS'12, 2000. (This volume.)
[4] D. Nix & J. Hogden. NIPS'11, 1999, pp.744-750.
[5] G. Ramsay & L. Deng. J. Acoustical Society of America, 95(5), 1994, p.2873.
[6] S. Roweis & A. Alwan. Eurospeech 1997, Rhodes, Greece, v.3, pp.1227-1230.
[7] J. Schroeter & M. Sondhi. IEEE Trans. Speech & Audio Processing, 2(1), 1994, pp.133-150.
[8] P. Smyth. NIPS'9, 1997, pp.648-654.
[9] J. Westbury. X-ray microbeam speech production database user's handbook, version 1.0. University of Wisconsin, Madison, June 1994.
Algebraic Analysis for Non-Regular Learning Machines
Sumio Watanabe
Precision and Intelligence Laboratory, Tokyo Institute of Technology
4259 Nagatsuta, Midori-ku, Yokohama 223 Japan
swatanab@pi.titech.ac.jp

Abstract
Hierarchical learning machines are non-regular and non-identifiable statistical models, whose true parameter sets are analytic sets with singularities. Using algebraic analysis, we rigorously prove that the stochastic complexity of a non-identifiable learning machine is asymptotically equal to λ_1 log n - (m_1 - 1) log log n + const., where n is the number of training samples. Moreover, we show that the rational number λ_1 and the integer m_1 can be algorithmically calculated using resolution of singularities in algebraic geometry. We also obtain the inequalities 0 < λ_1 ≤ d/2 and 1 ≤ m_1 ≤ d, where d is the number of parameters.

1 Introduction
Hierarchical learning machines such as multi-layer perceptrons, radial basis functions, and normal mixtures are non-regular and non-identifiable learning machines. If the true distribution is almost contained in a learning model, then the set of true parameters is not one point but an analytic variety [4][9][3][10]. This paper establishes the mathematical foundation to analyze such learning machines based on algebraic analysis and algebraic geometry.

Let us consider a learning machine represented by a conditional probability density p(x|w), where x is an M-dimensional vector and w is a d-dimensional parameter. We assume that n training samples x^n = {X_i; i = 1, 2, ..., n} are independently taken from the true probability distribution q(x), and that the set of true parameters

W_0 = {w ∈ W ; p(x|w) = q(x) (a.s. q(x))}

is not empty. In Bayes statistics, the estimated distribution p(x|x^n) is defined by

p(x|x^n) = ∫ p(x|w) p_n(w) dw,   p_n(w) = (1/Z_n) Π_{i=1}^n p(X_i|w) φ(w),

where φ(w) is an a priori probability density on R^d, and Z_n is a normalizing constant. The generalization error is defined by

K(n) = E_{x^n} { ∫ q(x) log [q(x) / p(x|x^n)] dx },

where E_{x^n}{·} shows the expectation value over all training samples x^n. One of the main purposes in learning theory is to clarify how fast K(n) converges to zero as n tends to infinity. Using the log-loss function h(x, w) = log q(x) - log p(x|w), we define the Kullback distance and the empirical one,

H(w) = ∫ h(x, w) q(x) dx,   H(w, x^n) = (1/n) Σ_{i=1}^n h(X_i, w).

Note that the set of true parameters is equal to the set of zeros of H(w), W_0 = {w ∈ W ; H(w) = 0}. If the true parameter set W_0 consists of only one point, the learning machine p(x|w) is called identifiable, and otherwise non-identifiable. It should be emphasized that, in non-identifiable learning machines, W_0 is not a manifold but an analytic set with singular points, in general. Let us define the stochastic complexity by

F(n) = -E_{x^n} { log ∫ exp(-n H(w, x^n)) φ(w) dw }.   (1)

Then we have an important relation between the stochastic complexity F(n) and the generalization error K(n),

K(n) = F(n + 1) - F(n),

which represents that K(n) is equal to the increase of F(n) [1]. In this paper, we show the rigorous asymptotic form of the stochastic complexity F(n) for general non-identifiable learning machines.

2 Main Results
We need three assumptions upon which the main results are proven.
(A.1) The probability density φ(w) is infinitely many times continuously differentiable and its support, W ≡ supp φ, is compact. In other words, φ ∈ C_0^∞.
(A.2) The log loss function, h(x, w) = log q(x) - log p(x|w), is continuous for x in the support Q ≡ supp q, and is analytic for w in an open set W' ⊃ W.
(A.3) Let {r_j(x, w*); j = 1, 2, ..., d} be the associated convergence radii of h(x, w) at w*; in other words, the Taylor expansion of h(x, w) at w* = (w_1*, ..., w_d*),

h(x, w) = Σ_{k_1,...,k_d=0}^∞ a_{k_1 k_2 ··· k_d}(x) (w_1 - w_1*)^{k_1} (w_2 - w_2*)^{k_2} ··· (w_d - w_d*)^{k_d},

absolutely converges in |w_j - w_j*| < r_j(x, w*). Assume

inf_{x∈Q} inf_{w*∈W} r_j(x, w*) > 0   for j = 1, 2, ..., d.

Theorem 1. Assume (A.1), (A.2), and (A.3). Then, there exist a rational number λ_1 > 0, a natural number m_1, and a constant C, such that

|F(n) - λ_1 log n + (m_1 - 1) log log n| < C

holds for an arbitrary natural number n.

Remarks. (1) If q(x) is compactly supported, then the assumption (A.3) is automatically satisfied. (2) Without assumptions (A.1) and (A.3), we can prove the upper bound, F(n) ≤ λ_1 log n - (m_1 - 1) log log n + const.

From Theorem 1, if the generalization error K(n) has an asymptotic expansion, then it should be

K(n) = λ_1/n - (m_1 - 1)/(n log n) + o(1/(n log n)).

As is well known, if the model is identifiable and has a positive definite Fisher information matrix, then λ_1 = d/2 (d is the dimension of the parameter space) and m_1 = 1. However, hierarchical learning models such as multi-layer perceptrons, radial basis functions, and normal mixtures have smaller λ_1 and larger m_1; in other words, hierarchical models are better learning machines than regular ones if Bayes estimation is applied.
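The growth rate in Theorem 1 can be checked numerically even in one dimension. The following toy illustration (my own, not from the paper) estimates -log ∫ exp(-n H(w)) φ(w) dw, the noiseless idealization of F(n), by Monte Carlo for H(w) = w^2 (a regular model, λ_1 = d/2 = 1/2) and H(w) = w^4 (λ_1 = 1/4), with φ uniform on [-1, 1]; the ratio of the estimate to log n approaches λ_1 in both cases.

```python
import numpy as np

# Toy illustration, not from the paper: F(n)/log n -> lambda_1.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, 1_000_000)           # samples from phi(w)
for H, lam in [(w ** 2, 0.5), (w ** 4, 0.25)]:  # loss values, expected lambda_1
    for n in (10 ** 2, 10 ** 4, 10 ** 6):
        F = -np.log(np.mean(np.exp(-n * H)))    # Monte Carlo estimate
        print(f"lambda_1={lam}: n={n:>7d}  F(n)/log n = {F / np.log(n):.3f}")
```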
Constants λ_1 and m_1 are characterized by the following theorem.

Theorem 2. Assume the same conditions as Theorem 1. Let ε > 0 be a sufficiently small constant. The holomorphic function in Re(z) > 0,

J(z) = ∫_{H(w)<ε} H(w)^z φ(w) dw,

can be analytically continued to the entire complex plane as a meromorphic function whose poles are on the negative part of the real axis, and the constants -λ_1 and m_1 in Theorem 1 are equal to the largest pole of J(z) and its multiplicity, respectively.

The proofs of the above theorems are outlined in the following section. Let w = g(u) be an arbitrary analytic function from a set U ⊂ R^d to W. Then J(z) is invariant under the mapping

{H(w), φ(w)} → {H(g(u)), φ(g(u)) |g'(u)|},

where |g'(u)| = |det(∂w_i/∂u_j)| is the Jacobian. This fact shows that λ_1 and m_1 are invariant under a bi-rational mapping. In section 4, we show an algorithm to calculate λ_1 and m_1 by using this invariance and resolution of singularities.

3 Mathematical Structure
In this section, we present an outline of the proof and its mathematical structure.

3.1 Upper bound and b-function
For a sufficiently small constant ε > 0, we define F*(n) by

F*(n) = -log ∫_{H(w)<ε} exp(-n H(w)) φ(w) dw.

Then, by using Jensen's inequality, we obtain F(n) ≤ F*(n). To evaluate F*(n), we need the b-function in algebraic analysis [6][7]. Sato, Bernstein, Björk, and Kashiwara proved that, for an arbitrary analytic function H(w), there exist a differential operator D(w, ∂_w, z), which is a polynomial in z, and a polynomial b(z), whose zeros are rational numbers on the negative part of the real axis, such that

D(w, ∂_w, z) H(w)^{z+1} = b(z) H(w)^z   (2)

for any z ∈ C and any w ∈ W_ε = {w ∈ W ; H(w) < ε}. By using the relation eq.(2), the holomorphic function in Re(z) > 0,

J(z) ≡ ∫_{H(w)<ε} H(w)^z φ(w) dw = (1/b(z)) ∫_{H(w)<ε} H(w)^{z+1} D*φ(w) dw,

can be analytically continued to the entire complex plane as a meromorphic function whose poles are on the negative part of the real axis. The poles, which are rational numbers ordered from the origin towards minus infinity, are referred to as -λ_1, -λ_2, -λ_3, ..., and their multiplicities are referred to as m_1, m_2, m_3, .... Let c_{km} be the coefficient of the m-th order term of the Laurent expansion of J(z) at -λ_k. Then

J_K(z) ≡ J(z) - Σ_{k=1}^K Σ_{m=1}^{m_k} c_{km} / (z + λ_k)^m   (3)

is holomorphic in Re(z) > -λ_{K+1}, and |J_K(z)| → 0 (|z| → ∞, Re(z) > -λ_{K+1}). Let us define a function I(t) = ∫ δ(t - H(w)) φ(w) dw for 0 < t < ε and I(t) = 0 otherwise. Then I(t) is connected with J(z) by the relations

J(z) = ∫_0^1 t^z I(t) dt,   F*(n) = -log ∫_0^1 exp(-n t) I(t) dt.

The inverse Laplace transform gives the asymptotic expansion of I(t) as t → 0, resulting in the asymptotic expansion of F*(n),

F*(n) = -log ∫_0^n exp(-t) I(t/n) dt/n = λ_1 log n - (m_1 - 1) log log n + O(1),

which is the upper bound of F(n).

3.2 Lower Bound
We define a random variable

A(x^n) = sup_{w∈W} | n^{1/2} (H(w, x^n) - H(w)) / H(w)^{1/2} |.   (4)

Then, we prove in the Appendix that there exists a constant C_0, which is independent of n, such that

E_{x^n} { A(x^n)^2 } < C_0.   (5)

By using the inequality ab ≤ (a^2 + b^2)/2,

n H(w, x^n) ≥ n H(w) - A(x^n) (n H(w))^{1/2} ≥ (1/2) { n H(w) - A(x^n)^2 },   (6)

which derives a lower bound: the second term in eq.(6) is bounded in expectation by (5), and

∫_{H(w)<ε} exp(-n H(w)/2) φ(w) dw ≤ const. n^{-λ_1} (log n)^{m_1 - 1},
∫_{H(w)≥ε} exp(-n H(w)/2) φ(w) dw ≤ exp(-n ε / 2),

which proves the lower bound of F(n),

F(n) ≥ λ_1 log n - (m_1 - 1) log log n + const.

4 Resolution of Singularities
In this section, we construct a method to calculate λ_1 and m_1. First of all, we cover the compact set W_0 with a finite union of open sets W_α; in other words, W_0 ⊂ ∪_α W_α. Hironaka's resolution of singularities [5][2] ensures that, for an arbitrary analytic function H(w), we can algorithmically find an open set U_α ⊂ R^d (U_α contains the origin) and an analytic function g_α : U_α → W_α such that

H(g_α(u)) = a(u) u_1^{k_1} u_2^{k_2} ··· u_d^{k_d}   (u ∈ U_α)   (7)

where a(u) > 0 is a positive function and the k_i ≥ 0 (1 ≤ i ≤ d) are even integers (a(u) and the k_i depend on U_α). Note that the Jacobian satisfies |g_α'(u)| = 0 if and only if u ∈ g_α^{-1}(W_0), and

φ(g_α(u)) |g_α'(u)| = Σ_{finite} c_{p_1,...,p_d} u_1^{p_1} u_2^{p_2} ··· u_d^{p_d} + R(u).   (8)

By combining eq.(7) with eq.(8), we obtain

J_α(z) ≡ ∫_{W_α} H(w)^z φ(w) dw = ∫_{U_α} a(u)^z (u_1^{k_1} ··· u_d^{k_d})^z ( Σ c_{p_1,...,p_d} u_1^{p_1} ··· u_d^{p_d} + R(u) ) du_1 ··· du_d.

For real z, max_α J_α(z) ≤ J(z) ≤ Σ_α J_α(z), so that

λ_1 = min_α min_{(p_1,...,p_d)} min_{1≤q≤d} (p_q + 1)/k_q,

and m_1 is equal to the number of q which attain this minimum.

Remark. In a neighborhood of w_0 ∈ W_0, the analytic function H(w) is equivalent to a polynomial H_{w_0}(w); in other words, there exist constants c_1, c_2 > 0 such that c_1 H_{w_0}(w) ≤ H(w) ≤ c_2 H_{w_0}(w). Hironaka's theorem constructs the resolution map g_α for any polynomial H_{w_0}(w) algorithmically in finitely many procedures (blowing-ups of nonsingular manifolds in the singularities are recursively applied [5]). From the above discussion, we obtain the inequality 1 ≤ m_1 ≤ d. Moreover, since there exists γ > 0 such that H(w) ≤ γ|w - w_0|^2 in a neighborhood of w_0 ∈ W_0, we obtain λ_1 ≤ d/2.
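The final formula invites a direct computation. The sketch below (my own illustration) assumes the resolution has already been carried out, so that each local chart is summarized by its exponent vectors k (from eq.(7)) and p (from eq.(8)); it then locates the largest pole of J(z) from the poles z = -(p_j + 1)/k_j of ∫ u_j^{k_j z + p_j} du_j near u_j = 0, together with its multiplicity.

```python
from fractions import Fraction

def largest_pole(charts):
    """charts: list of (k, p) exponent tuples, one per local chart, with
    H(g(u)) ~ prod u_j**k[j] and phi(g(u))|g'(u)| ~ prod u_j**p[j].
    Returns (lambda_1, m_1): the smallest (p_j+1)/k_j over all charts and
    coordinates, and the largest number of coordinates attaining it."""
    lam_1, m_1 = None, 0
    for k, p in charts:
        poles = [Fraction(pj + 1, kj) for kj, pj in zip(k, p) if kj > 0]
        lam = min(poles)
        mult = poles.count(lam)            # multiplicity within this chart
        if lam_1 is None or lam < lam_1:
            lam_1, m_1 = lam, mult
        elif lam == lam_1:
            m_1 = max(m_1, mult)           # pole orders do not cancel
    return lam_1, m_1
```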
Example. Let us consider a model with (x, y) ∈ R^2 and w = (a, b, c, d) ∈ R^4,

p(x, y|w) = p_0(x) (1/(2π)^{1/2}) exp(-(1/2)(y - ψ(x, w))^2),
ψ(x, a, b, c, d) = a tanh(bx) + c tanh(dx),

where p_0(x) is a compact-support probability density (not estimated). We also assume that the true regression function is y = ψ(x, 0, 0, 0, 0). The set of true parameters is

W_0 = {E_x ψ(x, a, b, c, d)^2 = 0} = {ab + cd = 0 and ab^3 + cd^3 = 0}.

Assumptions (A.1), (A.2), and (A.3) are satisfied. The singularity in W_0 which gives the smallest λ_1 is the origin, and the average loss function in the neighborhood W_0 of the origin is equivalent to the polynomial

H_0(a, b, c, d) = (ab + cd)^2 + (ab^3 + cd^3)^2

(see [9]). Using blowing-ups, we find a map g : (x, y, z, w) ↦ (a, b, c, d),

a = x,   b = y^3 w - yzw,   c = zwx,   d = y,

by which the singularity at the origin is resolved:

J(z) = ∫_{W_0} H_0(a, b, c, d)^z φ(a, b, c, d) da db dc dd
     = ∫ { x^2 y^6 w^2 [1 + (z + w^2 (y^2 - z)^3)^2] }^z |x y^3 w| φ(g(x, y, z, w)) dx dy dz dw,

which shows that λ_1 = 2/3 and m_1 = 1, resulting in F(n) = (2/3) log n + const. If the generalization error can be asymptotically expanded, then K(n) ≅ 2/(3n).
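Feeding this single chart to the `largest_pole` sketch above, with k = (2, 6, 0, 2) read off from H_0(g) and p = (1, 3, 0, 1) from the Jacobian factor |x y^3 w|, reproduces the values just stated:

```python
# k from H0(g) = x^2 y^6 w^2 [1 + ...]; p from |g'| = |x y^3 w|.
print(largest_pole([((2, 6, 0, 2), (1, 3, 0, 1))]))  # -> (Fraction(2, 3), 1)
```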
5 Conclusion
A mathematical foundation for non-identifiable learning machines is constructed based on algebraic analysis and algebraic geometry. We obtained both the rigorous asymptotic form of the stochastic complexity and an algorithm to calculate it.

Appendix
In the appendix, we show the inequality eq.(5).

Lemma 1. Assume conditions (A.1), (A.2) and (A.3). Then

E_{x^n} { sup_{w∈W} | (1/√n) Σ_{i=1}^n [h(X_i, w) - E_X h(X, w)] |^2 } < ∞.

This lemma is proven by using just the same method as [10]. In order to prove (5), we divide sup_{w∈W} in eq.(4) into sup_{H(w)≥ε} and sup_{H(w)<ε}. Finiteness of the first half is directly proven by Lemma 1. Let us prove that the second half is also finite. We can assume without loss of generality that w is in a neighborhood of w_0 ∈ W_0, because W can be covered by a finite union of neighborhoods. In each neighborhood, by using the Taylor expansion of an analytic function, we can find functions {f_j(x, w)} and {g_j(w) = Π_i (w_i - w_{0i})^{α_i}} such that

h(x, w) = Σ_{j=1}^J g_j(w) f_j(x, w),   (9)

where the {f_j(x, w_0)} are linearly independent functions of x and g_j(w_0) = 0. Since g_j(w) f_j(x, w) is a part of the Taylor expansion around w_0, f_j(x, w) satisfies

E_{x^n} { sup_w | (1/√n) Σ_{i=1}^n (f_j(X_i, w) - E_X f_j(X, w)) |^2 } < ∞.   (10)

By using the definition Ĥ(w) ≡ |H(w, x^n) - H(w)|,

Ĥ(w)^2 = | (1/n) Σ_{i=1}^n Σ_{j=1}^J g_j(w) (f_j(X_i, w) - E_X f_j(X, w)) |^2
       ≤ Σ_{j=1}^J g_j(w)^2 · Σ_{j=1}^J { (1/n) Σ_{i=1}^n (f_j(X_i, w) - E_X f_j(X, w)) }^2,

where we used the Cauchy-Schwarz inequality. On the other hand, the inequality log x ≥ (1/2)(log x)^2 - x + 1 (x > 0) shows that

H(w) = ∫ q(x) log [q(x)/p(x, w)] dx ≥ (1/2) ∫ q(x) (log [q(x)/p(x, w)])^2 dx ≥ (a_0/2) Σ_{j=1}^J g_j(w)^2,

where a_0 > 0 is the smallest eigenvalue of the positive definite symmetric matrix E_X { f_j(X, w_0) f_k(X, w_0) }. Lastly, combining

A(x^n)^2 = sup_w n Ĥ(w)^2 / H(w) ≤ (2/a_0) sup_w Σ_{j=1}^J { (1/√n) Σ_{i=1}^n (f_j(X_i, w) - E_X f_j(X, w)) }^2

with eq.(10), we obtain eq.(5).

Acknowledgments
This research was partially supported by the Ministry of Education, Science, Sports and Culture in Japan, Grant-in-Aid for Scientific Research 09680362.

References
[1] Amari, S., Murata, N. (1993) Statistical theory of learning curves under entropic loss. Neural Computation, 5 (4), pp.140-153.
[2] Atiyah, M.F. (1970) Resolution of singularities and division of distributions. Comm. Pure and Appl. Math., 13, pp.145-150.
[3] Fukumizu, K. (1999) Generalization error of linear neural networks in unidentifiable cases. Lecture Notes in Computer Science, 1720, Springer, pp.51-62.
[4] Hagiwara, K., Toda, N., Usui, S. (1993) On the problem of applying AIC to determine the structure of a layered feed-forward neural network. Proc. of IJCNN, 3, pp.2263-2266.
[5] Hironaka, H. (1964) Resolution of singularities of an algebraic variety over a field of characteristic zero, I, II. Annals of Math., 79, pp.109-326.
[6] Kashiwara, M. (1976) B-functions and holonomic systems. Invent. Math., 38, pp.33-53.
[7] Oaku, T. (1997) An algorithm of computing b-functions. Duke Math. J., 87, pp.115-132.
[8] Sato, M., Shintani, T. (1974) On zeta functions associated with prehomogeneous vector spaces. Annals of Math., 100, pp.131-170.
[9] Watanabe, S. (1998) On the generalization error by a layered statistical model with Bayesian estimation. IEICE Trans., J81-A, pp.1442-1452. English version: Elect. Comm. in Japan, to appear.
[10] Watanabe, S. (1999) Algebraic analysis for singular statistical estimation. Lecture Notes in Computer Science, 1720, Springer, pp.39-50.
SPEECH RECOGNITION: STATISTICAL AND NEURAL INFORMATION PROCESSING APPROACHES
John S. Bridle
Speech Research Unit and National Electronics Research Initiative in Pattern Recognition
Royal Signals and Radar Establishment, Malvern, UK

Automatic Speech Recognition (ASR) is an artificial perception problem: the input is raw, continuous patterns (no symbols!) and the desired output, which may be words, phonemes, meaning or text, is symbolic. The most successful approach to automatic speech recognition is based on stochastic models. A stochastic model is a theoretical system whose internal state and output undergo a series of transformations governed by probabilistic laws [1]. In the application to speech recognition the unknown patterns of sound are treated as if they were outputs of a stochastic system [18,2]. Information about the classes of patterns is encoded as the structure of these "laws" and the probabilities that govern their operation. The most popular type of SM for ASR is also known as a "hidden Markov model."

There are several reasons why the SM approach has been so successful for ASR. It can describe the shape of the spectrum, and has a principled way of describing temporal order, together with variability of both. It is compatible with the hierarchical nature of speech structure [20,18,4], there are powerful algorithms for decoding with respect to the model (recognition), and for adapting the model to fit significant amounts of example data (learning). Firm theoretical (mathematical) foundations enable extensions to be accommodated smoothly (e.g. [3]).

There are many deficiencies however. In a typical system the speech signal is first described as a sequence of acoustic vectors (spectrum cross-sections or equivalent) at a rate of, say, 100 per second. The pattern is assumed to consist of a sequence of segments corresponding to discrete states of the model. In each segment the acoustic vectors are drawn from a distribution characteristic of the state, but otherwise independent of one another and of the states before and after. In some systems there is a controlled relationship between states and the phonemes or phones of speech science, but most of the properties and notions which speech scientists assume are important are ignored.

Most SM approaches are also deficient at a pattern-recognition theory level: the parameters of the models are usually adjusted (using the Baum-Welch re-estimation method [5,2]) so as to maximise the likelihood of the data given the model. This is the right thing to do if the form of the model is actually appropriate for the data, but if not, the parameter-optimisation method needs to be concerned with discrimination between classes (phonemes, words, meanings, ...) [28,29,30].

A HMM recognition algorithm is designed to find the best explanation of the input in terms of the model. It tracks scores for all plausible current states of the generator and throws away explanations which lead to a current state for which there is a better explanation (Bellman's Dynamic Programming). It may also throw away explanations which lead to a current state much worse than the best current state (score pruning), producing a Beam Search method. (It is important to keep many hypotheses in hand, particularly when the current input is ambiguous.)
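The decoding loop just described fits in a few lines. The sketch below is a generic toy illustration (the data structures are my own, not from any particular recogniser): dynamic programming keeps only the best explanation per state, and score pruning drops hypotheses more than a fixed beam below the current best.

```python
import math

def beam_decode(obs, states, log_trans, log_emit, beam=10.0):
    """Viterbi-style decoding with score pruning. log_trans[s] maps each
    successor state to a log transition score; log_emit(s, x) returns the
    log score of observation x in state s."""
    hyps = {s: 0.0 for s in states}            # best log score ending in s
    for x in obs:
        new = {}
        for s, score in hyps.items():
            for s2, lt in log_trans[s].items():
                cand = score + lt + log_emit(s2, x)
                if cand > new.get(s2, -math.inf):
                    new[s2] = cand             # keep the best explanation
        best = max(new.values())
        hyps = {s: v for s, v in new.items() if v >= best - beam}  # prune
    return max(hyps, key=hyps.get)             # best final state
```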
Connectionist (or "neural network") approaches start with a strong pre-conception of the types of process to be used. They can claim some legitimacy by reference to new (or renewed) theories of cognitive processing. The actual mechanisms used are usually simpler than those of the SM methods, but the mathematical theory (of what can be learnt or computed, for instance) is more difficult, particularly for structures which have been proposed for dealing with temporal structure.

One of the dreams for connectionist approaches to speech is a network whose inputs accept the speech data as it arrives; it would have an internal state which contains all necessary information about the past input, and the output would be as accurate and early as it could be. The training of networks with their own dynamics is particularly difficult, especially when we are unable to specify what the internal state should be. Some are working on methods for training the fixed points of continuous-valued recurrent non-linear networks [15,16,27]. Prager [6] has attempted to train various types of network in a full state-feedback arrangement. Watrous [9] limits his recurrent connections to self-loops on hidden and output units, but even so the theory of such recursive non-linear filters is formidable.

At the other extreme are systems which treat a whole time-frequency-amplitude array (resulting from initial acoustic analysis) as the input to a network, and require a label as output. For example, the performance that Peeling et al. [7] report on multi-speaker small-vocabulary isolated word recognition tasks approaches that of the best HMM techniques available on the same data. Invariance to temporal position was trained into the network by presenting the patterns at random positions in a fixed time-window. Waibel et al. [8] use a powerful compromise arrangement which can be thought of either as the replication of smaller networks across the time-window (a time-spread network [19]) or as a single small network with internal delay lines (a Time-Delay Neural Network [8]). There are no recurrent links except for trivial ones at the output, so training (using Backpropagation) is no great problem. We may think of this as a finite-impulse-response non-linear filter. Reported results on consonant discrimination are encouraging, and better than those of a HMM system on the same data. The system is insensitive to position by virtue of its construction.

Kohonen has constructed and demonstrated large vocabulary isolated word [12] and unrestricted vocabulary continuous speech transcription [13] systems which are inspired by neural network ideas, but implemented as algorithms more suitable for current programmed digital signal processor and CPU chips. Kohonen's phonotopic map technique can be thought of as an unsupervised adaptive quantiser constrained to put its reference points in a non-linear low-dimensional sub-space. His learning vector quantiser technique used for initial labeling combines the advantages of the classic nearest-neighbor method and discriminant training.

Among other types of network which have been applied to speech we must mention an interesting class based not on correlations with weight vectors (dot-product) but on distances from reference points. Radial Basis Function theory [22] was developed for multi-dimensional interpolation, and was shown by Broomhead and Lowe [23] to be suitable for many of the jobs that feed-forward networks are used for. The advantage is that it is not difficult to find useful positions for the reference points which define the first, non-linear, transformation. If this is followed by a linear output transformation then the weights can be found by methods which are fast and straightforward. The reference points can be adapted using methods based on backpropagation. Related methods include potential functions [24], Kernel methods [25] and the modified Kanerva network [26].
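A minimal sketch of that two-stage construction (my own illustration; the Gaussian basis shape, the fixed width, and the pre-chosen centres are assumptions): fix the non-linear layer of distances to reference points, then solve the linear output weights in one least-squares step.

```python
import numpy as np

def rbf_fit(X, Y, centers, width):
    """Radial basis function network: Gaussian distances to fixed reference
    points followed by a linear output layer solved by least squares."""
    def phi(A):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))   # radial design matrix
    W, *_ = np.linalg.lstsq(phi(X), Y, rcond=None)
    return lambda A: phi(A) @ W                   # the trained network
```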
There is much to be gained from a careful comparison of the theory of stochastic model and neural network approaches to speech recognition. If a NN is to perform speech decoding in a way anything like a SM algorithm it will have a state which is not just one of the states of the hypothetical generative model; the state must include information about the distribution of possible generator states given the pattern so far, and the state transition function must update this distribution depending on the current speech input. It is not clear whether such an internal representation and behavior can be 'learned' from scratch by an otherwise unstructured recurrent network. Stochastic model based algorithms seem to have the edge at present for dealing with temporal sequences. Discrimination-based training inspired by NN techniques may make a significant difference in performance. It would seem that the area where NNs have most to offer is in finding non-linear transformations of the data which take us to a space (perhaps related to formant or articulatory parameters) where comparisons are more relevant to phonetic decisions than purely auditory ones (e.g., [17,10,11]). The resulting transformation could also be viewed as a set of 'feature detectors'. Or perhaps the NN should deliver posterior probabilities of the states of a SM directly [14].

The art of applying a stochastic model or neural network approach is to choose a class of models or networks which is realistic enough to be likely to be able to capture the distinctions (between speech sounds or words, for instance) and yet has a structure which makes it amenable to algorithms for building the detail of the models based on examples, and for interpreting particular unknown patterns. Future systems will need to exploit the regularities described by phonetics, to allow the construction of high-performance systems with large vocabularies, and their adaptation to the characteristics of each new user.

There is no doubt that the stochastic model based methods work best at present, but current systems are generally far inferior to humans even in situations where the usefulness of higher-level processing is minimal. I predict that the next generation of ASR systems will be based on a combination of connectionist and SM theory and techniques, with mainstream speech knowledge used in a rather soft way to decide the structure. It should not be long before the distinction I have been making will disappear [29].

[1] D. R. Cox and H. D. Miller, "The Theory of Stochastic Processes", Methuen, 1965, pp. 721-741.
[2] S. E. Levinson, L. R. Rabiner and M. M. Sondhi, "An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition", Bell Syst. Tech. J., vol. 62, no. 4, pp. 1035-1074, Apr. 1983.
[3] M. R. Russell and R. K. Moore, "Explicit modeling of state occupancy in hidden Markov models of automatic speech recognition", IEEE ICASSP-85.
[4] S. E. Levinson, "A unified theory of composite pattern analysis for automatic speech recognition", in F. Fallside and W. Woods (eds.), "Computer Speech Processing", Prentice-Hall, 1984.
[5] L. E. Baum, "An inequality and associated maximisation technique in statistical estimation of probabilistic functions of a Markov process", Inequalities, vol. 3, pp. 1-8, 1972.
Baum, "An inequality and associated maximisation technique in statistical estimation of probabilistic functions of a Markov process", Inequalities, vol. 3, pp. 1-8, 1972. [6] R. G. Prager et al., "Boltzmann machines for speech recognition", Computer Speech and Language, vol. 1., no. 1, 1986. [7] S. M. Peeling, R. K. moore and M. J. Tomlinson, "The multi-layer perceptron as a tool for speech pattern processing research", Proc. Inst. Acoustics Conf. on Speech and Hearing, Windermere, November 1986. [8] Waibel et al., ICASSP88, NIPS88 and ASSP forthcoming. [9] R. 1. Watrous, "Connectionist speech recognition using the Temporal Flow model", Proc. IEEE Workshop on Speech Recognition, Harriman NY, June 1988. [10] I. S. Howard and M. A. Huckvale, "Acoustic-phonetic attribute determination using multi-layer perceptrons", IEEE Colloquium Digest 1988/11. [11] M. A. Huckvale and I. S. Howard, "High performance phonetic feature analysis for automatic speech recognition", ICASSP89. [12J T. Kohonen et al., "On-line recognition of spoken words from a large vocabulary", Information Sciences 33, 3-30 (1984). 799 800 Bridle [13] T. Kohonen, "The 'Neural' phonetic typewriter", IEEE Computer, March 1988. [14] H. Bourlard and C. J. Wellekens, "Multilayer perceptrons and automatic speech recognition", IEEE First IntI. Conf. Neural Networks, San Diego, 1987. [15] R. Rohwer and S. Renals, "Training recurrent networks", Proc. N'Euro-88, Paris, June 1988. [16] L. Almeida, "A learning rule for asynchronous perceptrons with feedback in a combinatorial environment", Proc. IEEE IntI. Conf. Neural Networks, San Diego 1987. [17] A. R. Webb and D . Lowe, "Adaptive feed-forward layered networks as pattern classifiers: a theorem illuminating their success in discriminant analysis" , sub. to Neural Networks. [18] J. K. Baker, "The Dragon system: an overview", IEEE Trans. ASSP-23, no. 1, pp. 24-29, Feb. 1975. [19] J. S. Bridle and R. K. Moore, "Boltzmann machines for speech pattern processing", Proc. Inst. Acoust., November 1984, pp. 1-8. [20] B. H. Repp, "On levels of description in speech research", J. Acoust. Soc. Amer. vol. 69 p. 1462-1464, 1981. [21] R. A. Cole et aI, "Performing fine phonetic distinctions: templates vs. features" , in J. Perkell and D. H. Klatt (eds.), "Symposium on invariance and variability of speech processes", Hillsdale, NJ, Erlbaum 1984. [22] M. J. D. Powell, "Radial basis functions for multi-variate interpolation: a review", IMA Conf. on algorithms for the approximation offunctions and data, Shrivenham 1985. [23] D. Broomhead and D. Lowe, "Multi-variable interpolation and adaptive networks", RSRE memo 4148, Royal Signals and Radar Est., 1988. [24] M. A. Aizerman, E. M. Braverman and L. 1. Rozonoer, "On the method of potential functions", Automatika i Telemekhanika, vol. 26 no. 11, pp. 20862088, 1964. [25] Hand, "Kernel discriminant analysis", Research Studies Press, 1982. [26] R. W. Prager and F. Fallside, "Modified Kanerva model for automatic speech recognition", submitted to Cmputer Speech and Language. [27] F. J. Pineda, "Generalisation of back propagation to recurrent neural networks", Physical Review Letters 1987. [28] L. R. Bahl et aI., Proc. ICASSP88, pp. 493-496. Speech Recognition [29] H. Bourlard and C. J. Wellekens, "Links between Markov models and multilayer perceptrons", this volume. [30] L. Niles, H. Silverman, G. Tajchman, M. Bush, "How limited training data can allow a neural network to out-perform an 'optimal' classifier" , Proc. ICASSP89. 801
Low Power Wireless Communication via Reinforcement Learning
Timothy X Brown
Electrical and Computer Engineering
University of Colorado
Boulder, CO 80309-0530
timxb@colorado.edu

Abstract
This paper examines the application of reinforcement learning to a wireless communication problem. The problem requires that channel utility be maximized while simultaneously minimizing battery usage. We present a solution to this multi-criteria problem that is able to significantly reduce power consumption. The solution uses a variable discount factor to capture the effects of battery usage.

1 Introduction
Reinforcement learning (RL) has been applied to resource allocation problems in telecommunications, e.g., channel allocation in wireless systems, network routing, and admission control in telecommunication networks [1, 2, 8, 10]. These have demonstrated that reinforcement learning can find good policies that significantly increase the application reward within the dynamics of the telecommunication problems. However, a key issue is how to treat the commonly occurring multiple reward and constraint criteria in a consistent way.

This paper will focus on power management for wireless packet communication channels. These channels are unlike wireline channels in that channel quality is poor and varies over time, and often one side of the wireless link is a battery operated device such as a laptop computer. In this environment, power management decides when to transmit and receive so as to simultaneously maximize channel utility and battery life. A number of power management strategies have been developed for different aspects of battery operated computer systems such as the hard disk and CPU [4, 5]. Managing the channel is different in that some control actions, such as shutting off the wireless transmitter, make the state of the channel and the other side of the communication unobservable.

In this paper, we consider the problem of finding a power management policy that simultaneously maximizes the radio communication's earned revenue while minimizing battery usage. The problem is recast as a stochastic shortest path problem which in turn is mapped to a discounted infinite horizon problem with a variable discount factor. Results show significant reductions in power usage.

Figure 1: The five components of the radio communication system.

2 Problem Description
The problem is comprised of five components as shown in Figure 1: mobile application, mobile radio, wireless channel, base station radio, and base station application. The applications on each end generate packets that are sent via a radio across the channel to the radio and then the application on the other side. The application also defines the utility of a given end-to-end performance. The radios implement a simple acknowledgment/retransmit protocol for reliable transmission. The base station is fixed and has a reliable power supply and therefore is not power constrained. The mobile power is limited by a battery and it can choose to turn its radio off for periods of time to reduce power usage. Note that even with the radio off, the mobile system continues to draw power for other uses. The channel adds errors to the packets. The rate of errors depends on many factors such as the location of the mobile and base station, the intervening distance, and levels of interference. The problem requires models for each of these components. To be concrete, the specific models used in this paper are described in the following sections.
It should be emphasized that, in order to focus on the machine learning issues, simple models have been chosen. More sophisticated models can readily be included.

2.1 The Channel
The channel carries fixed-size packets in synchronous time slots. All packet rates are normalized by the channel rate so that the channel carries one packet per unit time in each direction. The forward and reverse channels are orthogonal and do not interfere. Wireless data channels typically have low error rates. Occasionally, due to interference or signal fading, the channel introduces many errors. This variation is possible even when the mobile and base station are stationary. The channel is modeled by a two-state Gilbert-Elliot model [3]. In this model, the channel is in either a "good" or a "bad" state with packet error probabilities p_g and p_b, where p_g < p_b. The channel is symmetric with the same loss rate in both directions. The channel stays in each state for a geometrically distributed holding time with mean holding times h_g and h_b time slots.

2.2 Mobile and Base Station Application
The traffic generated by the source is a bursty ON/OFF model that alternates between generating no packets and generating packets at rate r_ON. The holding times are geometrically distributed with mean holding times h_ON and h_OFF. The traffic in each direction is independent and identically distributed.

2.3 The Radios
The radios can transmit data from the application and send it on the channel, and simultaneously receive data from the other radio and pass it on to its application. The radios implement a simple packet protocol to ensure reliability. Packets from the sources are queued in the radio and sent one by one. Packets consist of a header and data. The header carries acknowledgements (ACKs) with the most recent packet received without error. The header contains a checksum so that errors in the payload can be detected. Errored packets cause the receiving radio to send a packet with a negative acknowledgment (NACK) to the other radio, instructing it to retransmit the packet sequence starting from the errored packet. The NACK is sent immediately, even if no data is waiting and the radio must send an empty packet. Only unerrored packets are sent on to the application. The header is assumed to always be received without error.^1

Since the mobile is constrained by power, the mobile is considered the master and the base station the slave. The base station is always on and ready to transmit or receive. The mobile can turn its radio off to conserve power. Every ON-OFF and OFF-ON transition generates a packet with a message in the header indicating the change of state to the base station. These message packets carry no data. The mobile expends power at three levels, P_OFF, P_ON, and P_TX, corresponding to the radio off, the receiver on but no packet transmitted, and the receiver on and a packet transmitted.

2.4 Reward Criteria
Reward is earned for packets passed in each direction. The amount depends on the application. In this paper we consider three types of applications: an e-mail application, a real-time application, and a web browsing application. In the e-mail application, a unit reward is given for every packet received by the application. In the real-time application a unit reward is given for every packet received by the application with delay less than d_max; the reward is zero otherwise. In the web browsing application, time is important but not critical. The value of a packet with delay d is (1 - 1/d_o)^d, where d_o is the desired time scale of the arrivals.

The specific parameters used in this experiment are given in Table 1. These were gathered as typical values from [7, 9]. It should be emphasized that this model is the simplest model that captures the essential characteristics of the problem. More realistic channels, protocols, applications, and rewards can readily be incorporated, but for this paper are left out for clarity.

^1 A packet error rate of 20% implies a bit error rate of less than 1%. Error correcting codes in the header can easily reduce this error rate to a low value. The main intent is to simplify the protocol for this paper so that time-outs and other mechanisms do not need to be considered.

Table 1: Application parameters.
  Parameter Name                  Symbol   Value
  Channel Error Rate, Good        p_g      0.01
  Channel Error Rate, Bad         p_b      0.20
  Channel Holding Time, Good      h_g      100
  Channel Holding Time, Bad       h_b      10
  Source On Rate                  r_ON     1.0
  Source Holding Time, On         h_ON     1
  Source Holding Time, Off        h_OFF    10
  Power, Radio Off                P_OFF    7 W
  Power, Radio On                 P_ON     8.5 W
  Power, Radio Transmitting       P_TX     10 W
  Real Time Max Delay             d_max    3
  Web Browsing Time Scale         d_o      3

Table 2: Components of the system state.
  Component      States
  Channel        {good, bad}
  Application    {ON, OFF}
  Mobile         {ON, OFF}
  Mobile         {list of waiting and unacknowledged packets and their current delay}
  Base Station   {list of waiting and unacknowledged packets and their current delay}

3 Markov Decision Processes
At any given time slot, t, the system is in a particular configuration, x, defined by the state of each of the components in Table 2. The system state is s = (x, t), where we include the time in order to facilitate accounting for the battery. The mobile can choose to toggle its radio between the ON and OFF state, and rewards are generated by successfully received packets. The task of the learner is to determine a radio ON/OFF policy that maximizes the total reward for packets received before the batteries run out.

The battery life is not a fixed time. First, it depends on usage. Second, for a given drain, the capacity depends on how long the battery was charged, how long it has sat since being charged, the age of the battery, etc. In short, the battery runs out at a random time. The system can be modeled as a stochastic shortest path problem whereby there exists a terminal state, s_0, that corresponds to the battery being empty, in which no more reward is possible and the system remains permanently at no cost.

3.1 Multi-criteria Objective
Formally, the goal is to learn a policy for each possible system state so as to maximize

J^π(s) = E { Σ_{t=0}^{T} c(t) | s, π },

where E{· | s, π} is the expectation over possible trajectories starting from state s using policy π, c(t) is the reward for packets received at time t, and T is the last time step before the batteries run out. Typically, T is very large and this inhibits fast learning. So, in order to promote faster learning, we convert this problem to a discounted problem that removes the variance caused by the random stopping times. At time t, given action a(t), while in state s(t), the terminal state is reached with probability p_{s(t)}(a(t)). Setting the value of the terminal state to 0, we can convert our new criterion to maximize:

J^π(s) = E { Σ_{t=0}^{∞} c(t) Π_{τ=0}^{t} (1 - p_{s(τ)}(a(τ))) | s, π },

where the product is the probability of reaching time t. In words, future rewards are discounted by 1 - p_s(a), and the discounting is larger for actions that drain the batteries faster. Thus a more power efficient strategy will have a discount factor closer to one, which correctly extends the effective horizon over which reward is captured.
Thus a more power-efficient strategy will have a discount factor closer to one, which correctly extends the effective horizon over which reward is captured.

3.2 Q-learning

RL methods solve MDP problems by learning good approximations to the optimal value function, J*, given by the solution to the Bellman optimality equation, which takes the following form:

  J*(s) = max_{a in A(s)} E_{s'}{ c(s, a, s') + (1 - p_s(a)) J*(s') },    (1)

where A(s) is the set of actions available in the current state s, c(s, a, s') is the effective immediate payoff, and E_{s'}{ . } is the expectation over possible next states s'. We learn an approximation to J* using Watkins' Q-learning algorithm. Bellman's equation can be rewritten in Q-factor form as

  J*(s) = max_{a in A(s)} Q*(s, a).    (2)

In every time step the following decision is made. The Q-value of turning on in the next state is compared to the Q-value of turning off in the next state. If turning on has higher value, the mobile turns on; else, the mobile turns off. Whatever our decision, we update our value function as follows: on a transition from state s to s' on action a,

  Q(s, a) <- (1 - gamma) Q(s, a) + gamma ( c(s, a, s') + (1 - p_s(a)) max_{b in A(s')} Q(s', b) ),    (3)

where gamma is the learning rate. In order for Q-learning to perform well, all potentially important state-action pairs (s, a) must be explored. At each state, with probability 0.1 we apply a random action instead of the action recommended by the Q-values. However, we still use (3) to update Q-values using the action b recommended by the Q-values.

3.3 Structural Limits to the State Space

For theoretical reasons it is desirable to use a table-lookup representation. In practice, since the mobile radio decides using information available to it, this is impossible for the following reasons. The state of the channel is never known directly. The receiver only observes errored packets. It is possible to infer the state, but only when packets are actually received, and channel state changes introduce inference errors. Traditional packet applications rarely communicate state information to the transport layer. This state information could also be inferred. But, given the quickly changing application dynamics, the application state is often ignored. For the particular parameters in Table 1 (i.e., r_ON = 1.0), the application is on if and only if it generates a packet, so its state is completely specified by the packet arrivals and does not need to be inferred. The most serious deficiency of a complete state space representation is that when the mobile radio turns OFF, it has no knowledge of state changes in the base station. Even when it is ON, the protocol does not have provisions for transferring the state information directly. Again, this implies that state information must be inferred. One approach to these structural limits is to use a POMDP approach [6], which we leave to future work. In this paper, we simply learn deterministic policies on features that estimate the state.

3.4 Simplifying Assumptions

Beyond the structural problems of the previous section, we must treat the usual problem that the state space is huge. For instance, assuming even moderate maximum queue sizes and maximum wait times yields 10^20 states.
If one considers e-mail-like applications, where wait times of minutes (1000's of time-slot wait times) with many packets waiting are possible, the state space exceeds 10^100 states. Thus we seek a representation to reduce the size and complexity of the state space. This reduction is taken in two parts. The first is a feature representation that is possible given the structural limits of the previous section; the second is a function approximation based on these feature vectors. The feature vectors are listed in Table 3. These are chosen since they are measurable at the mobile radio. For function approximation, we use state aggregation, since it provably converges.

Table 3: Decision features measured by the mobile radio.

  Component      Feature
  Mobile Radio   is radio ON or OFF
  Mobile Radio   number of packets waiting at the mobile
  Mobile Radio   wait time of first packet waiting at the mobile
  Channel        number of errors received in last 4 time slots
  Base Radio     number of time slots since mobile was last ON

4 Simulation Results

This section describes simulation-based experiments on the mobile radio control problem. For this initial study, we simplified the problem by setting p_g = p_b = 0 (i.e., no channel errors). State aggregation was used with 4800 aggregate states. The battery termination probability, p_s(a), was simply P/1000, where P is the power appropriate for the state and action chosen, from Table 1. This was chosen to give an expected battery life much longer than the time scale of the traffic and channel processes.

Three policies were learned, one for each application reward criterion. The resulting policies are tested by simulating for 10^6 time slots. In each test run, an upper and lower bound on the energy usage is computed. The upper bound is the case of the mobile radio always on.[2] The lower bound is a policy that ignores the reward criteria but still delivers all the packets. In this policy, the radio is off and packets are accumulated until the latter portion of the test run, when they are sent in one large group. Policies are compared using the normalized power savings. This is a measure of how close the policy is to the lower bound, with 0% and 100% corresponding to the upper and lower bounds. The results are given in Table 4. The table also lists the average reward per packet received by the application. For the e-mail application, which has no constraints on the packets, the average reward is identically one.

[2] There exist policies that exceed this power, e.g., if they toggle ON and OFF often and generate many notification packets. But the always-on policy is the baseline that we are trying to improve upon.

Table 4: Simulation results.

  Application    Normalized Power Savings   Average Reward
  E-mail         81%                        1
  Real Time      49%                        1.00
  Web Browsing   48%                        0.46

5 Conclusion

This paper showed that reinforcement learning was able to learn a policy that significantly reduced the power consumption of a mobile radio while maintaining a high application utility. It used a novel variable discount factor that captured the impact of different actions on battery life. This was able to gain 50% to 80% of the possible power savings.

In the application, the paper used a simple model of the radio, channel, battery, etc. It also used simple state aggregation and ignored the partially observable aspects of the problem. Future work will address more accurate models, function approximation, and POMDP approaches.
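To make the update of Section 3.2 concrete, here is a minimal tabular Q-learning sketch with the paper's variable discount, in which the termination probability p_s(a) = P/1000 (Section 4) replaces a fixed discount factor. The toy state and the demo call are mine, not from the paper.

```python
import random
from collections import defaultdict

GAMMA_LR = 0.1        # learning rate (gamma in Equation 3); assumed value
EPSILON = 0.1         # exploration probability, as stated in Section 3.2
ACTIONS = ("ON", "OFF")

Q = defaultdict(float)  # Q[(state, action)], initialized to zero

def p_term(action):
    """Termination probability p_s(a) = P/1000, with P from Table 1.
    Transmitting (P_TX = 10 W) would drain faster still."""
    power = {"OFF": 7.0, "ON": 8.5}
    return power[action] / 1000.0

def choose_action(state):
    if random.random() < EPSILON:                  # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(s, a, reward, s_next):
    # Equation 3: actions that drain the battery faster are
    # discounted more heavily via (1 - p_s(a)).
    target = reward + (1.0 - p_term(a)) * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] = (1.0 - GAMMA_LR) * Q[(s, a)] + GAMMA_LR * target

# Example transition: one unit reward for a delivered packet.
s = ("ON", 2, 0)          # illustrative feature tuple (see Table 3)
a = choose_action(s)
update(s, a, 1.0, ("ON", 1, 0))
```

In practice the state here would be the aggregated feature vector of Table 3 rather than the full system state.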
Acknowledgment

This work was supported by CAREER Award NCR-9624791 and NSF Grant NCR-9725778.

References

[1] Boyan, J.A., Littman, M.L., "Packet routing in dynamically changing networks: a reinforcement learning approach," in Cowan, J.D., et al., eds., Advances in NIPS 6, Morgan Kaufmann, SF, 1994, pp. 671-678.
[2] Brown, T.X., Tong, H., Singh, S., "Optimizing admission control while ensuring quality of service in multimedia networks via reinforcement learning," in Advances in Neural Information Processing Systems 12, ed. M. Kearns, et al., MIT Press, 1999, pp. 982-988.
[3] Goldsmith, A.J., Varaiya, P.P., "Capacity, mutual information, and coding for finite state Markov channels," IEEE T. on Info. Thy., v. 42, pp. 868-886, May 1996.
[4] Govil, K., Chan, E., Wasserman, H., "Comparing algorithms for dynamic speed-setting of a low-power CPU," Proceedings of the First ACM Int. Conf. on Mobile Computing and Networking (MOBICOM), 1995.
[5] Helmbold, D., Long, D.D.E., Sherrod, B., "A dynamic disk spin-down technique for mobile computing," Proceedings of the Second ACM Int. Conf. on Mobile Computing and Networking (MOBICOM), 1996.
[6] Jaakkola, T., Singh, S., Jordan, M.I., "Reinforcement learning algorithm for partially observable Markov decision problems," in Advances in Neural Information Processing Systems 7, ed. G. Tesauro, et al., MIT Press, 1995, pp. 345-352.
[7] Kravets, R., Krishnan, P., "Application-driven power management for mobile communication," Wireless Networks, 1999.
[8] Marbach, P., Mihatsch, O., Schulte, M., Tsitsiklis, J.N., "Reinforcement learning for call admission control and routing in integrated service networks," in Jordan, M., et al., eds., Advances in NIPS 10, MIT Press, 1998.
[9] Rappaport, T.S., Wireless Communications: Principles and Practice, Prentice-Hall, Englewood Cliffs, NJ, 1996.
[10] Singh, S.P., Bertsekas, D.P., "Reinforcement learning for dynamic channel allocation in cellular telephone systems," in Advances in NIPS 9, ed. Mozer, M., et al., MIT Press, 1997, pp. 974-980.
Robust Full Bayesian Methods for Neural Networks

Christophe Andrieu*
Cambridge University Engineering Department
Cambridge CB2 1PZ, England
ca226@eng.cam.ac.uk

João F.G. de Freitas
UC Berkeley Computer Science
387 Soda Hall, Berkeley, CA 94720-1776, USA
jfgf@cs.berkeley.edu

Arnaud Doucet
Cambridge University Engineering Department
Cambridge CB2 1PZ, England
ad2@eng.cam.ac.uk

* Authorship based on alphabetical order.

Abstract

In this paper, we propose a full Bayesian model for neural networks. This model treats the model dimension (number of neurons), model parameters, regularisation parameters and noise parameters as random variables that need to be estimated. We then propose a reversible jump Markov chain Monte Carlo (MCMC) method to perform the necessary computations. We find that the results are not only better than the previously reported ones, but also appear to be robust with respect to the prior specification. Moreover, we present a geometric convergence theorem for the algorithm.

1 Introduction

In the early nineties, Buntine and Weigend (1991) and Mackay (1992) showed that a principled Bayesian learning approach to neural networks can lead to many improvements [1,2]. In particular, Mackay showed that by approximating the distributions of the weights with Gaussians and adopting smoothing priors, it is possible to obtain estimates of the weights and output variances and to automatically set the regularisation coefficients. Neal (1996) cast the net much further by introducing advanced Bayesian simulation methods, specifically the hybrid Monte Carlo method, into the analysis of neural networks [3]. Bayesian sequential Monte Carlo methods have also been shown to provide good training results, especially in time-varying scenarios [4]. More recently, Rios Insua and Müller (1998) and Holmes and Mallick (1998) have addressed the issue of selecting the number of hidden neurons with growing and pruning algorithms from a Bayesian perspective [5,6]. In particular, they apply the reversible jump Markov chain Monte Carlo (MCMC) algorithm of Green [7] to feed-forward sigmoidal networks and radial basis function (RBF) networks to obtain joint estimates of the number of neurons and weights.

We also apply the reversible jump MCMC simulation algorithm to RBF networks so as to compute the joint posterior distribution of the radial basis parameters and the number of basis functions. However, we advance this area of research in two important directions. Firstly, we propose a full hierarchical prior for RBF networks. That is, we adopt a full Bayesian model, which accounts for model order uncertainty and regularisation, and show that the results appear to be robust with respect to the prior specification. Secondly, we present a geometric convergence theorem for the algorithm. The complexity of the problem does not allow for a comprehensive discussion in this short paper. We have, therefore, focused on describing our objectives, the Bayesian model, convergence theorem and results. Readers are encouraged to consult our technical report for further results and implementation details [8].

2 Problem statement

Many physical processes may be described by the following nonlinear, multivariate input-output mapping:

  y_t = f(x_t) + n_t,    (1)

where x_t in R^d corresponds to a group of input variables, y_t in R^c to the target variables, n_t in R^c to an unknown noise process, and t = {1, 2, ...} is an index variable over the data.
In this context, the learning problem involves computing an approximation to the function f and estimating the characteristics of the noise process, given a set of N input-output observations: D = {x_1, x_2, ..., x_N, y_1, y_2, ..., y_N}. Typical examples include regression, where y_{1:N,1:c}[2] is continuous; classification, where y corresponds to a group of classes; and nonlinear dynamical system identification, where the inputs and targets correspond to several delayed versions of the signals under consideration.

We adopt the approximation scheme of Holmes and Mallick (1998), consisting of a mixture of k RBFs and a linear regression term. Yet, the work can be easily extended to other regression models. More precisely, our model M is:

  M_0:  y_t = b + beta' x_t + n_t,                                               k = 0
  M_k:  y_t = Sum_{j=1}^{k} a_j phi(||x_t - mu_j||) + b + beta' x_t + n_t,       k >= 1    (2)

where ||.|| denotes a distance metric (usually Euclidean or Mahalanobis), mu_j in R^d denotes the j-th RBF centre for a model with k RBFs, a_j in R^c the j-th RBF amplitude, and b in R^c and beta in R^{d x c} the linear regression parameters. The noise sequence n_t in R^c is assumed to be zero-mean white Gaussian. It is important to mention that, although we have not explicitly indicated the dependency of b, beta and n on k, these parameters are indeed affected by the value of k. For convenience, we express our approximation model in vector-matrix form:

  [ y_{1,1} ... y_{1,c} ]     [ 1  x_{1,1} ... x_{1,d}  phi(x_1,mu_1) ... phi(x_1,mu_k) ]   [ b_1      ...  b_c      ]
  [ y_{2,1} ... y_{2,c} ]  =  [ 1  x_{2,1} ... x_{2,d}  phi(x_2,mu_1) ... phi(x_2,mu_k) ]   [ beta_{1,1} ... beta_{1,c} ]
  [        ...          ]     [                        ...                              ]   [           ...             ]   +  n_{1:N}
  [ y_{N,1} ... y_{N,c} ]     [ 1  x_{N,1} ... x_{N,d}  phi(x_N,mu_1) ... phi(x_N,mu_k) ]   [ beta_{d,1} ... beta_{d,c} ]
                                                                                            [ a_{1,1}  ...  a_{1,c}  ]
                                                                                            [           ...          ]
                                                                                            [ a_{k,1}  ...  a_{k,c}  ]

[1] The software is available at http://www.cs.berkeley.edu/~jfgf.
[2] y_{1:N,1:c} is an N by c matrix, where N is the number of data and c the number of outputs. We adopt the notation y_{1:N,j} = (y_{1,j}, y_{2,j}, ..., y_{N,j})' to denote all the observations corresponding to the j-th output (j-th column of y). To simplify the notation, y_t is equivalent to y_{t,1:c}. That is, if one index does not appear, it is implied that we are referring to all of its possible values. Similarly, y is equivalent to y_{1:N,1:c}. We will favour the shorter notation and only adopt the longer notation to avoid ambiguities and emphasise certain dependencies.

In shorter notation, we have:

  y = D(mu_{1:k,1:d}, x_{1:N,1:d}) alpha_{1:1+d+k,1:c} + n,    (3)

where the noise process is assumed to be normally distributed: n_{t,i} ~ N(0, sigma_i^2) for i = 1, ..., c. We assume that the number k of RBFs and their parameters theta = {alpha_{1:m,1:c}, mu_{1:k,1:d}, sigma^2_{1:c}}, with m = 1 + d + k, are unknown. Given the data set {x, y}, our objective is to estimate k and theta in Theta_k.

3 Bayesian model and aims

We follow a Bayesian approach where the unknowns k and theta are regarded as being drawn from appropriate prior distributions. These priors reflect our degree of belief in the relevant values of these quantities [9]. Furthermore, we adopt a hierarchical prior structure that enables us to treat the priors' parameters (hyper-parameters) as random variables drawn from suitable distributions (hyper-priors). That is, instead of fixing the hyper-parameters arbitrarily, we acknowledge that there is an inherent uncertainty in what we think their values should be. By devising probabilistic models that deal with this uncertainty, we are able to implement estimation techniques that are robust to the specification of the hyper-priors.
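Before specifying the parameter spaces, here is a minimal sketch of the model of Section 2 in Python (the authors' own software is at the URL in the footnote; this is only an illustration, with a cubic basis phi(r) = r^3, the basis used later in Section 5, and made-up dimensions).

```python
import numpy as np

def design_matrix(x, mu):
    """D(mu_{1:k}, x_{1:N}): columns are [1, x_1..x_d, phi(||x - mu_j||)].
    Cubic basis phi(r) = r**3."""
    n, d = x.shape
    r = np.linalg.norm(x[:, None, :] - mu[None, :, :], axis=2)  # N x k distances
    return np.hstack([np.ones((n, 1)), x, r ** 3])              # N x (1 + d + k)

rng = np.random.default_rng(0)
N, d, c, k = 200, 2, 2, 5
x = rng.uniform(-1, 1, size=(N, d))
mu = rng.uniform(-1, 1, size=(k, d))        # RBF centres
alpha = rng.normal(size=(1 + d + k, c))     # stacked [b; beta; a], Equation 3
sigma = 0.05
y = design_matrix(x, mu) @ alpha + sigma * rng.normal(size=(N, c))
```

The matrix product reproduces Equation 3 directly: y = D(mu, x) alpha + n.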
The overall parameter space Theta x Psi can be written as a finite union of subspaces Theta x Psi = (Union_{k=0}^{k_max} {k} x Theta_k) x Psi, where Theta_0 = (R^{d+1})^c x (R^+)^c and Theta_k = (R^{d+1+k})^c x (R^+)^c x Omega_k for k in {1, ..., k_max}. That is, alpha in (R^{d+1+k})^c, sigma^2 in (R^+)^c and mu in Omega_k. The hyper-parameter space Psi = (R^+)^{c+1}, with elements psi = {Lambda, delta^2}, will be discussed at the end of this section.

The space of the radial basis centres Omega_k is defined as a compact set including the input data:

  Omega_k = { mu : mu_{1:k,i} in [min(x_{1:N,i}) - iota Xi_i, max(x_{1:N,i}) + iota Xi_i]^k for i = 1, ..., d, with mu_{j,i} != mu_{l,i} for j != l },

where Xi_i = ||max(x_{1:N,i}) - min(x_{1:N,i})|| denotes the Euclidean distance for the i-th dimension of the input, and iota is a user-specified parameter that we only need to consider if we wish to place basis functions outside the region where the input data lie. That is, we allow Omega_k to include the space of the input data and extend it by a factor proportional to the spread of the input data. The hyper-volume of this space is S_k = (Prod_{i=1}^{d} (1 + 2 iota) Xi_i)^k. The maximum number of basis functions is defined as k_max = N - (d + 1). We also define Omega = Union_{k=0}^{k_max} {k} x Omega_k, with Omega_0 the empty set.

Under the assumption of independent outputs given (k, theta), the likelihood p(y|k, theta, psi, x) for the approximation model described in the previous section is:

  Prod_{i=1}^{c} (2 pi sigma_i^2)^{-N/2} exp( - (1/(2 sigma_i^2)) (y_{1:N,i} - D(mu_{1:k}, x) alpha_{1:m,i})' (y_{1:N,i} - D(mu_{1:k}, x) alpha_{1:m,i}) ).

We assume the following structure for the prior distribution:

  p(k, theta, psi) = p(alpha_{1:m} | k, mu_{1:k}, sigma^2, delta^2) p(mu_{1:k} | k) p(k | Lambda) p(sigma^2) p(Lambda) p(delta^2),

where the scale parameters sigma_i^2 are assumed to be independent of the hyper-parameters (i.e., p(sigma^2 | Lambda, delta^2) = p(sigma^2)), independent of each other (p(sigma^2) = Prod_{i=1}^{c} p(sigma_i^2)) and distributed according to conjugate inverse-Gamma prior distributions: sigma_i^2 ~ IG(upsilon_0/2, gamma_0/2). When upsilon_0 = 0 and gamma_0 = 0, we obtain Jeffreys' uninformative prior [9]. For a given sigma^2, the prior distribution p(k, alpha_{1:m}, mu_{1:k} | sigma^2, Lambda, delta^2) is:

  [ Prod_{i=1}^{c} |2 pi sigma_i^2 delta_i^2 I_m|^{-1/2} exp( - (1/(2 sigma_i^2 delta_i^2)) alpha'_{1:m,i} alpha_{1:m,i} ) ] x [ I_Omega(k, mu_{1:k}) / S_k ] x [ (Lambda^k / k!) / Sum_{j=0}^{k_max} (Lambda^j / j!) ],

where I_m denotes the identity matrix of size m x m and I_Omega(k, mu_{1:k}) is the indicator function of the set Omega (1 if (k, mu_{1:k}) in Omega, 0 otherwise). The prior model order distribution p(k | Lambda) is a truncated Poisson distribution. Conditional upon k, the RBF centres are uniformly distributed. Finally, conditional upon (k, mu_{1:k}), the coefficients alpha_{1:m,i} are assumed to be zero-mean Gaussian with variance sigma_i^2 delta_i^2. The hyper-parameters delta^2 in (R^+)^c and Lambda in R^+ can respectively be interpreted as the expected signal-to-noise ratios and the expected number of radial basis functions. We assume that they are independent of each other, i.e., p(Lambda, delta^2) = p(Lambda) p(delta^2). Moreover, p(delta^2) = Prod_{i=1}^{c} p(delta_i^2). As delta^2 is a scale parameter, we ascribe a vague conjugate prior density to it: delta_i^2 ~ IG(alpha_{delta^2}, beta_{delta^2}) for i = 1, ..., c, with alpha_{delta^2} = 2 and beta_{delta^2} > 0. The variance of this hyper-prior with alpha_{delta^2} = 2 is infinite. We apply the same method to Lambda by setting an uninformative conjugate prior [9]: Lambda ~ Ga(1/2 + epsilon_1, epsilon_2) (epsilon_i << 1, i = 1, 2).

3.1 Estimation and inference aims

The Bayesian inference of k, theta and psi is based on the joint posterior distribution p(k, theta, psi | x, y) obtained from Bayes' theorem. Our aim is to estimate this joint distribution from which, by standard probability marginalisation and transformation techniques, one can "theoretically" obtain all posterior features of interest.
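A sketch of ancestral sampling from this hierarchical prior (Python; the hyper-parameter values are the vague settings quoted above, Jeffreys' improper prior on sigma^2 is replaced by a proper stand-in, and the centre space Omega_k is simplified to a fixed box).

```python
import numpy as np

rng = np.random.default_rng(1)
d, c, N = 2, 2, 200
k_max = N - (d + 1)
eps1 = eps2 = 1e-4           # epsilon_1, epsilon_2 in p(Lambda); assumed
a_d2, b_d2 = 2.0, 10.0       # alpha_{delta^2} = 2, beta_{delta^2} > 0

Lam = rng.gamma(0.5 + eps1, 1.0 / eps2)           # Lambda ~ Ga(1/2+e1, e2)
delta2 = 1.0 / rng.gamma(a_d2, 1.0 / b_d2, c)     # delta_i^2 ~ IG(a_d2, b_d2)
sigma2 = 1.0 / rng.gamma(1.0, 1.0, c)  # proper stand-in for Jeffreys' prior

# k ~ truncated Poisson: p(k) proportional to Lambda^k / k!, k <= k_max.
log_w = np.array([k * np.log(Lam) - sum(np.log(j) for j in range(1, k + 1))
                  for k in range(k_max + 1)])
w = np.exp(log_w - log_w.max())
w /= w.sum()
k = int(rng.choice(k_max + 1, p=w))

mu = rng.uniform(-1, 1, size=(k, d))   # centres: box stand-in for Omega_k
m = 1 + d + k
# alpha_{1:m,i} ~ N(0, sigma_i^2 delta_i^2 I_m), one column per output.
alpha = np.column_stack([rng.normal(0.0, np.sqrt(sigma2[i] * delta2[i]), m)
                         for i in range(c)])
```

With epsilon_i this small, draws of Lambda are very diffuse, which is exactly the intended vagueness of the hyper-prior.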
We propose here to use the reversible jump MCMC method to perform the necessary computations; see [8] for details. MCMC techniques were introduced in the mid 1950's in statistical physics and started appearing in the fields of applied statistics, signal processing and neural networks in the 1980's and 1990's [3,5,6,10,11]. The key idea is to build an ergodic Markov chain (k^(i), theta^(i), psi^(i))_{i in N} whose equilibrium distribution is the desired posterior distribution. Under weak additional assumptions, the P >> 1 samples generated by the Markov chain are asymptotically distributed according to the posterior distribution and thus allow easy evaluation of all posterior features of interest. For example:

  p(k = j | x, y) ~ (1/P) Sum_{i=1}^{P} I_{j}(k^(i))   and   E(alpha | k = j, x, y) ~ Sum_{i=1}^{P} alpha^(i) I_{j}(k^(i)) / Sum_{i=1}^{P} I_{j}(k^(i)).    (4)

In addition, we can obtain predictions, such as:

  E(y_{N+1} | x_{1:N+1}, y_{1:N}) ~ (1/P) Sum_{i=1}^{P} D(mu^(i)_{1:k}, x_{N+1}) alpha^(i)_{1:m}.    (5)

3.2 Integration of the nuisance parameters

According to Bayes' theorem, we can obtain the posterior distribution as follows:

  p(k, theta, psi | x, y) proportional to p(y | k, theta, psi, x) p(k, theta, psi).

In our case, we can integrate with respect to alpha_{1:m} (Gaussian distribution) and with respect to sigma^2 (inverse Gamma distribution) to obtain the following expression for the posterior:

  p(k, mu_{1:k}, Lambda, delta^2 | x, y) proportional to
    [ Prod_{i=1}^{c} (delta_i^2)^{-m/2} |M_{i,k}|^{1/2} (gamma_0 + y'_{1:N,i} P_{i,k} y_{1:N,i})^{-(N + upsilon_0)/2} ]
    x [ I_Omega(k, mu_{1:k}) / S_k ] x [ (Lambda^k / k!) / Sum_{j=0}^{k_max} (Lambda^j / j!) ]
    x [ Prod_{i=1}^{c} (delta_i^2)^{-(alpha_{delta^2} + 1)} exp( - beta_{delta^2} / delta_i^2 ) ] x [ Lambda^{epsilon_1 - 1/2} exp( - epsilon_2 Lambda ) ].    (6)

It is worth noticing that the posterior distribution is highly non-linear in the RBF centres mu_{1:k} and that an expression for p(k | x, y) cannot be obtained in closed form.

4 Geometric convergence theorem

It is easy to prove that the reversible jump MCMC algorithm applied to our model converges, that is, that the Markov chain (k^(i), mu^(i)_{1:k}, Lambda^(i), delta^2(i))_{i in N} is ergodic. We present here a stronger result, namely that the chain converges to the required posterior distribution at a geometric rate:

Theorem 1. Let (k^(i), mu^(i)_{1:k}, Lambda^(i), delta^2(i))_{i in N} be the Markov chain whose transition kernel has been described in Section 3. This Markov chain converges to the probability distribution p(k, mu_{1:k}, Lambda, delta^2 | x, y). Furthermore, this convergence occurs at a geometric rate; that is, for almost every initial point (k^(0), mu^(0)_{1:k}, Lambda^(0), delta^2(0)) in Omega x Psi there exist a function C_0 > 0 of the initial state and a constant rho in [0, 1) such that

  || p^(i)(k, mu_{1:k}, Lambda, delta^2) - p(k, mu_{1:k}, Lambda, delta^2 | x, y) ||_TV <= C_0 rho^floor(i / k_max),    (7)

where p^(i)(k, mu_{1:k}, Lambda, delta^2) is the distribution of (k^(i), mu^(i)_{1:k}, Lambda^(i), delta^2(i)) and ||.||_TV is the total variation norm [11]. Proof: see [8].

Corollary 1. If for each iteration i one samples the nuisance parameters (alpha_{1:m}, sigma^2), then the distribution of the series (k^(i), alpha^(i)_{1:m}, mu^(i)_{1:k}, sigma^2(i), Lambda^(i), delta^2(i))_{i in N} converges geometrically towards p(k, alpha_{1:m}, mu_{1:k}, sigma^2, Lambda, delta^2 | x, y) at the same rate rho.

5 Demonstration: robot arm data

This data set is often used as a benchmark to compare learning algorithms.[3] It involves implementing a model to map the joint angles of a robot arm (x_1, x_2) to the position of the end of the arm (y_1, y_2). The data were generated from the following model:

  y_1 = 2.0 cos(x_1) + 1.3 cos(x_1 + x_2) + e_1
  y_2 = 2.0 sin(x_1) + 1.3 sin(x_1 + x_2) + e_2,

where e_i ~ N(0, sigma^2), sigma = 0.05. We use the first 200 observations of the data set to train our models and the last 200 observations to test them.

[3] The robot arm data set can be found in David Mackay's home page: http://wol.ra.phy.cam.ac.uk/mackay/
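A short sketch generating robot-arm data exactly as specified above (Python; the input ranges are an assumption on my part, since the excerpt does not state how x_1 and x_2 are drawn, and the train/test split follows the text).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400
x1 = rng.uniform(-1.932, -0.453, n)  # assumed input ranges, not given above
x2 = rng.uniform(0.534, 3.142, n)
y1 = 2.0 * np.cos(x1) + 1.3 * np.cos(x1 + x2) + rng.normal(0, 0.05, n)
y2 = 2.0 * np.sin(x1) + 1.3 * np.sin(x1 + x2) + rng.normal(0, 0.05, n)

X, Y = np.column_stack([x1, x2]), np.column_stack([y1, y2])
X_train, Y_train = X[:200], Y[:200]  # first 200 observations for training
X_test, Y_test = X[200:], Y[200:]    # last 200 observations for testing
```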
In the simulations, we chose to use cubic basis functions. Figure 1 shows the 3D plots of the training data and the contours of the training and test data. The contour plots also include the typical approximations that were obtained using the algorithm. We chose uninformative priors for all the parameters and hyper-parameters (Table 1). To demonstrate the robustness of our algorithm, we chose different values for beta_{delta^2} (the only critical hyper-parameter, as it quantifies the mean of the spread delta of alpha_k). The obtained mean square errors and the probabilities for delta_1, delta_2, sigma^2_{1,k}, sigma^2_{2,k} and k, shown in Figure 2, clearly indicate that our algorithm is robust with respect to prior specification. Our mean square errors are of the same magnitude as the ones reported by other researchers [2,3,5,6], and slightly better (not by more than 10%). Moreover, our algorithm leads to more parsimonious models than the ones previously reported.

[Figure 1 appears here.] Figure 1: The top plots show the training data surfaces corresponding to each coordinate of the robot arm's position. The middle and bottom plots show the training and validation data [- -] and the respective RBF network mappings [-].

Table 1: Simulation parameters and mean square test errors.

  alpha_{delta^2}   beta_{delta^2}   upsilon_0   gamma_0   epsilon_1   epsilon_2   MS error
  2                 0.1              0           0         0.0001      0.0001      0.00505
  2                 10               0           0         0.0001      0.0001      0.00503
  2                 100              0           0         0.0001      0.0001      0.00502

6 Conclusions

We presented a general methodology for estimating, jointly, the noise variance, parameters and number of parameters of an RBF model. In adopting a Bayesian model and the reversible jump MCMC algorithm to perform the necessary integrations, we demonstrated that the method is very accurate. Contrary to previously reported results, our experiments indicate that our model is robust with respect to the specification of the prior. In addition, we obtained more parsimonious RBF networks and better approximation errors than the ones previously reported in the literature.

There are many avenues for further research. These include estimating the type of basis functions, performing input variable selection, considering other noise models and extending the framework to sequential scenarios. A possible solution to the first problem can be formulated using the reversible jump MCMC framework. Variable selection schemes can also be implemented via the reversible jump MCMC algorithm. We are presently working on a sequential version of the algorithm that allows us to perform model selection in non-stationary environments.

[Figure 2 appears here.] Figure 2: Histograms of smoothness constraints (delta_1 and delta_2), noise variances (sigma^2_{1,k} and sigma^2_{2,k}) and model order (k) for the robot arm data using 3 different values for beta_{delta^2}. The plots confirm that the algorithm is robust to the setting of beta_{delta^2}.

References

[1] Buntine, W.L. & Weigend, A.S. (1991) Bayesian back-propagation. Complex Systems 5:603-643.
[2] Mackay, D.J.C. (1992) A practical Bayesian framework for backpropagation networks. Neural Computation 4:448-472.
[3] Neal, R.M. (1996) Bayesian Learning for Neural Networks. New York: Lecture Notes in Statistics No. 118, Springer-Verlag.
[4] de Freitas, J.F.G., Niranjan, M., Gee, A.H. & Doucet, A. (1999) Sequential Monte Carlo methods to train neural network models. To appear in Neural Computation.
[5] Rios Insua, D. & Müller, P. (1998) Feedforward neural networks for nonparametric regression. Technical report 98-02, Institute of Statistics and Decision Sciences, Duke University, http://www.stat.duke.edu.
[6] Holmes, C.C. & Mallick, B.K. (1998) Bayesian radial basis functions of variable dimension. Neural Computation 10:1217-1233.
[7] Green, P.J. (1995) Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82:711-732.
[8] Andrieu, C., de Freitas, J.F.G. & Doucet, A. (1999) Robust full Bayesian learning for neural networks. Technical report CUED/F-INFENG/TR 343, Cambridge University, http://svr-www.eng.cam.ac.uk/.
[9] Bernardo, J.M. & Smith, A.F.M. (1994) Bayesian Theory. Chichester: Wiley Series in Applied Probability and Statistics.
[10] Besag, J., Green, P.J., Higdon, D. & Mengersen, K. (1995) Bayesian computation and stochastic systems. Statistical Science 10:3-66.
[11] Tierney, L. (1994) Markov chains for exploring posterior distributions. The Annals of Statistics 22(4):1701-1762.
Managing Uncertainty in Cue Combination

Zhiyong Yang
Department of Neurobiology, Box 3209
Duke University Medical Center
Durham, NC 27710
zhyyang@duke.edu

Richard S. Zemel
Department of Psychology
University of Arizona
Tucson, AZ 85721
zemel@u.arizona.edu

Abstract

We develop a hierarchical generative model to study cue combination. The model maps a global shape parameter to local cue-specific parameters, which in turn generate an intensity image. Inferring shape from images is achieved by inverting this model. Inference produces a probability distribution at each level; using distributions rather than a single value of underlying variables at each stage preserves information about the validity of each local cue for the given image. This allows the model, unlike standard combination models, to adaptively weight each cue based on general cue reliability and specific image context. We describe the results of a cue combination psychophysics experiment we conducted that allows a direct comparison with the model. The model provides a good fit to our data and a natural account for some interesting aspects of cue combination.

Understanding cue combination is a fundamental step in developing computational models of visual perception, because many aspects of perception naturally involve multiple cues, such as binocular stereo, motion, texture, and shading. It is often formulated as a problem of inferring or estimating some relevant parameter, e.g., depth, shape, position, by combining estimates from individual cues. An important finding of psychophysical studies of cue combination is that cues vary in the degree to which they are used in different visual environments. Weights assigned to estimates derived from a particular cue seem to reflect its estimated reliability in the current scene and viewing conditions. For example, motion and stereo are weighted approximately equally at near distances, but motion is weighted more at far distances, presumably due to distance limits on binocular disparity.3 Experiments have also found these weightings sensitive to image manipulations: if a cue is weakened, such as by adding noise, then the uncontaminated cue is utilized more in making depth judgments.9 A recent study2 has shown that observers can adjust the weighting they assign to a cue based on its relative utility for a particular task.

From these and other experiments, we can identify two types of information that determine relative cue weightings: (1) cue reliability: its relative utility in the context of the task and general viewing conditions; and (2) region informativeness: cue information available locally in a given image. A central question in computational models of cue combination then concerns how these forms of uncertainty can be combined. We propose a hierarchical generative model. Generative models have a rich history in cue combination, as they underlie models of Bayesian perception that have been developed in this area.10,5 The novelty in the generative model proposed here lies in its hierarchical nature and use of distributions throughout, which allows for both context-dependent and image-specific uncertainty to be combined in a principled manner. Our aims in this paper are dual: to develop a combination model that incorporates cue reliability and region informativeness (estimated across and within images), and to use this model to account for data and provide predictions for psychophysical experiments.
Another motivation for the approach here stems from our recent probabilistic framework,11 which posits that every step of processing entails the representation of an entire probability distribution, rather than just a single value of the relevant underlying variable(s). Here we use separate local probability distributions for each cue, estimated directly from an image. Combination then entails transforming representations and integrating distributions across both space and cues, taking across- and within-image uncertainty into account.

1 IMAGE GENERATION

In this paper we study the case of combining shading and texture. Standard shape-from-shading models exclude texture,1,8 while standard shape-from-texture models exclude shading.7 Experimental results and computational arguments have supported a strong interaction between these cues,10 but no model accounting for this interaction has yet been worked out.

The shape used in our experiments is a simple surface:

  Z = B(1 - x^2), |x| <= 1, |y| <= 1,    (1)

where Z is the height from the xy plane. B is the only shape parameter.

Our image formation model is a hierarchical generative model (see Figure 1). The top layer contains the global parameter B. The second layer contains local shading and texture parameters S, T = {S_i, T_i}, where i indexes image regions. The generation of local cues from a global parameter is intended to allow local uncertainties to be introduced separately into the cues. This models specific conditions in realistic images, such as shading uncertainty due to shadows or specularities, and texture uncertainty when prior assumptions such as isotropy are violated.4 Here we introduce uncertainty by adding independent local noise to the underlying shape parameter; this manipulation is less realistic but easier to control.

       Global Shape (B)
        /             \
  Local Shading ({S})  Local Texture ({T})
        \             /
           Image (I)

Figure 1: Left: The generative model of image formation. Right: Two sample images generated by the image formation procedure. B = 1.4 in both. Left: sigma_s = 0.05, sigma_t = 0. Right: sigma_s = 0, sigma_t = 0.05.

The local cues are sampled from Gaussian distributions: p(S_i|B) = N(f(B), sigma_s); p(T_i|B) = N(g(B), sigma_t). f(B) and g(B) describe how the local cue parameters depend on the shape parameter B, while sigma_s and sigma_t represent the degree of noise in each cue. In this paper, to simplify the generation process, we set f(B) = g(B) = B.

From {S_i} and {T_i}, two surfaces are generated; these are essentially two separate noisy local versions of B. The intensity image combines these surfaces. A set of same-intensity texels sampled from a uniform distribution is mapped onto the texture surface and then projected onto the image plane under orthogonal projection. The intensities of surface pixels not contained within these texels are generated from the shading surface using Lambertian shading. Each image is composed of 10 x 10 non-overlapping regions and contains 400 x 400 pixels. Figure 1 shows two images generated by this procedure.

2 COMBINATION MODEL

We create a combination, or recognition, model by inverting the generative model of Figure 1 to infer the shape parameter B from the image. An important aspect of the combination model is the use of distributions to represent parameter estimates at each stage. This preserves uncertainty information at each level and allows it to play a role in subsequent inference. The overall goal of combination is to infer an estimate of B given some image I.
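Before inverting the model, here is a minimal sketch of the generative layer of Section 1 in Python (rendering texels and Lambertian shading into the 400 x 400 image is omitted; only the shape-to-local-cue sampling and the surface of Equation 1 are shown, and all names are mine).

```python
import numpy as np

rng = np.random.default_rng(0)
B = 1.4                        # global shape parameter
sigma_s, sigma_t = 0.05, 0.0   # local noise on shading / texture cues
n_regions = 10 * 10            # the image is divided into 10 x 10 regions

# Local cue parameters: S_i ~ N(f(B), sigma_s), T_i ~ N(g(B), sigma_t),
# with f(B) = g(B) = B in the generation process.
S = rng.normal(B, sigma_s, n_regions)
T = rng.normal(B, sigma_t, n_regions)

def surface_height(b, x):
    """The surface Z = B (1 - x^2), |x| <= 1 (Equation 1)."""
    return b * (1.0 - x ** 2)

x = np.linspace(-1, 1, 9)
print(surface_height(B, x))
```

Each of the two noisy local parameter sets would then drive its own rendered surface, as described in the text.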
We derive our main inference equation using a Bayesian integration over distributions:

  P(B|I) = Integral P(B|S, T) P(S, T|I) dS dT,    (2)

  P(S, T|I) ~ Prod_i P(S_i|I) P(T_i|I),    (3)

  P(B|S, T) = P(B) P(S, T|B) / Integral P(B) P(S, T|B) dB ~ Prod_i P(S_i|B) P(T_i|B).    (4)

To simplify the two components, we have assumed that the prior over B is uniform and that the S, T are conditionally independent given B, and given the image. This third assumption is dubious but is not essential in the model, as discussed below. We now consider these two components in turn.

2.1 Obtaining local cue-specific representations from an image

One component in the inference equation, P(S, T|I), describes local cue-dependent information in the particular image I. We first define intermediate representations S, T that are dependent on shading and texture cues, respectively. The shading representation is the curvature of a horizontal section: S = f(B) = 2B(1 + 4x^2 B^2)^{-3/2}. The texture representation is the cosine of the surface slant: T = g(B) = (1 + 4x^2 B^2)^{-1/2}. Note that these S, T variables do not match those used in the generative model; ideally we could have used these cue-dependent variables, but generating images from them proved difficult.

Some image pre-processing must take place in order to estimate values and uncertainties for these particular local variables. The approach we adopt involves a simple statistical matching procedure, similar to k-nearest neighbors, applied to local image patches. After applying Gaussian smoothing and band-pass filtering to the image, two representations of each patch are obtained using separate shading and texture filters. For shading, image patches are represented by forming a histogram of the image gradient; for texture, the patch is represented by the mean and standard deviation of the amplitude of Gabor filter responses at 4 scales and orientations. This representation of a shading patch is then compared to a database of similar patch representations. Entries in the shading database are formed by first selecting a particular value of B and sigma_s, generating an image patch, and applying the appropriate filters. Thus S = f(B) and the noise level sigma_s are known for each entry, allowing an estimate of these variables for the new patch to be formed as a linear combination of the entries with similar representations. An analogous procedure, utilizing a separate database, allows T and an uncertainty estimate to be derived for texture. Both databases have 60 different (B, sigma) pairs, and 10 samples of each pair.

Based on this procedure we obtain for each image patch mean values M_i^s, M_i^t and uncertainty values V_i^s, V_i^t for S_i, T_i. These determine P(I|S), P(I|T), which are approximated as Gaussians. Taking into account the Gaussian priors for S_i, T_i:

  P(S_i|I) ~ P(I|S_i) P(S_i) ~ exp( -(V_i^s/2)(S - M_i^s)^2 ) exp( -(V_0^s/2)(S - M_0^s)^2 ),    (5)

  P(T_i|I) ~ P(I|T_i) P(T_i) ~ exp( -(V_i^t/2)(T - M_i^t)^2 ) exp( -(V_0^t/2)(T - M_0^t)^2 ).    (6)

Note that the independence assumption of Equation 3 is not necessary, as the matching procedure could use a single database indexed by both the shading and texture representations of a patch.
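As a sketch of Equations 5 and 6 (Python; here the V's denote precisions, i.e., inverse variances, matching the exponents above, and the numbers are made up): the product of the matched-measurement Gaussian (M_i, V_i) and the prior Gaussian (M_0, V_0) is itself Gaussian, with precisions adding and means precision-weighted.

```python
def gaussian_product(m1, v1, m2, v2):
    """Multiply N(m1, 1/v1) by N(m2, 1/v2), with v's as precisions.
    Returns the (mean, precision) of the resulting Gaussian."""
    v = v1 + v2
    m = (v1 * m1 + v2 * m2) / v
    return m, v

# One image patch: database-matched estimate (M_i, V_i) and prior (M_0, V_0).
M_i, V_i = 1.35, 20.0   # from the matching step (illustrative numbers)
M_0, V_0 = 1.00, 2.0    # Gaussian prior on the shading variable S
mean_S, prec_S = gaussian_product(M_i, V_i, M_0, V_0)
print(mean_S, prec_S)
```

The same operation applied with (M_i^t, V_i^t) and (M_0^t, V_0^t) gives the local texture posterior of Equation 6.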
2.2 Transforming and combining cue-specific local representations

The other component of the inference equation describes the relationship between the intermediate, cue-specific representations S, T and the shape parameter B:

  P(S|B) ~ exp( -(V_b^s/2)(S - f(B))^2 );   P(T|B) ~ exp( -(V_b^t/2)(T - g(B))^2 ).    (7)

The two parameters V_b^s, V_b^t in this equation describe the uncertainty in the relationship between the intermediate parameters S, T and B; they are invariant across space. These two, along with the parameters of the priors, M_0^s, M_0^t, V_0^s, V_0^t, are the free parameters of this model. Note that this combination model neatly accounts for both types of cue validity we identified: the variance in P(S|B) describes the general uncertainty of a given cue, while the local variance in P(S_i|I) describes the image-specific uncertainty of the cue. Combining Equations 3-7 and completing the integral in Equation 2, we have:

  P(B|I) ~ exp[ -(1/2) Sum_i ( a_i f(B)^2 + b_i g(B)^2 - 2 c_i f(B) - 2 d_i g(B) ) ],    (8)

where the coefficients a_i, b_i, c_i and d_i collect the patch-wise means and precisions of the Gaussians in Equations 5-7. Thus our model infers from any image a mean U and variance Sigma^2 for B as nonlinear combinations of the cue estimates, taking into account the various forms of uncertainty.

3 A CUE COMBINATION PSYCHOPHYSICS EXPERIMENT

We have conducted psychophysical experiments using stimuli generated by the procedure described above. In each experimental trial, a stimulus image and four views of a mesh surface are displayed side-by-side on a computer screen. The subject's task is to manipulate the curvature of the mesh to match the stimulus. The final shape of the mesh surface describes the subject's estimate of the shape parameter B on that trial. The subject's variance is computed across repeated trials with an identical stimulus. In a given block of trials, the stimulus may contain only shading information (no texture elements), only texture information (uniform shading), or both. The local cue noise (sigma_s, sigma_t) is zero in some blocks, non-zero in others. The primary experimental findings (see Figure 2) are:

- Shape from shading alone produces underestimates of B. Shape from texture alone also leads to underestimation, but to a lesser degree.
- Shape from both cues leads to almost perfect estimation, with smaller variance than shape from either cue alone. Thus cue enhancement, i.e., more accurate and robust judgements for stimuli containing multiple cues than for individual cues, applies to this paradigm.
- The variance of a subject's estimation increases with B.
- Noise in either shading or texture systematically biases the estimation away from the true values: the greater the noise level, the greater the bias.
- Shape from both cues is more robust against noise than shape from either cue alone, providing evidence of another form of cue enhancement.

[Figure 2 appears here.] Figure 2: Means and standard errors for the shape matching experiment, for different values of B, under different stimulus conditions. TOP: No noise in local shape parameters. Left: shape from shading alone. Middle: shape from texture alone. Right: shape from shading and texture. BOTTOM: shape from shading and texture. Left: sigma_s = 0.05, sigma_t = 0. Right: sigma_s = 0, sigma_t = 0.05.

4 MODELING RESULTS

The model was trained using a subset of data from these experiments.
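Before turning to the quantitative fit, here is a sketch of the combined inference in Python (all numbers are made up): the unnormalized posterior of Equation 8 is evaluated on a grid of B, folding each patch's combined Gaussian from Equations 5-6 into P(S|B) and P(T|B) from Equation 7. Integrating a Gaussian in S against a Gaussian in f(B) gives a Gaussian in f(B) with the harmonic-mean precision; this is one consistent way to complete the integral, not necessarily the authors' exact parameterization, and f, g are evaluated at a nominal patch location x = 0.5.

```python
import numpy as np

def f(B, x=0.5):  # shading variable: curvature of a horizontal section
    return 2 * B * (1 + 4 * x**2 * B**2) ** -1.5

def g(B, x=0.5):  # texture variable: cosine of the surface slant
    return (1 + 4 * x**2 * B**2) ** -0.5

def log_patch_term(vb, vi, mi, B, cue):
    """Integrate the patch Gaussian (mi, vi) against P(cue|B) with
    precision vb: a Gaussian in f(B) or g(B) with precision vb*vi/(vb+vi)."""
    w = vb * vi / (vb + vi)
    val = f(B) if cue == "s" else g(B)
    return -0.5 * w * (val - mi) ** 2

# Per-patch combined (mean, precision) pairs for shading and texture.
patches = [((0.90, 15.0), (0.55, 25.0)), ((1.00, 5.0), (0.50, 30.0))]
Vb_s, Vb_t = 10.0, 40.0  # cue-level precisions of Equation 7 (assumed)

grid = np.linspace(0.5, 2.5, 401)
logp = np.zeros_like(grid)
for (ms, vs), (mt, vt) in patches:
    logp += log_patch_term(Vb_s, vs, ms, grid, "s")
    logp += log_patch_term(Vb_t, vt, mt, grid, "t")
p = np.exp(logp - logp.max())
p /= p.sum()
U = (grid * p).sum()                 # posterior mean for B
Var = ((grid - U) ** 2 * p).sum()    # posterior variance
print(U, Var)
```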
The error criterion was the mean relative error (MRE) between the model outputs (U, Sigma) and the experimental data (subject mean and variance on the same image). The six free parameters of the model were described as sums of third-order polynomials of the local S, T and the noise levels. Gradient descent was used to train the model. The model was trained and tested on three different subsets of the experimental data. When trained on data in which only B varied, the model output accurately predicts unseen experimental data of the same type. When the data varied in B and sigma_s or sigma_t, the model outputs agree very well with subject data (MRE ~ 5-8%). When trained on data where all three variables vary, the model fits the data reasonably well (MRE ~ 8-13%). For a model of the first type, Figure 3 compares model predictions to data from within the same set, while Table 1 shows model outputs and subject responses for test examples from outside the training class.

Table 1: Data versus model predictions on images outside the training class. The first column of means and variances is from the experimental data, the second from the model.

  B     sigma_s   sigma_t   data (U/Sigma)   model (U/Sigma)
  1.4   0.10      0         1.18/0.072       1.20/0.060
  1.6   0.10      0         1.34/0.075       1.35/0.063
  1.4   0.05      0         1.32/0.042       1.40/0.067
  1.6   0.05      0         1.52/0.049       1.46/0.069
  1.2   0         0.05      1.20/0.052       1.14/0.056
  1.4   0         0.05      1.36/0.062       1.30/0.063

[Figure 3 appears here.] Figure 3: Model performance on data in which sigma_s = 0, sigma_t = 0.10. Upper line: perfect estimation. Lower line: experimental data. Dashed line: model prediction.

The model accounts for some important aspects of cue combination. Trained model parameters reveal that the texture prior is considerably weaker than the shading prior, and that texture has a more reliable relationship with B. Consequently, at equal noise levels texture outweighs shading in the combination model. These factors account for the degree of underestimation found in each single-cue experiment, and for the greater accuracy (i.e., enhancement) with combined cues. Our studies also reveal a novel form of cue interaction: for some image patches, especially at high curvature and noise levels, shading information becomes harmful, i.e., curvature estimation becomes less reliable when shading information is taken into account. Note that this differs from cue veto, in that texture does not veto shading.

Finally, the primary contribution of our model lies in its ability to predict the effect of continuous within-image variation in cue reliability on combination. Figure 4 shows how the estimation becomes more accurate and less variable with increasing certainty in shading information. Standard cue combination models cannot produce similar behavior, as they do not estimate within-image cue reliabilities.

[Figure 4 appears here, with curves for B = 1.4, 1.6 and 1.8.] Figure 4: Mean (left) and variance (right) of model output as a function of the local shading certainty V^s, for different values of B. Here sigma_s = 0.15, sigma_t = 0, all model parameters held constant.

5 CONCLUSION

We have proposed a hierarchical generative model to study cue combination.
Inferring parameters from images is achieved by inverting this model. Inference produces probability distributions at each level: a set of local distributions, separately representing each cue, are combined to form a distribution over a relevant scene variable. The model naturally handles variations in cue reliability, which depend both on spatially local image context and general cue characteristics. This form of representation, incorporating image-specific cue utilities, makes this model more powerful than standard combination models. The model provides a good fit to our psychophysics results on shading and texture combination and an account for several aspects of cue combination; it also provides predictions for how varying noise levels, both within and across images, will affect combination.

We are extending this work in a number of directions. We are conducting experiments to obtain local shape estimates from subjects. We are considering better ways to extract local representations and distributions over them directly from an image, and methods of handling natural outliers such as shadows and occlusion.

References

[1] Horn, B.K.P. (1977). Understanding image intensities. AI 8, 201-231.
[2] Jacobs, R.A. & Fine, I. (1999). Experience-dependent integration of texture and motion cues to depth. Vis. Res. 39, 4062-4075.
[3] Johnston, E.B., Cumming, B.G., & Landy, M.S. (1994). Integration of depth modules: stereopsis and texture. Vis. Res. 34, 2259-2275.
[4] Knill, D.C. (1998). Surface orientation from texture: ideal observers, generic observers and the information content of texture cues. Vis. Res. 38, 1655-1682.
[5] Knill, D.C., Kersten, D., & Mamassian, P. (1996). Implications of a Bayesian formulation of visual information for processing for psychophysics. In Perception as Bayesian Inference, D.C. Knill and W. Richards (Eds.), 239-286, Cambridge Univ. Press.
[6] Landy, M.S., Maloney, L.T., Johnston, E.B., & Young, M.J. (1995). Measurement and modeling of depth cue combination: in defense of weak fusion. Vis. Res. 35, 389-412.
[7] Malik, J. & Rosenholtz, R. (1997). Computing local surface orientation and shape from texture for curved surfaces. IJCV 23, 149-168.
[8] Pentland, A. (1984). Local shading analysis. IEEE PAMI 6, 170-187.
[9] Young, M.J., Landy, M.S., & Maloney, L.T. (1993). A perturbation analysis of depth perception from combinations of texture and motion cues. Vis. Res. 33, 2685-2696.
[10] Yuille, A. & Bülthoff, H.H. (1996). Bayesian decision theory and psychophysics. In Perception as Bayesian Inference, D.C. Knill and W. Richards (Eds.), 123-161, Cambridge Univ. Press.
[11] Zemel, R.S., Dayan, P., & Pouget, A. (1998). Probabilistic interpretation of population codes. Neural Computation 10, 403-430.
The Nonnegative Boltzmann Machine

Oliver B. Downs
Hopfield Group, Schultz Building, Princeton University, Princeton, NJ 08544
obdowns@princeton.edu

David J. C. MacKay
Cavendish Laboratory, Madingley Road, Cambridge, CB3 0HE, United Kingdom
mackay@mrao.cam.ac.uk

Daniel D. Lee
Bell Laboratories, Lucent Technologies, 700 Mountain Ave., Murray Hill, NJ 07974
ddlee@bell-labs.com

Abstract

The nonnegative Boltzmann machine (NNBM) is a recurrent neural network model that can describe multimodal nonnegative data. Application of maximum likelihood estimation to this model gives a learning rule that is analogous to the binary Boltzmann machine. We examine the utility of the mean field approximation for the NNBM, and describe how Monte Carlo sampling techniques can be used to learn its parameters. Reflective slice sampling is particularly well-suited for this distribution, and can efficiently be implemented to sample the distribution. We illustrate learning of the NNBM on a translationally invariant distribution, as well as on a generative model for images of human faces.

Introduction

The multivariate Gaussian is the most elementary distribution used to model generic data. It represents the maximum entropy distribution under the constraint that the mean and covariance matrix of the distribution match that of the data. For the case of binary data, the maximum entropy distribution that matches the first and second order statistics of the data is given by the Boltzmann machine [1]. The probability of a particular state in the Boltzmann machine is given by the exponential form:

P(\{s_i = \pm 1\}) = \frac{1}{Z} \exp\Big( -\frac{1}{2} \sum_{ij} s_i A_{ij} s_j + \sum_i b_i s_i \Big).   (1)

Interpreting Eq. 1 as a neural network, the parameters A_{ij} represent symmetric, recurrent weights between the different units in the network, and the b_i represent local biases. Unfortunately, these parameters are not simply related to the observed mean and covariance of the data as they are for the normal Gaussian. Instead, they need to be adapted using an iterative learning rule that involves difficult sampling from the binary distribution [2].

Figure 1: a) Probability density and b) shaded contour plot of a two-dimensional competitive NNBM distribution. The energy function E(x) for this distribution contains a saddle point and two local minima, which generates the observed multimodal distribution.

The Boltzmann machine can also be generalized to continuous and nonnegative variables. In this case, the maximum entropy distribution for nonnegative data with known first and second order statistics is described by a distribution previously called the "rectified Gaussian" distribution [3]:

p(x) = \frac{1}{Z} \exp[-E(x)] \ \text{if } x_i \ge 0 \ \forall i, \qquad p(x) = 0 \ \text{if any } x_i < 0,   (2)

where the energy function E(x) and normalization constant Z are:

E(x) = \frac{1}{2} x^T A x - b^T x,   (3)
Z = \int_{x \ge 0} dx \, \exp[-E(x)].   (4)

The properties of this nonnegative Boltzmann machine (NNBM) distribution differ quite substantially from those of the normal Gaussian. In particular, the presence of the nonnegativity constraints allows the distribution to have multiple modes. For example, Fig. 1 shows a two-dimensional NNBM distribution with two separate maxima located against the rectifying axes. Such a multimodal distribution would be poorly modelled by a single normal Gaussian. In this submission, we discuss how a multimodal NNBM distribution can be learned from nonnegative data.
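To make eqs. (2)-(4) concrete, here is a minimal sketch in Python that evaluates the energy and the unnormalized rectified-Gaussian density; the example A and b are invented for illustration (a two-dimensional competitive coupling in the spirit of Fig. 1), and the normalizer Z is left implicit.

```python
import numpy as np

def energy(x, A, b):
    """Quadratic energy E(x) = 0.5 x^T A x - b^T x of eq. (3)."""
    return 0.5 * x @ A @ x - b @ x

def unnormalized_density(x, A, b):
    """exp(-E(x)) on the nonnegative orthant, 0 elsewhere (eq. 2, up to 1/Z)."""
    if np.any(x < 0):
        return 0.0
    return float(np.exp(-energy(x, A, b)))

# Invented example: strong positive off-diagonal coupling makes the two
# components compete, so probability mass piles up against the axes.
A = np.array([[1.0, 1.5],
              [1.5, 1.0]])
b = np.array([1.0, 1.0])
print(unnormalized_density(np.array([1.0, 0.0]), A, b))   # near an axis mode
print(unnormalized_density(np.array([-0.1, 0.5]), A, b))  # outside orthant: 0.0
```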
We show the limitations of mean field approximations for this distribution, and illustrate how recent developments in efficient sampling techniques for continuous belief networks can be used to tune the weights of the network [4]. Specific examples of learning are demonstrated on a translationally invariant distribution, as well as on a generative model for face images.

Maximum Likelihood

The learning rule for the NNBM can be derived by maximizing the log likelihood of the observed data under Eq. 2. Given a set of nonnegative vectors \{x^\mu\}, where \mu = 1 \ldots M indexes the different examples, the log likelihood is:

L = \frac{1}{M} \sum_{\mu=1}^{M} \log P(x^\mu) = -\frac{1}{M} \sum_{\mu=1}^{M} E(x^\mu) - \log Z.   (5)

Taking the derivatives of Eq. 5 with respect to the parameters A and b gives:

\frac{\partial L}{\partial A_{ij}} = \frac{1}{2} \big( \langle x_i x_j \rangle_f - \langle x_i x_j \rangle_c \big),   (6)
\frac{\partial L}{\partial b_i} = \langle x_i \rangle_c - \langle x_i \rangle_f,   (7)

where the subscript "c" denotes a "clamped" average over the data, and the subscript "f" denotes a "free" average over the NNBM distribution:

\langle f(x) \rangle_c = \frac{1}{M} \sum_{\mu=1}^{M} f(x^\mu),   (8)
\langle f(x) \rangle_f = \int_{x \ge 0} dx \, P(x) f(x).   (9)

These derivatives are used to define a gradient ascent learning rule for the NNBM that is similar to that of the binary Boltzmann machine. The contrast between the clamped and free covariance matrix is used to update the interactions A, while the difference between the clamped and free means is used to update the local biases b.

Mean field approximation

The major difficulty with this learning algorithm lies in evaluating the averages \langle x_i x_j \rangle_f and \langle x_i \rangle_f. Because it is analytically intractable to calculate these free averages exactly, approximations are necessary for learning. Mean field approximations have previously been proposed as a deterministic alternative for learning in the binary Boltzmann machine, although there have been contrasting views on their validity [5,6]. Here, we investigate the utility of mean field theory for approximating the NNBM distribution. The mean field equations are derived by approximating the NNBM distribution in Eq. 2 with the factorized form:

Q(x) = \prod_i Q_{\tau_i}(x_i) = \prod_i \frac{1}{\gamma! \, \tau_i} \Big( \frac{x_i}{\tau_i} \Big)^{\gamma} e^{-x_i/\tau_i},   (10)

where the different marginal densities Q(x_i) are characterized by the parameters \tau_i with a fixed constant \gamma. The product of gamma distributions is the natural factorizable distribution for nonnegative random variables. The optimal mean field parameters \tau_i are determined by minimizing the Kullback-Leibler divergence between the NNBM distribution and the factorized distribution:

D_{KL}(Q \| P) = \int dx \, Q(x) \log\Big[ \frac{Q(x)}{P(x)} \Big] = \langle E(x) \rangle_{Q(x)} + \log Z - H(Q).   (11)

Finding the minimum of Eq. 11 by setting its derivatives with respect to the mean field parameters \tau_i to zero gives the simple mean field equations:

(\gamma+2)\, A_{ii}\, \tau_i = b_i - (\gamma+1) \sum_{j \neq i} A_{ij} \tau_j + \frac{1}{(\gamma+1)\,\tau_i}.   (12)

These equations can then be solved self-consistently for the \tau_i.

Figure 2: a) Slice sampling in one dimension. Given the current sample point x_i, a height y \in [0, aP(x_i)] is randomly chosen. This defines a slice \{x \in S \,|\, aP(x) \ge y\} in which a new x_{i+1} is chosen. b) For a multidimensional slice S, the new point x_{i+1} is chosen using ballistic dynamics with specular reflections off the interior boundaries of the slice.

The "free" statistics of the NNBM are then replaced by their statistics under the factorized distribution Q(x):

\langle x_i \rangle_f \approx (\gamma+1)\tau_i, \qquad \langle x_i x_j \rangle_f \approx \big[ (\gamma+1)^2 + (\gamma+1)\,\delta_{ij} \big] \tau_i \tau_j.   (13)

The fidelity of this approximation is determined by how well the factorized distribution Q(x) models the NNBM distribution.
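As a sanity check on eqs. (12)-(13), the following sketch iterates the self-consistency condition to a fixed point and then reads off the approximate free statistics. It assumes the reconstructed form of eq. (12) above; the damped update, the positivity clamp, and the initialization are implementation choices not specified in the text.

```python
import numpy as np

def mean_field_tau(A, b, gamma=1.0, iters=500, damping=0.5, floor=1e-6):
    """Damped fixed-point iteration for the mean field parameters of eq. (12)."""
    d = len(b)
    tau = np.full(d, 0.1)
    for _ in range(iters):
        for i in range(d):
            off = A[i] @ tau - A[i, i] * tau[i]        # sum_{j != i} A_ij tau_j
            rhs = b[i] - (gamma + 1) * off + 1.0 / ((gamma + 1) * tau[i])
            new = rhs / ((gamma + 2) * A[i, i])
            tau[i] = max(floor, (1 - damping) * tau[i] + damping * new)
    return tau

def free_stats(tau, gamma=1.0):
    """Approximate free mean and second moments under Q(x), eq. (13)."""
    mean = (gamma + 1) * tau
    second = (gamma + 1) ** 2 * np.outer(tau, tau) \
        + np.diag((gamma + 1) * tau ** 2)
    return mean, second
```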
Unfortunately, for distributions such as the one shown in Fig. 3, the mean field approximation is quite different from the true multimodal NNBM distribution. This suggests that the naive mean field approximation is inadequate for learning in the NNBM, and in fact attempts to use this approximation fail to learn the examples given in the following sections. However, the mean field approximation can still be used to initialize the parameters to reasonable values before using the sampling techniques that are described below.

Monte-Carlo sampling

A more direct approach to calculating the "free" averages in Eqs. 6-7 is to numerically approximate them. This can be accomplished by using Monte Carlo sampling to generate a representative set of points that sufficiently approximates the statistics of the continuous distribution. In particular, Markov chain Monte-Carlo methods employ an iterative stochastic dynamics whose equilibrium distribution converges to that of the desired distribution [4]. For the binary Boltzmann machine, such sampling dynamics involves random "spin flips" which change the value of a single binary component. Unfortunately, these single component dynamics are easily caught in local energy minima, and can converge very slowly for large systems. This makes sampling the binary distribution very difficult, and more specialized computational techniques such as simulated annealing, cluster updates, etc., have been developed to try to circumvent this problem.

For the NNBM, the use of continuous variables makes it possible to investigate different stochastic dynamics in order to more efficiently sample the distribution. We first experimented with Gibbs sampling with ordered overrelaxation [7], but found that the required inversion of the error function was too computationally expensive. Instead, the recently developed method of slice sampling [8] seems particularly well-suited for implementation in the NNBM. The basic idea of the slice sampling algorithm is shown in Fig. 2. Given a sample point x_i, a random y \in [0, aP(x_i)] is first uniformly chosen. Then a slice S is defined as the connected set of points \{x \in S \,|\, aP(x) \ge y\}, and the new point x_{i+1} \in S is chosen randomly from this slice. The distribution of x_n for large n can be shown to converge to the desired density P(x).

Figure 3: Contours of the two-dimensional competitive NNBM distribution overlaid by a) the \gamma = 1 mean field approximation and b) 500 reflected slice samples.

Now, for the NNBM, solving for the boundary points along a particular direction in a given slice is quite simple, since it only involves solving for the roots of a quadratic equation. In order to efficiently choose a new point within a particular slice, reflective "billiard ball" dynamics are used. A random initial velocity is chosen, and the new point is evolved by travelling a certain distance from the current point while specularly reflecting from the boundaries of the slice. Intuitively, the reversibility of these reflections allows the dynamics to satisfy detailed balance. In Fig. 3, the mean field approximation and reflective slice sampling are used to model the two-dimensional competitive NNBM distribution. The poor fit of the mean field approximation is apparent from the unimodality of the factorized density, while the sample points from the reflective slice sampling algorithm are more representative of the underlying NNBM distribution. For higher dimensional data, the mean field approximation becomes progressively worse.
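The one-dimensional step of Fig. 2a is easy to state in code. The sketch below uses Neal's stepping-out and shrinkage scheme to locate the slice for a generic unnormalized density; for the NNBM itself the paper instead finds the slice boundaries exactly from the roots of a quadratic and moves by reflective dynamics, which this simplified sketch does not implement.

```python
import numpy as np

def slice_sample_1d(p, x, width=1.0, rng=None):
    """One slice-sampling update for an unnormalized 1-D density p (Fig. 2a)."""
    rng = rng or np.random.default_rng()
    y = rng.uniform(0.0, p(x))              # uniform height under the curve
    left = x - rng.uniform(0.0, width)      # randomly placed initial bracket
    right = left + width
    while p(left) > y:                      # "step out" until both bracket ends
        left -= width                       # fall outside the slice {x: p(x) >= y}
    while p(right) > y:
        right += width
    while True:                             # "shrink": sample in the bracket,
        x_new = rng.uniform(left, right)    # narrowing it on each rejection
        if p(x_new) > y:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new
```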
It is therefore necessary to implement the numerical slice sampling algorithm in order to accurately approximate the NNBM distribution.

Translationally invariant model

Ben-Yishai et al. have proposed a model for orientation tuning in primary visual cortex that can be interpreted as a cooperative NNBM distribution [9]. In the absence of visual input, the firing rates of N cortical neurons are described as minimizing the energy function E(x) with parameters:

A_{ij} = \delta_{ij} + \frac{1}{N} - \frac{\epsilon}{N} \cos\Big( \frac{2\pi}{N}(i-j) \Big), \qquad b_i = 1.   (14)

This distribution was used to test the NNBM learning algorithm. First, a large set of N = 25 dimensional nonnegative training vectors was generated by sampling the distribution with \beta = 50 and \epsilon = 4. Using these samples as training data, the A and b parameters were learned from a unimodal initialization by evolving the training vectors using reflective slice sampling, and these evolved vectors were used to calculate the "free" averages in Eqs. 6-7. The A and b estimates were then updated, and this procedure was iterated until the evolved averages matched those of the training data. The learned A and b parameters were then found to almost exactly match the original form in Eq. 14. Some representative samples from the learned NNBM distribution are shown in Fig. 4.

Figure 4: Representative samples taken from a NNBM after training to learn a translationally invariant cooperative distribution with \beta = 50 and \epsilon = 4.

Figure 5: a) Morphing of a face image by successive sampling from the learned NNBM distribution. b) Samples generated from a normal Gaussian.

Generative model for faces

We have also used the NNBM to learn a generative model for images of human faces. The NNBM is used to model the correlations in the coefficients of the nonnegative matrix factorization (NMF) of the face images [10]. NMF reduces the dimensionality of nonnegative data by decomposing the face images into parts corresponding to eyes, noses, ears, etc. Since the different parts are coactivated in reconstructing a face, the activations of these parts contain significant correlations that need to be captured by a generative model. Here we briefly demonstrate how the NNBM is able to learn these correlations. Sampling from the NNBM stochastically generates coefficients which can graphically be displayed as face images. Fig. 5 shows some representative face images as the reflective slice sampling dynamics evolves the coefficients. Also displayed in the figure are the analogous images generated if a normal Gaussian is used to model the correlations instead. It is clear that the nonnegativity constraints and multimodal nature of the NNBM result in samples which are cleaner and more distinct as faces.
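The training procedure just described alternates sampling with the gradient steps of eqs. (6)-(7). A minimal sketch, with a generic `sampler` standing in for the reflective slice-sampling dynamics (the learning rate, iteration counts, and persistent-chain initialization are assumptions):

```python
import numpy as np

def train_nnbm(X, A, b, sampler, iters=100, lr=0.01):
    """Gradient-ascent NNBM learning. Clamped statistics come from the data
    X (an M x d array); free statistics come from `sampler(points, A, b)`,
    which should evolve a set of points toward P(x), e.g. by reflective
    slice sampling."""
    M, d = X.shape
    clamped_cov = X.T @ X / M                    # <x_i x_j>_c, eq. (8)
    clamped_mean = X.mean(axis=0)                # <x_i>_c
    points = X.copy()                            # chains initialized at the data
    for _ in range(iters):
        points = sampler(points, A, b)           # approximate free samples
        free_cov = points.T @ points / M
        free_mean = points.mean(axis=0)
        A += lr * 0.5 * (free_cov - clamped_cov)   # dL/dA, eq. (6)
        b += lr * (clamped_mean - free_mean)       # dL/db, eq. (7)
    return A, b
```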
Discussion

Here we have introduced the NNBM as a recurrent neural network model that is able to describe multimodal nonnegative data. Its application is made practical by the efficiency of the slice sampling Monte Carlo method. The learning algorithm incorporates numerical sampling from the NNBM distribution and is able to learn from observations of nonnegative data. We have demonstrated the application of NNBM learning to a cooperative, translationally invariant distribution, as well as to real data from images of human faces.

Extensions to the present work include incorporating hidden units into the recurrent network. The addition of hidden units implies modelling certain higher order statistics in the data, and requires calculating averages over these hidden units. We anticipate the marginal distribution over these units to be most commonly unimodal, and hence mean field theory should be valid for approximating these averages. Another possible extension involves generalizing the NNBM to model continuous data confined within a certain range, i.e. 0 \le x_i \le 1. In this situation, slice sampling techniques would also be used to efficiently generate representative samples. In any case, we hope that this work stimulates more research into using these types of recurrent neural networks to model complex, multimodal data.

Acknowledgements

The authors acknowledge useful discussions with John Hopfield, Sebastian Seung, Nicholas Socci, and Gayle Wittenberg, and are indebted to Haim Sompolinsky for pointing out the maximum entropy interpretation of the Boltzmann machine. This work was funded by Bell Laboratories, Lucent Technologies. O. B. Downs is grateful for the moral support, and open ears and minds of Beth Brittle, Gunther Lenz, and Sandra Scheitz.

References

[1] Hinton, G. E. & Sejnowski, T. J. (1983). Optimal perceptual inference. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, 448-453.
[2] Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science 9, 147-169.
[3] Socci, N. D., Lee, D. D., & Seung, H. S. (1998). The rectified Gaussian distribution. Advances in Neural Information Processing Systems 10, 350-356.
[4] MacKay, D. J. C. (1998). Introduction to Monte Carlo methods. In Learning in Graphical Models. Kluwer Academic Press, NATO Science Series, 175-204.
[5] Galland, C. C. (1993). The limitations of deterministic Boltzmann machine learning. Network 4, 355-380.
[6] Kappen, H. J. & Rodriguez, F. B. (1997). Mean field approach to learning in Boltzmann machines. Pattern Recognition in Practice, Amsterdam.
[7] Neal, R. M. (1995). Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation. Technical Report 9508, Dept. of Statistics, University of Toronto.
[8] Neal, R. M. (1997). Markov chain Monte Carlo methods based on "slicing" the density function. Technical Report 9722, Dept. of Statistics, University of Toronto.
[9] Ben-Yishai, R., Bar-Or, R. L., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proc. Nat. Acad. Sci. USA 92, 3844-3848.
[10] Lee, D. D. & Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788-791.
Kirchoff Law Markov Fields for Analog Circuit Design

Richard M. Golden*
RMG Consulting Inc., 2000 Fresno Road, Plano, Texas 75074
RMGCONSULT@AOL.COM, www.neural-network.com

Abstract

Three contributions to developing an algorithm for assisting engineers in designing analog circuits are provided in this paper. First, a method for representing highly nonlinear and non-continuous analog circuits using Kirchoff current law potential functions within the context of a Markov field is described. Second, a relatively efficient algorithm for optimizing the Markov field objective function is briefly described and the convergence proof is briefly sketched. And third, empirical results illustrating the strengths and limitations of the approach are provided within the context of a JFET transistor design problem. The proposed algorithm generated a set of circuit components for the JFET circuit model that accurately generated the desired characteristic curves.

1 Analog circuit design using Markov random fields

1.1 Markov random field models

A Markov random field (MRF) is a generalization of the concept of a Markov chain. In a Markov field one begins with a set of random variables and a neighborhood relation which is represented by a graph. Each random variable will be assumed in this paper to be a discrete random variable which takes on one of a finite number of possible values. Each node of the graph indexes a specific random variable. A link from the jth node to the ith node indicates that the conditional probability distribution of the ith random variable in the field is functionally dependent upon the jth random variable. That is, random variable j is a neighbor of random variable i. The only restriction upon the definition of a Markov field (i.e., the positivity condition) is that the probability of every realization of the field is strictly positive. The essential idea behind Markov field design is that one specifies a potential (energy) function for every clique in the neighborhood graph such that the subset of random variables associated with that clique obtain their optimal values when that clique's potential function obtains its minimal value (for reviews see [1]-[2]). Markov random field models provide a convenient mechanism for probabilistically representing and optimally combining local constraints.

1.2 Analog circuit design using SPICE

In some mixed signal ASIC (Application Specific Integrated Circuit) design problems, most of the circuit design specifications are well known but the introduction of a single constraint (e.g., an increase in substrate noise) could result in a major redesign of an entire circuit. The industry standard tool for aiding engineers in solving analog circuit design problems is SPICE, which is a software environment for simulation of large scale electronic circuits. SPICE does have special optimization options for fitting circuit parameters to desired input-output characteristics, but typically such constraints are too weak for SPICE to solve analog circuit design problems with large numbers of free parameters (see [3] for an introduction to SPICE). Another difficulty with using SPICE is that it does not provide a global confidence factor for indicating its confidence in a generated design, or local confidence factors for determining the locations of "weak points" in the automatically generated circuit design solution.

* Associate Professor at the University of Texas at Dallas (www.utdallas.edu/~golden)
1.3 Markov field approaches to analog circuit design

In this paper, an approach for solving real-world analog circuit design problems using an appropriately constructed Markov random field is proposed, which will be referred to as MRFSPICE. Not only are desired input-output characteristics directly incorporated into the construction of the potential functions for the Markov field, but additional constraints based upon Kirchoff's current law are directly incorporated into the field. This approach thus differs from the classic SPICE methodology because Kirchoff current law constraints are explicitly incorporated into an objective function which is minimized by the "optimal design". This approach also differs from previous Markov field approaches (i.e., the "Harmony" neural network model [4] and the "Brain-State-in-a-Box" neural network model [5]) designed to qualitatively model human understanding of electronic circuit behavior, since those approaches used pair-wise correlational (quadratic) potential functions as opposed to the highly nonlinear potential functions that will be used in the approach described in this paper.

1.4 Key contributions

This paper thus makes three important contributions to the application of Markov random fields to the analog circuit design problem. First, a method for representing highly nonlinear and non-continuous analog circuits using Kirchoff current law potential functions within the context of a Markov field is described. Second, a relatively efficient algorithm for optimizing the Markov field objective function is briefly described and the convergence proof is briefly sketched. And third, empirical results illustrating the strengths and limitations of the approach are provided within the context of a JFET transistor design problem.

2 Modeling assumptions and algorithms

2.1 Probabilistic modeling assumptions

A given circuit design problem consists of a number of design decision variables. Denote those design decision variables by the discrete random variables x_1, ..., x_d.
The strategy for doing this will be based upon constructing po with the property that if po(x) > po(y), then circuit design solution x exhibits the requisite operating characteristics with respect to a set of M "test circuits" more effectively than circuit design solution y. An optimal analog circuit design solution x* then may be defined as a global maximum of Po. The specific details of this strategy for constructing Po are now discussed by first carefully defining the concept of a "test circuit". Let V = {O, 1, 2, ... ,m} be a finite set of integers (i.e., the unique "terminals" in the test circuit) which index a set of m complex numbers, Vo, VI, V2, .?. ,Vm which will be referred to as voltages. The magnitude of Vk indicates the voltage magnitude while the angle of Vk indicates the voltage phase shift. By convention the ground voltage, Vo , is always assigned the value of O. Let d E V x V (i.e., an ordered pair of elements in V). A circuit component current source is defined with respect to V by a complex-valued function ia,b whose value is typically functionally dependent upon Va and Vb but may also be functionally dependent upon other voltages and circuit component current sources associated with V. For example, a "resistor" circuit component current source would be modeled by choosing ia ,b = (Vb - Va) / R where R is the resistance in ohms of some resistor, Vb is the voltage observed on one terminal of the resistor, and Va is the voltage observed on the other terminal of the resistor. The quantity ia ,b is the current flowing through the resistor from terminal a to terminal b. Similarly, a "capacitor" circuit component current source would be modeled by choosing ia,b = (Vb - Va) /[27rj f] where j = A and f is the frequency in Hz of the test circuit. A "frequency specific voltage controlled current source" circuit component current source may be modeled by making ia ,b functionally dependent upon some subset of voltages in the test circuit. See [6] for additional details regarding the use of complex arithmetic for analog circuit analysis and design. An important design constraint is that Kirchoff's current law should be satisfied at every voltage node. Kirchoff's current law states that the sum of the currents entering a voltage node must be equal to zero [6]. We will now show how this physical law can be directly embodied as a system of nonlinear constraints on the R. M. Golden 910 behavior of the MRF. We say that the kth voltage node in test circuit q is clamped if the voltage Vk is known. For example, node k in circuit q might be directly grounded, node k might be directly connected to a grounded voltage source, or the voltage at node k, Vk, might be a desired known target voltage. If voltage node k in test circuit q is damped, then Kirchoff's current law at voltage node k in circuit q is simply assumed to be satisfied which, in turn, implies that the voltage potential function ?lq,k = O. Now suppose that voltage node k in test circuit q is not clamped. This means that the voltage at node k must be estimated. If there are no controlled current sources in the test circuit (Le., only passive devices), then the values of the voltages at the undamped nodes in the circuit can be calculated by solving a system of linear equations where the current choice of circuit component values are treated as constants. 
In the more general case where controlled current sources exist in the test circuit, then an approximate iterative gradient descent algorithm (such as the algorithm used by SPICE) is used to obtain improved estimates of the voltages of the undamped nodes. The iterative algorithm is always run for a fixed number of iterations. Now the value of ?lq,k must be computed. The current entering node k via arc j in test circuit q is denoted by the two-dimensional real vector Ik,i whose first component is the real part of the complex current and whose second component is the imaginary part. The average current entering node k in test circuit q is given by the formula: nit -q ~ q Ik = (link) L- Ik,i' j=l pesign circuit components (e.g., resistors, capacitors, diodes, etc.) which minimize will satisfy Kirchoff's current law at node k in test circuit q. However, the measure is an not entirely adequate indicator of the degree to which Kirchoff's current law is satisfied since 1% may be small in magnitude not necessarily because Kirchoff's current law is satisfied but simply because all currents entering node k are small in magnitude. To compensate for this problem, a normalized current signal magnitude to current signal variability ratio is minimized at node k in test circuit q. This ratio decreases in magnitude if 1% has a magnitude which is small relative to the magnitude of individual currents entering node k in test circuit q. It It The voltage potential function, ?l q,k, for voltage node k in test circuit q is now formally defined as follows. Let Let AI, ... , Au be those eigenvalues of Qk,q whose values are strictly greater than some small positive number ?. Let ei be the eigenvector associated with eigenvalue Ai. Define u Qk,~ = L(l/Aj)ejeJ. j=l Thus, if Qk,q has all positive eigenvalues, then Qk,q is simply the matrix inverse of Qk,~. Using this notation, the voltage potential function for the undamped voltage Kirchoff Law Markov Fields for Analog Circuit Design 911 node k in test circuit q may be expressed by the formula: ;?,. 'i!q,1e = [-Iq]TQ-I-Iq Ie Ie' Now define the global probability or "global preference" of a particular design configuration by the formula: PG(x) = (l/Z)exp( -U(x? (1) where U = (liN) Lq Lk <I>q,k and where N is the total number of voltage nodes across all test circuits. The most preferred (Le., "most probable") design are the design circuit components that maximize PG. Note that probabilities have been assigned such that circuit configurations which are less consistent with Kirchoff's current law are considered "less probable" (Le., "less preferred"). Because the normalization constant Z in (I) is computationally intractable to compute, it is helpful to define the easily computable circuit confidence factor, CCF, given by the formula: CCF{x) = exp( -U{x? = ZPG{x) . Note that the global probability P is directly proportional to the CCF. Since U is always non-negative and complete satisfaction of Kirchoff's current laws corresponds to the case where U = 0, it follows that CCF(x) has a lower bound of 0 (indicating "no subjective confidence" in the design solution x) and an upper bound of 1 (indicating" absolute subjective confidence" in the design solution x). In addition, local conditional probabilities of the form can be computed using the formula: Such local conditional probabilities are helpful for explicitly computing the probability or "preference" for selecting one design circuit component value given a subset of other design component values have been accepted. 
Remember that probability (Le., "preference") is essentially a measure of the degree to which the chosen design components and pre-specified operating characteristic voltage versus frequency curves of the circuit satisfy Kirchoff's current laws. 2.2 MRFSPICE algorithm The MRFSPICE algorithm is a combination of the Metropolis and Besag's ICM (Iterated Conditional Modes) algorithms [1]-[2]. The stochastic Metropolis algorithm (with temperature parameter set equal to one) is used to sample from p(x). As each design solution is generated, the CCF for that design solution is computed and the design solution with the best CCF is kept as an initial design solution guess Xo. Next, the deterministic ICM algorithm is then initialized with Xo and the ICM algorithm is applied until an equilibrium point is reached. A simulated annealing method involving decreasing the temperature parameter according to a logarithmic cooling schedule in Step 1 through Step 5 could easily be used to guarantee convergence in distribution to a uniform distribution over the global maxima of PG (Le., convergence to an optimal solution) [1]-[2]. However, for the test problems considered thus far, equally effective results have been obtained by using the above fast heuristic algorithm which is guaranteed to converge to a local maximum as opposed to a global maximum. It is proposed that in situations where the convergence rate is slow or the local maximum generated by MRFSPICE is a 912 R. M. Golden poor design solution with low CCF, that appropriate local conditional probabilites be computed and provided as feedback to a human design engineer. The human design engineer can then make direct alterations to the sample space of PG (Le., the domain of CC F) in order to appropriately simply the search space. Finally, the ICM algorithm can be easily viewed as an artificial neural network algorithm and in fact is a generalization of the classic Hopfield (1982) model as noted in [1]. !!.TEST + "'"""----~ ~OQ1 !GTEST .sQ1 Figure 1: As external input voltage generator EGTEST and external supply voltage EDT EST are varied, current ffiTEST flowing through external resistor RTEST is measured. 3 JFET design problem In this design problem, specific combinations of free parameters for a macroequivalent JFET transistor model were selected on the basis of a given set of characteristic curves specifying how the drain to source current of the JFET varied as a function of the gate voltage and drain voltage at OH z and 1MH z. Specifically, a .JFET transis~ tor model ~;as simulated using the classic Shichman and Hodges (1968) large-signal n-channel .JFET model as described by Vladimirescu [3] (pp. 96-100). The circuit diagram of this transistor model is shown in Figure 1. The only components in the circuit diagram which are not part of the JFET transistor model are the external voltage generators EDTEST and EGTEST, and external resistor RTEST. The specific functions which describe how IDIQGDl, CDIQGD1, RDIQGD1, IDIQGSl, CDIQGS1, RDIQGSl, CGDQ1, and CGSQ1 change as a function of EGTEST and the current IRTEST (which Hows through RTEST) are too long and complex to be 913 Kirchoff Law Markov Fields for Analog Circuit Design presented here (for more details see [3] pp. 96-100). Five design decision variables were defined. The first design decision variable, XDIQGS1, specified a set of parameter values for the large signal gate to source diode model portion of the JFET model. There were 20 possible choices for the value of XDIQGS1. 
Similarly, the second design decision variable, XDIQGDl, had 20 possible values and specified a set of parameter values for the large signal gate to drain diode model portion of the JFET model. The third design decision variable was XQl which also had 20 possible values were each value specified a set of choices for JFET -type specific parameters. The fourth and fifth design decision variables were the resistors RSQl and RSDI each of which could take on one of 15 possible values. The results of the JFET design problem are shown in Table 1. The phase angle for IRTEST at 1M H z was specified to be approximately 10 degrees, while the observed phase angle for IRTEST ranged from 7 to 9 degrees. The computing time was approximately 2 - 4 hours using unoptimized prototype MATLAB code on a 200 MHZ Pentium Processor. The close agreement between the desired and actual results suggests further research in this area would be highly rewarding. Table 1: Evaluation of MRFSPICE-generated JFET design EGTEST EDTEST 0 0 0 -0.5 -0.5 -0.5 -1.0 -1.0 -1.0 1.5 2.0 3.0 1.5 2.0 3.0 1.5 2.0 3.0 IRTEST @ DC (rna) (desired/ actual) 1.47/1.50 1.96/1.99 2.94/2.99 1.47/1.50 1.96/2.00 2.95/2.99 1.48/1.50 1.97/2.00 2.9613.00 IRTEST @ IMHZ _~ma) (desired/ actual) 1.19 1.21 1.60 1.62 2.43 2.43 1.07/1.11 1.49/1.52 2.34/2.35 0.96/1.02 1.39/1.44 2.27/2.29 Acknowledgments This research was funded by Texas Instruments Inc. through the direct efforts of Kerry Hanson. Both Kerry Hanson and Ralph Golden provided numerous key insights and knowledge substantially improving this project's quality. References [1] Golden, R. M. (1996) Mathematical methods for neural network analysis and design. Cambridge: MIT Press. [2] Winkler, G. (1995) Image analysis, random fields, and dynamic Monte Carlo methods: A mathematical introduction. New York: Springer-Verlag. [3] Vladimirescu, A. (1994) The SPICE book. New York: Wiley. [4] Smolensky, P. (1986). Information processing in dynamical systems: Foundations of Harmony theory. In D. E. Rumelhart and J . L. McClelland (eds.), Parallel distributed processing. Volume 1: Foundations, pp. 194-281. Cambridge: MIT Press. [5] Anderson, J . A. (1995). An introduction to neural networks. Cambridge: MIT Press. [6] Skilling, H. (1959) Electrical engineering circuits. New York: Wiley.
The Infinite Gaussian Mixture Model

Carl Edward Rasmussen
Department of Mathematical Modelling, Technical University of Denmark
Building 321, DK-2800 Kongens Lyngby, Denmark
carl@imm.dtu.dk  http://bayes.imm.dtu.dk

Abstract

In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the "right" number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling.

1 Introduction

One of the major advantages in the Bayesian methodology is that "overfitting" is avoided; thus the difficult task of adjusting model complexity vanishes. For neural networks, this was demonstrated by Neal [1996], whose work on infinite networks led to the reinvention and popularisation of Gaussian Process models [Williams & Rasmussen, 1996]. In this paper a Markov Chain Monte Carlo (MCMC) implementation of a hierarchical infinite Gaussian mixture model is presented. Perhaps surprisingly, inference in such models is possible using finite amounts of computation. Similar models are known in statistics as Dirichlet Process mixture models and go back to Ferguson [1973] and Antoniak [1974]. Usually, expositions start from the Dirichlet process itself [West et al, 1994]; here we derive the model as the limiting case of the well-known finite mixtures. Bayesian methods for mixtures with an unknown (finite) number of components have been explored by Richardson & Green [1997], whose methods are not easily extended to multivariate observations.

2 Finite hierarchical mixture

The finite Gaussian mixture model with k components may be written as:

p(y \,|\, \mu_1, \ldots, \mu_k, s_1, \ldots, s_k, \pi_1, \ldots, \pi_k) = \sum_{j=1}^{k} \pi_j \, \mathcal{N}(\mu_j, s_j^{-1}),   (1)

where \mu_j are the means, s_j the precisions (inverse variances), \pi_j the mixing proportions (which must be positive and sum to one) and \mathcal{N} is a (normalised) Gaussian with specified mean and variance. For simplicity, the exposition will initially assume scalar observations, n of which comprise the training data y = \{y_1, \ldots, y_n\}. First we will consider these models for a fixed value of k, and later explore the properties in the limit where k \to \infty.
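For reference, eq. (1) is a one-line computation; a minimal sketch (Python, with invented example parameters):

```python
import numpy as np

def mixture_density(y, mu, s, pi):
    """Finite Gaussian mixture of eq. (1): sum_j pi_j N(y; mu_j, 1/s_j)."""
    var = 1.0 / np.asarray(s)                     # precisions -> variances
    comps = np.exp(-0.5 * (y - np.asarray(mu)) ** 2 / var) \
        / np.sqrt(2.0 * np.pi * var)
    return float(np.dot(pi, comps))

# Two components with precisions 1 and 4, mixing proportions 0.7/0.3.
print(mixture_density(0.3, mu=[0.0, 2.0], s=[1.0, 4.0], pi=[0.7, 0.3]))
```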
2.1 Component parameters The component means, f.1j, are given Gaussian priors: p(f.1jIA,r) ",N(A,r- 1), (2) whose mean, A, and precision, r, are hyperparameters common to all components. The hyperparameters themselves are given vague Normal and Gamma priors: p(A) ",N(f.1Y,Cl;), p(r) "'9(1 , Cl;2) ocr- 1 / 2 exp(-rCl;/2), (3) 0-; where f.1y and are the mean and variance of the observations 1. The shape parameter of the Gamma prior is set to unity, corresponding to a very broad (vague) distribution. The conditional posterior distributions for the means are obtained by multiplying the likelihood from eq. (1) conditioned on the indicators, by the prior, eq. (2): ilJ 1 = -n? "" ~ Yi, J i:Ci=j (4) where the occupation number, nj, is the number of observations belonging to class j, and '[jj is the mean of these observations. For the hyperparameters, eg. (2) plays the role of the likelihood which together with the priors from eq. (4) give conditional posteriors of standard form: p(AIf.11,? .. ,f.1k,r)", N( "k f.1YCly- 2 + r ~j=l f.1j 1 ) -2 ' -2 ' ClY + kr Cl y + kr k p(rlf.11' ... ,f.1k,A) "'9(k+ 1, [A(Cl; + (5) + L(f.1j _A)2)]-1). .1 J= The component precisions, S j, are given Gamma priors: p(Sjlj3,w) '" 9(j3,w- 1), (6) whose shape, j3, and mean, w- 1 , are hyperparameters common to all components, with priors of inverse Gamma and Gamma form: (7) 1 Strictly speaking, the priors ought not to depend on the observations. The current procedure is equivalent to nonnalising the observations and using unit priors. A wide variety of reasonable priors will lead to similar results. C. E. Rasmussen 556 The conditional posterior precisions are obtained by multiplying the likelihood from eq. (1) conditioned on the indicators, by the prior, eq . (6): p(sjlc, y, /Lj,,8, w) '" 9(,8 + nj, [,8: n' (w,8 + .L (Yi J - /Lj)2)] -1). (8) t :C;=J For the hyperparameters, eq. (6) plays the role of likelihood which together with the priors from eq. (7) give: The latter density is not of standard form, but it can be shown that p(log(,8) lSI, ... , Sk, w) is log-concave, so we may generate independent samples from the distribution for log(,8) using the Adaptive Rejection Sampling (ARS) technique [Gilks & Wild, 1992], and transform these to get values for ,8. The mixing proportions, 7rj, are given a symmetric Dirichlet (also known as multivariate beta) prior with concentration parameter a/ k: k P(""'" , ".1<? ~ Dirichlet(<>/ k, ... , <>/ k) = r~(;1). }1,,;/.-1, (10) where the mixing proportions must be positive and sum to one. Given the mixing proportions, the prior for the occupation numbers, n j, is multinomial and the joint distribution of the indicators becomes: n k P(Cl, ... ,ckl 7r l, ... , 7rk) = II nj = 7r;i, L 8Kronecker(Ci,j). (11) i=1 j=1 Using the standard Dirichlet integral, we may integrate out the mixing proportions and write the prior directly in terms of the indicators: P(Cl, ... ,ckl a ) = / P(Cl, . .. ,ckl 7r l, .. ' = r(a) / r(a/k)k , 7r k)p( 7r l, ... II 7rnj+a/k-ld7r ' = k j=1 J J ,7rk)d7rl???d7rk r(a) r(n + a) (12) II r(nj + a/k) k j=1 r(a/k) . In order to be able to use Gibbs sampling for the (discrete) indicators, Ci, we need the conditional prior for a single indicator given all the others; this is easily obtained from eq. (12) by keeping all but a single indicator fixed: p (Ci + a/k , = J'1 C-i, a ) = n-i,j n-l+a (13) where the subscript -i indicates all indexes except i and n-i,j is the number of observations, excluding Yi, that are associated with component j. 
The posteriors for the indicators are derived in the next section. Lastly, a vague prior of inverse Gamma shape is put on the concentration parameter α:

p(α^{-1}) ~ G(1, 1)  ⇒  p(α) ∝ α^{-3/2} exp(-1/(2α)).   (14)

The likelihood for α may be derived from eq. (12), which together with the prior from eq. (14) gives:

p(n_1, ..., n_k | α) = α^k Γ(α)/Γ(n + α),   p(α | k, n) ∝ α^{k-3/2} exp(-1/(2α)) Γ(α)/Γ(n + α).   (15)

Notice that the conditional posterior for α depends only on the number of observations, n, and the number of components, k, and not on how the observations are distributed among the components. The distribution p(log(α) | k, n) is log-concave, so we may efficiently generate independent samples from this distribution using ARS.

3 The infinite limit

So far, we have considered k to be a fixed finite quantity. In this section we will explore the limit k → ∞ and make the final derivations regarding the conditional posteriors for the indicators. For all the model variables except the indicators, the conditional posteriors for the infinite limit are obtained by substituting for k the number of classes that have data associated with them, k_rep, in the equations previously derived for the finite model. For the indicators, letting k → ∞ in eq. (13), the conditional prior reaches the following limits:

components where n_{-i,j} > 0:   p(c_i = j | c_{-i}, α) = n_{-i,j}/(n - 1 + α),
all other components combined:   p(c_i ≠ c_{i'} for all i' ≠ i | c_{-i}, α) = α/(n - 1 + α).   (16)

This shows that the conditional class prior for components that are associated with other observations is proportional to the number of such observations; the combined prior for all other classes depends only on α and n. Notice how the analytical tractability of the integral in eq. (12) is essential, since it allows us to work directly with the (finite number of) indicator variables, rather than the (infinite number of) mixing proportions. We may now combine the likelihood from eq. (1) conditioned on the indicators with the prior from eq. (16) to obtain the conditional posteriors for the indicators:

components for which n_{-i,j} > 0:
p(c_i = j | c_{-i}, μ_j, s_j, α) ∝ p(c_i = j | c_{-i}, α) p(y_i | μ_j, s_j, c_{-i})
                                ∝ (n_{-i,j}/(n - 1 + α)) s_j^{1/2} exp(-s_j (y_i - μ_j)²/2),

all other components combined:
p(c_i ≠ c_{i'} for all i' ≠ i | c_{-i}, λ, r, β, w, α) ∝ (α/(n - 1 + α)) ∫ p(y_i | μ_j, s_j) p(μ_j, s_j | λ, r, β, w) dμ_j ds_j.   (17)

The likelihood for components with observations other than y_i currently associated with them is Gaussian with component parameters μ_j and s_j. The likelihood pertaining to the currently unrepresented classes (which have no parameters associated with them) is obtained through integration over the prior distribution for these. Note that we need not differentiate between the infinitely many unrepresented classes, since their parameter distributions are all identical. Unfortunately, this integral is not analytically tractable; I follow Neal [1998], who suggests sampling from the priors (which are Gaussian and Gamma shaped) in order to generate a Monte Carlo estimate of the probability of "generating a new class". Notice that this approach effectively generates parameters (by sampling from the prior) for the classes that are unrepresented. Since this Monte Carlo estimate is unbiased, the resulting chain will sample from exactly the desired distribution, no matter how many samples are used to approximate the integral; I have found that using a single sample works fairly well in many applications.
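A minimal sketch (assuming NumPy; structure and names are mine) of the indicator update of eqs. (16)-(17): represented classes are weighted by their occupation counts, and a single Monte Carlo draw from the priors stands in for the intractable integral over the unrepresented classes, following Neal [1998]. For brevity, the singleton case discussed in the next section is folded into the new-class draw, and pruning of emptied classes is omitted.

```python
# A sketch, not the paper's code. mu and s are dicts {class label: parameter};
# the common factor 1/(n-1+alpha) cancels in the normalisation.
import numpy as np

rng = np.random.default_rng(2)

def update_indicator(i, y, c, mu, s, alpha, sample_prior):
    """Resample c[i] under the infinite-limit conditional of eq. (17)."""
    n = len(y)
    labels = list(mu.keys())
    counts = np.array([np.sum((c == j) & (np.arange(n) != i)) for j in labels])
    lik = np.array([np.sqrt(s[j]) * np.exp(-0.5 * s[j] * (y[i] - mu[j]) ** 2)
                    for j in labels])
    weights = counts * lik                         # represented classes
    mu_new, s_new = sample_prior(rng)              # one draw from the priors
    lik_new = np.sqrt(s_new) * np.exp(-0.5 * s_new * (y[i] - mu_new) ** 2)
    weights = np.append(weights, alpha * lik_new)  # "new class" term
    pick = rng.choice(len(weights), p=weights / weights.sum())
    if pick == len(labels):                        # hitherto unrepresented class
        new_label = max(labels) + 1
        mu[new_label], s[new_label] = mu_new, s_new
        return new_label
    return labels[pick]
```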
In detail, there are three possibilities when computing conditional posterior class probabilities, depending on the number of observations associated with the class:

if n_{-i,j} > 0: there are other observations associated with class j, and the posterior class probability is as given by the top line of eq. (17).

if n_{-i,j} = 0 and c_i = j: observation y_i is currently the only observation associated with class j; this is a peculiar situation, since there are no other observations associated with the class, but the class still has parameters. It turns out that this situation should be handled as an unrepresented class, but rather than sampling for the parameters, one simply uses the existing class parameters; consult [Neal 1998] for a detailed derivation.

unrepresented classes: values for the mixture parameters are picked at random from the prior for these parameters, which is Gaussian for μ_j and Gamma shaped for s_j.

Now that all classes have parameters associated with them, we can easily evaluate their likelihoods (which are Gaussian) and the priors, which take the form n_{-i,j}/(n - 1 + α) for components with observations other than y_i associated with them, and α/(n - 1 + α) for the remaining classes. When hitherto unrepresented classes are chosen, a new class is introduced in the model; classes are removed when they become empty.

4 Inference; the "spirals" example

To illustrate the model, we use the 3-dimensional "spirals" dataset from [Ueda et al, 1998], containing 800 data points, plotted in figure 1. Five data points are generated from each of 160 isotropic Gaussians, whose means follow a spiral pattern.

Figure 1: The 800 cases from the three dimensional spirals data. The crosses represent a single (random) sample from the posterior for the mixture model. The k_rep = 20 represented classes account for n/(n + α) ≈ 99.6% of the mass. The lines indicate 2 std. dev. of the Gaussian mixture components; the thickness of the lines represents the mass of the class. To the right, histograms of 100 samples from the posterior for k_rep, α and the shape β are shown.

4.1 Multivariate generalisation

The generalisation to multivariate observations is straightforward. The means, μ_j, and precisions, s_j, become vectors and matrices respectively, and their prior (and posterior) distributions become multivariate Gaussian and Wishart. Similarly, the hyperparameter λ becomes a vector (multivariate Gaussian prior) and r and w become matrices with Wishart priors. The β parameter stays scalar, with the prior on (β - D + 1)^{-1} being Gamma with mean 1/D, where D is the dimension of the dataset. All other specifications stay the same. Setting D = 1 recovers the scalar case discussed in detail.

4.2 Inference

The mixture model is started with a single component, and a large number of Gibbs sweeps are performed, updating all parameters and hyperparameters in turn by sampling from the conditional distributions derived in the previous sections. In figure 2 the auto-covariance of several quantities is plotted, which reveals a maximum correlation-length of about 270. Then 30000 iterations are performed for modelling purposes (taking 18 minutes of CPU time on a Pentium PC): 3000 steps initially for "burn-in", followed by 27000 to generate 100 roughly independent samples from the posterior (spaced evenly 270 apart); a sketch of this schedule is given below. In figure 1, the represented components of one sample from the posterior are visualised with the data.
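A minimal sketch (names and structure are mine, not the paper's code) of the sampling schedule just described: burn in, then keep every 270th state to obtain roughly independent posterior samples.

```python
# A sketch of the MCMC schedule: 3000 burn-in sweeps, then 27000 more,
# thinning by 270 to collect about 100 roughly independent samples.
def run_chain(state, gibbs_sweep, n_burn=3000, n_sample=27000, thin=270):
    """gibbs_sweep(state) updates all parameters, hyperparameters and
    indicators in turn and returns the new state."""
    samples = []
    for t in range(n_burn + n_sample):
        state = gibbs_sweep(state)
        if t >= n_burn and (t - n_burn) % thin == 0:
            samples.append(state)
    return samples
```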
To the right of figure 1 we see that the posterior number of represented classes is very concentrated around 18-20, and the concentration parameter takes values around α ≈ 3.5, corresponding to only α/(n + α) ≈ 0.4% of the mass of the predictive distribution belonging to unrepresented classes. The shape parameter β takes values around 5-6, which gives the "effective number of points" contributed from the prior to the covariance matrices of the mixture components.

4.3 The predictive distribution

Given a particular state in the Markov chain, the predictive distribution has two parts: the represented classes (which are Gaussian) and the unrepresented classes. As when updating the indicators, we may choose to approximate the unrepresented classes by a finite mixture of Gaussians, whose parameters are drawn from the prior. The final predictive distribution is an average over the (e.g. 100) samples from the posterior. For the spirals data this density has roughly 1900 components for the represented classes, plus however many are used to represent the remaining mass. I have not attempted to show this distribution. However, one can imagine a smoothed version of the single sample shown in figure 1, from averaging over models with slightly varying numbers of classes and parameters. The (small) mass from the unrepresented classes spreads diffusely over the entire observation range.

Figure 2: The left plot shows the auto-covariance length for various parameters in the Markov chain, based on 10^5 iterations. Only the number of represented classes, k_rep, has a significant correlation; the effective correlation length is approximately 270, computed as the sum of covariance coefficients between lag -1000 and 1000. The right hand plot shows the number of represented classes growing during the initial phase of sampling. The initial 3000 iterations are discarded.

5 Conclusions

The infinite hierarchical Bayesian mixture model has been reviewed and extended into a practical method. It has been shown that good performance (without overfitting) can be achieved on multidimensional data. An efficient and practical MCMC algorithm with no free parameters has been derived and demonstrated on an example. The model is fully automatic, without needing specification of parameters of the (vague) prior. This corroborates the falsity of the common misconception that "the only difference between Bayesian and non-Bayesian methods is the prior, which is arbitrary anyway...". Further tests on a variety of problems reveal that the infinite mixture model produces densities whose generalisation is highly competitive with other commonly used methods. Current work is undertaken to explore performance on high-dimensional problems, in terms of computational efficiency and generalisation.
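A minimal sketch (assuming NumPy; not the paper's code) of the predictive density just described for the scalar case: average the Gaussian mixtures defined by each stored posterior sample. The diffuse mass from the unrepresented classes is ignored here for brevity.

```python
# A sketch: samples is a list of (mu, s, n_j, alpha) tuples, one per posterior
# sample, holding arrays of component means, precisions and occupation counts.
import numpy as np

def predictive_density(x, samples):
    """Monte Carlo average of the represented-class mixtures at points x."""
    total = np.zeros_like(x, dtype=float)
    for mu, s, n_j, alpha in samples:
        w = n_j / (n_j.sum() + alpha)          # represented-class weights
        comps = np.sqrt(s / (2 * np.pi)) * np.exp(-0.5 * s * (x[:, None] - mu) ** 2)
        total += comps @ w
    return total / len(samples)
```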
The infinite mixture model has several advantages over its finite counterpart: 1) in many applications, it may be more appropriate not to limit the number of classes; 2) the number of represented classes is automatically determined; 3) the use of MCMC effectively avoids local minima which plague mixtures trained by optimisation-based methods, e.g. EM [Ueda et al, 1998]; and 4) it is much simpler to handle the infinite limit than to work with finite models with unknown sizes, as in [Richardson & Green, 1997], or traditional approaches based on extensive cross-validation. The Bayesian infinite mixture model solves simultaneously several long-standing problems with mixture models for density estimation.

Acknowledgments

Thanks to Radford Neal for helpful comments, and to Naonori Ueda for making the spirals data available. This work is funded by the Danish Research Councils through the Computational Neural Network Center (CONNECT) and the THOR Center for Neuroinformatics.

References

Antoniak, C. E. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics 2, 1152-1174.

Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Annals of Statistics 1, 209-230.

Gilks, W. R. and P. Wild (1992). Adaptive rejection sampling for Gibbs sampling. Applied Statistics 41, 337-348.

Neal, R. M. (1996). Bayesian Learning for Neural Networks, Lecture Notes in Statistics No. 118, New York: Springer-Verlag.

Neal, R. M. (1998). Markov chain sampling methods for Dirichlet process mixture models. Technical Report 9815, Department of Statistics, University of Toronto. http://www.cs.toronto.edu/~radford/mixmc.abstract.html

Richardson, S. and P. Green (1997). On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B 59, 731-792.

Ueda, N., R. Nakano, Z. Ghahramani and G. E. Hinton (1998). SMEM algorithm for mixture models. NIPS 11, MIT Press.

West, M., P. Muller and M. D. Escobar (1994). Hierarchical priors and mixture models with applications in regression and density estimation. In P. R. Freeman and A. F. M. Smith (editors), Aspects of Uncertainty, pp. 363-386. John Wiley.

Williams, C. K. I. and C. E. Rasmussen (1996). Gaussian processes for regression. In D. S. Touretzky, M. C. Mozer and M. E. Hasselmo (editors), NIPS 8, MIT Press.
Can V1 mechanisms account for figure-ground and medial axis effects?

Zhaoping Li
Gatsby Computational Neuroscience Unit
University College London
zhaoping@gatsby.ucl.ac.uk

Abstract

When a visual image consists of a figure against a background, V1 cells are physiologically observed to give higher responses to image regions corresponding to the figure relative to their responses to the background. The medial axis of the figure also induces relatively higher responses compared to responses to other locations in the figure (except for the boundary between the figure and the background). Since the receptive fields of V1 cells are very small compared with the global scale of the figure-ground and medial axis effects, it has been suggested that these effects may be caused by feedback from higher visual areas. I show how these effects can be accounted for by V1 mechanisms when the size of the figure is small or is of a certain scale. They are a manifestation of the processes of pre-attentive segmentation which detect and highlight the boundaries between homogeneous image regions.

1 Introduction

Segmenting figure from ground is one of the most important visual tasks. We neither know how to execute it on a computer in general, nor do we know how the brain executes it. Further, the medial axis of a figure has been suggested as providing a convenient skeleton representation of its shape (Blum 1973). It is therefore exciting to find that responses of cells in V1, which is usually considered a low-level visual area, differentiate between figure and ground (Lamme 1995, Lamme, Zipser, and Spekreijse 1997, Zipser, Lamme, Schiller 1996) and highlight the medial axis (Lee, Mumford, Romero, and Lamme 1998). This happens even though the receptive fields in V1 are much smaller than the scale of these global and perceptually significant phenomena. A common assumption is that feedback from higher visual areas is mainly responsible for these effects. This is supported by the finding that the figure-ground effects in V1 can be strongly reduced or abolished by anaesthesia or lesions in higher visual areas (Lamme et al 1997). However, in a related experiment (Gallant, van Essen, and Nothdurft 1995), V1 cells were found to give higher responses to global boundaries between two texture regions. Further, this border effect was significant only 10-15 milliseconds after the initial responses of the cells and was present even under anaesthesia. It is thus plausible that V1 mechanisms are mainly responsible for the border effect.

In this paper, I propose that the figure-ground and medial axis effects are manifestations of the border effect, at least for appropriately sized figures. The border effect is significant within a limited and finite distance from the figure border. Let us call the image region within this finite distance from the border the effective border region. When the size of the figure is small enough, all parts of the figure belong to the effective border region and can induce higher responses. This suggests that the figure-ground effect will be reduced or diminished as the size of the figure becomes larger, and the V1 responses to regions of the figure far away from the border will not be significantly higher than responses to background. This suggestion is supported by experimental findings (Lamme et al 1997). Furthermore, the border effect can create secondary ripples as the effect decays with distance from the border.
Let us call the distance from the border to the ripple the ripple wavelength. When the size of a figure is roughly twice the ripple wavelength, the ripples from the two opposite borders of the figure can reinforce each other at the center of the figure to create the medial axis effect, which, indeed, is observed to occur only for figures of appropriate sizes (Lee et al 1998). I validate this proposal using a biologically based model of V1 with intra-cortical interactions between cells with nearby but not necessarily overlapping receptive fields. Intra-cortical interactions cause the responses of a cell to be modulated by nearby stimuli outside its classical receptive field - the contextual influences that are observed physiologically (Knierim and van Essen 1992, Kapadia et al 1995). Contextual influences make V1 cells sensitive to global image features, despite their local receptive fields, as manifested in the border and other effects.

2 The V1 model

We have previously constructed a V1 model and shown it to be able to highlight smooth contours against a noisy background (Li 1998, 1999, 1999b) and also the boundaries between texture regions in images - the border effect. Its behavior agrees with physiological observations (Knierim and van Essen 1992, Kapadia et al 1995) that the neural response to a bar is suppressed strongly by contextual bars of similar orientations - iso-orientation suppression; that the response is less suppressed by orthogonally or randomly oriented contextual bars; and that it is enhanced by contextual bars that are aligned to form a smooth contour in which the bar is within the receptive field - contour enhancement. Without loss of generality, the model ignores color, motion, and stereo dimensions, includes mainly layer 2-3 orientation selective cells, and ignores the intra-hypercolumnar mechanism by which their receptive fields are formed. Inputs to the model are images filtered by the edge- or bar-like local receptive fields (RFs) of V1 cells.¹ Cells influence each other contextually via horizontal intra-cortical connections (Rockland and Lund 1983, Gilbert 1992), transforming patterns of inputs to patterns of cell responses. Fig. 1 shows the elements of the model and their interactions. At each location i there is a model V1 hypercolumn composed of K neuron pairs. Each pair (i, θ) has RF center i and preferred orientation θ = kπ/K for k = 1, 2, ..., K, and is called (the neural representation of) an edge segment. Based on experimental data (White, 1989), each edge segment consists of an excitatory and an inhibitory neuron that are interconnected, and each model cell represents a collection of local cells of similar types. The excitatory cell receives the visual input; its output is used as a measure of the response or salience of the edge segment and projects to higher visual areas. The inhibitory cells are treated as interneurons.

¹ The terms 'edge' and 'bar' will be used interchangeably.

[Figure 1 shows three panels: A, the visual space, edge detectors, and their interactions; B, the neural connection pattern (solid: J, dashed: W); and C, the model neural elements - an interconnected excitatory-inhibitory neuron pair for each edge segment, with visual inputs filtered through the receptive fields to the excitatory cells, edge outputs to higher visual areas, and inputs I_c to the inhibitory interneurons.]
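To make the dynamics concrete, here is a minimal sketch (assuming NumPy; the parameter values and the piecewise-linear gain functions are illustrative stand-ins, not those of Li 1998) of a simple Euler integration of the membrane-potential equations quoted after Figure 1 below:

```python
# A sketch, not the author's code: one Euler step of the coupled excitatory (x)
# and inhibitory (y) dynamics of the V1 model. J, W are the horizontal
# connection matrices; psi spreads inhibition across orientations within a
# hypercolumn. N grid locations, K orientations (both illustrative).
import numpy as np

N, K = 32, 12
gx = lambda x: np.clip(x, 0.0, 1.0)   # stand-ins for the sigmoid-like
gy = lambda y: np.clip(y, 0.0, 2.0)   # firing-rate functions g_x, g_y

def euler_step(x, y, I_visual, J, W, psi, J0=0.8, ax=1.0, ay=1.0,
               I0=0.1, Ic=0.1, dt=0.05):
    """x, y: (N*K,) membrane potentials; J, W: (N*K, N*K); psi: (K, K)."""
    r = gx(x)                                  # excitatory firing rates
    inhib = gy(y).reshape(N, K) @ psi.T        # within-hypercolumn inhibition
    dx = -ax * x - inhib.ravel() + J0 * r + J @ r + I_visual + I0
    dy = -ay * y + r + W @ r + Ic
    return x + dt * dx, y + dt * dy
```

In the model's usage, the visual input I_visual persists after onset, the system is integrated until it settles (often into oscillations), and the temporal average of gx(x) serves as the output saliency.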
Figure 1: A: Visual inputs are sampled in a discrete grid of edge/bar detectors. Each grid point i has K neuron pairs (see C), one per bar segment, tuned to different orientations θ spanning 180°. Two segments at different grid points can interact with each other via monosynaptic excitation J (the solid arrow from one thick bar to another) or disynaptic inhibition W (the dashed arrow to a thick dashed bar). See also C. B: A schematic of the neural connection pattern from the center (thick solid) bar to neighboring bars within a few sampling unit distances. J's contacts are shown by thin solid bars; W's are shown by thin dashed bars. The connection pattern is translation and rotation invariant. C: An input bar segment is directly processed by an interconnected pair of excitatory and inhibitory cells; each cell models abstractly a local group of cells of the same type. The excitatory cell receives visual input and sends output g_x(x_iθ) to higher centers. The inhibitory cell is an interneuron. Visual space is taken as having periodic boundary conditions.

Based on observations by Gilbert, Lund and their colleagues (Rockland and Lund, 1983, Gilbert 1992), horizontal connections J_iθ,jθ' (respectively W_iθ,jθ') mediate contextual influences via monosynaptic excitation (respectively disynaptic inhibition) from jθ' to iθ, which have nearby but different RF centers, i ≠ j, and similar orientation preferences, θ ~ θ'. The membrane potentials follow the equations:

ẋ_iθ = -α_x x_iθ - Σ_Δθ ψ(Δθ) g_y(y_i,θ+Δθ) + J_0 g_x(x_iθ) + Σ_{j≠i,θ'} J_iθ,jθ' g_x(x_jθ') + I_iθ + I_0,
ẏ_iθ = -α_y y_iθ + g_x(x_iθ) + Σ_{j≠i,θ'} W_iθ,jθ' g_x(x_jθ') + I_c,

where α_x x_iθ and α_y y_iθ model the decay to resting potential, g_x(x) and g_y(y) are sigmoid-like functions modeling cells' firing rates in response to membrane potentials x and y, respectively, ψ(Δθ) is the spread of inhibition within a hypercolumn, J_0 g_x(x_iθ) is self excitation, and I_c and I_0 are background inputs, including noise and inputs modeling the general and local normalization of activities (see Li (1998) for more details). Visual input I_iθ persists after onset, and initializes the activity levels g_x(x_iθ). The activities are then modified by the contextual influences. Depending on the visual input, the system often settles into an oscillatory state (Gray and Singer, 1989; see the details in Li 1998). Temporal averages of g_x(x_iθ) over several oscillation cycles are used as the model's output. The nature of the computation performed by the model is determined largely by the horizontal connections J and W, which are local (spanning only a few hypercolumns), and translation and rotation invariant (Fig. 1B).

[Figure 2 shows an input texture image (A) of two regions of oriented bars and the corresponding model output (B).]

Figure 2: An example of the performance of the model. A: Input I_iθ consists of two regions; each visible bar has the same input strength. B: Model output for A, showing non-uniform output strengths (temporal averages of g_x(x_iθ)) for the edges. The input and output strengths are proportional to the bar widths. Because of the noise in the system, the saliencies of the bars in the same column are not exactly the same; this is also the case in other figures.

The model was applied to some texture border and figure-ground stimuli, as shown in examples in the figures. The input values I_iθ are the same for all visible bars in each example. The differences in the outputs are caused by intracortical interactions. They become significant about one membrane time constant after the initial neural response (Li, 1998). The widths of the bars in the figures are proportional to input and output strengths. The plotted region in each picture is often a small region of an extended image. The same model parameters (e.g. the dependence of the synaptic weights on distances and orientations, the thresholds and gains in the functions g_x() and g_y(), and the level of input noise in I_0) are used for all the simulation examples.

Fig. 2 demonstrates that the model indeed gives higher responses to the boundaries between texture regions. This border effect is highly significant within a distance of about 2 texture element spacings from the border; thus the effective border region is about 2 texture element spacings in this example. Furthermore, at about 9 texture element spacings to the right of the texture border there is a much smaller but significant (visible in the figure) secondary peak in the response amplitude. Thus the ripple wavelength is about 9 texture element spacings here. The border effect is mainly caused by the fact that the texture elements at the border experience less iso-orientation suppression (which reduces the response levels to other texture bars in the middle of a homogeneous texture region) - the texture elements at the border have fewer neighboring texture bars of a similar orientation than the texture elements in the centers of the regions. The stronger responses to the effective border region cause extra iso-orientation suppression to texture bars near but right outside the effective border region. Let us call this region of stronger suppression from the border the border suppression region; it is significant and visible in Fig. 2B. This region can reach no further than the longest length of the horizontal connections (mediating the suppression) from the effective border region.

Consequently, texture bars right outside the border suppression region not only escape the stronger suppression from the border, but also experience weaker iso-orientation suppression from the weakened texture bars in the nearby border suppression region. As a result, a second saliency peak appears - the ripple effect - and we can hence conclude that the ripple wavelength is of the same order of magnitude as the longest connection length of the cortical lateral connections mediating intra-cortical interactions.

[Figure 3 shows model input-output pairs for figures of several sizes.]

Figure 3: Dependence on the size of the figure. The figure-ground effect is most evident only for small figures, and the medial axis effect is most evident only for figures of finite and appropriate sizes.

Fig. 3 shows that for very small figures, the whole figure belongs to the effective border region and is highlighted in the V1 responses. As the figure size increases, the responses in the inside of the figure become smaller than the responses in the border region. However, when the size of the figure is appropriate, namely about twice the ripple wavelength, the center of the figure induces a secondary response highlight. In this case, the ripples or the secondary saliency peaks from both borders superpose onto each other at the same spatial location at the center of the figure. This reinforces the saliency peak at this medial axis since it has two border suppression regions (from the two opposite borders), one on each side of it, as its contextual stimuli. For even larger figures, the medial axis effect diminishes because the ripples from the two opposite borders of the figure no longer reinforce each other.

[Figure 4 shows model input-output pairs for figures of different shapes and texture orientations.]

Figure 4: Dependence on the shape and texture features of the figures.

Fig. 4 demonstrates that the border effect and its consequences for the medial axis also depend on the shape of the figures and the nature of the texture they contain (e.g. the orientations of the elements). Bars in the texture parallel to the border induce stronger highlights and, as a consequence, cause stronger ripple effects and medial axis highlights. This comes from the stronger co-linear, contour-enhancing inputs these bars receive compared with bars not parallel to the border.

3 Summary and Discussion

The model of V1 was originally proposed to account for pre-attentive contour enhancement and visual segmentation (Li 1998, 1999, 1999b). The contextual influences mediated by intracortical interactions enable each V1 neuron to process inputs from a local image area substantially larger than its classical receptive field. This enables cortical neurons to detect image locations where translation invariance in the input image breaks down, and to highlight these image locations with higher neural activities, making them conspicuous. These highlights mark candidate locations for image region (or object surface) boundaries, smooth contours, and small figures against backgrounds, serving the purpose of pre-attentive segmentation. This paper has shown that the figure-ground and medial axis effects observed in the recent experiments can be accounted for using a purely V1 mechanism for border highlighting, provided that the sizes of the figures are small enough or of finite and appropriate scale. This has been the case in the existing experiments. We therefore suggest that feedback from higher visual areas is not necessary to explain the experimental observations, although we cannot, of course, exclude the possibility that it also contributes.

References

[1] Lamme, V. A. F.
(1995) Journal of Neuroscience 15(2), 1605-1615.
[2] Lee, T. S., Mumford, D., Romero, R., and Lamme, V. A. F. (1998) Vision Research 38: 2429-2454.
[3] Zipser, K., Lamme, V. A. F., and Schiller, P. H. (1996) J. Neurosci. 16(22), 7376-7389.
[4] Lamme, V. A. F., Zipser, K., and Spekreijse, H. (1997) Soc. Neuroscience Abstract 603.1.
[5] Blum, H. (1973) Biological shape and visual science. J. Theor. Biol. 38: 205-287.
[6] Gallant, J. L., van Essen, D. C., and Nothdurft, H. C. (1995) In Early Vision and Beyond, eds. T. Papathomas, C. Chubb, A. Gorea, and E. Kowler (MIT Press), pp. 89-98.
[7] Gilbert, C. D. (1992) Neuron 9(1), 1-13.
[8] Gray, C. M. and Singer, W. (1989) Proc. Natl. Acad. Sci. USA 86, 1698-1702.
[9] Kapadia, M. K., Ito, M., Gilbert, C. D., and Westheimer, G. (1995) Neuron 15(4), 843-856.
[10] Knierim, J. J. and van Essen, D. C. (1992) J. Neurophysiol. 67, 961-980.
[11] Li, Z. (1998) Neural Computation 10(4), 903-940.
[12] Li, Z. (1999) Network: Computation in Neural Systems 10(2), 187-212.
[13] Li, Z. (1999b) Spatial Vision 13(1), 25-50.
[14] Rockland, K. S. and Lund, J. S. (1983) J. Comp. Neurol. 216, 303-318.
[15] White, E. L. (1989) Cortical Circuits (Birkhauser).
A SNoW-Based Face Detector

Ming-Hsuan Yang  Dan Roth  Narendra Ahuja
Department of Computer Science and the Beckman Institute
University of Illinois at Urbana-Champaign, Urbana, IL 61801
mhyang@vision.ai.uiuc.edu  danr@cs.uiuc.edu  ahuja@vision.ai.uiuc.edu

Abstract

A novel learning approach for human face detection using a network of linear units is presented. The SNoW learning architecture is a sparse network of linear functions over a pre-defined or incrementally learned feature space and is specifically tailored for learning in the presence of a very large number of features. A wide range of face images in different poses, with different expressions and under different lighting conditions are used as a training set to capture the variations of human faces. Experimental results on commonly used benchmark data sets of a wide range of face images show that the SNoW-based approach outperforms methods that use neural networks, Bayesian methods, support vector machines and others. Furthermore, learning and evaluation using the SNoW-based method are significantly more efficient than with other methods.

1 Introduction

Growing interest in intelligent human computer interactions has motivated a recent surge in research on problems such as face tracking, pose estimation, face expression and gesture recognition. Most methods, however, assume human faces in their input images have been detected and localized. Given a single image or a sequence of images, the goal of face detection is to identify and locate human faces regardless of their positions, scales, orientations, poses and illumination. To support automated solutions for the above applications, this has to be done efficiently and robustly. The challenge in building an efficient and robust system for this problem stems from the fact that human faces are highly non-rigid objects with a high degree of variability in size, shape, color and texture.

Numerous intensity-based methods have been proposed recently to detect human faces in a single image or a sequence of images. Sung and Poggio [24] report an example-based learning approach for locating vertical frontal views of human faces. They use a number of Gaussian clusters to model the distributions of face and non-face patterns. A small window is moved over an image to determine whether a face exists using the estimated distributions. In [16], a detection algorithm is proposed that combines template matching and feature-based detection methods using hierarchical Markov random fields (MRF) and maximum a posteriori probability (MAP) estimation. Colmenarez and Huang [4] apply Kullback relative information for maximal discrimination between positive and negative examples of faces. They use a family of discrete Markov processes to model faces and background patterns and estimate the density functions. Detection of a face is based on the likelihood ratio computed during training. Moghaddam and Pentland [12] propose a probabilistic method that is based on density estimation in a high dimensional space using an eigenspace decomposition. In [20], Rowley et al. use an ensemble of neural networks to learn face and non-face patterns for face detection. Schneiderman et al. describe a probabilistic method based on local appearance and principal component analysis [23]. Their method gives some preliminary results on profile face detection.
Finally, hidden Markov models [17], higher order statistics [17], and support vector machines (SVM) [14] have also been applied to face detection and demonstrated some success in detecting upright frontal faces under certain lighting conditions.

In this paper, we present a face detection method that uses the SNoW learning architecture [18, 3] to detect faces with different features and expressions, in different poses, and under different lighting conditions. SNoW (Sparse Network of Winnows) is a sparse network of linear functions that utilizes the Winnow update rule [10]. SNoW is specifically tailored for learning in domains in which the potential number of features taking part in decisions is very large, but may be unknown a priori. Some of the characteristics of this learning architecture are its sparsely connected units, the allocation of features and links in a data-driven way, the decision mechanism and the utilization of an efficient update rule. SNoW has been used successfully on a variety of large scale learning tasks in the natural language domain [18, 13, 5, 19] and this is its first use in the visual processing domain.

In training the SNoW-based face detector, we use a set of 1,681 face images from the Olivetti [22], UMIST [6], Harvard [7], Yale [1] and FERET [15] databases to capture the variations in face patterns. In order to compare our approach with other methods, our experiments involve two benchmark data sets [20, 24] that have been used in other works on face detection. The experimental results on these benchmark data sets (which consist of 225 images with 619 faces) show that our method outperforms all other methods evaluated on this problem, including those using neural networks [20], Kullback relative information [4], naive Bayes [23] and support vector machines [14], while being significantly more efficient computationally. Along with these experimental results we describe further experiments that provide insight into some of the theoretical and practical considerations of SNoW-based learning systems. In particular, we study the effect of learning with primitive as well as with multi-scale features, and discuss some of the sources of the success of the approach.

2 The SNoW System

The SNoW (Sparse Network of Winnows) learning architecture is a sparse network of linear units over a common pre-defined or incrementally learned feature space. Nodes in the input layer of the network represent simple relations over the input and are used as the input features. Each linear unit is called a target node and represents relations which are of interest over the input examples; in the current application, only two target nodes are used, one as a representation for a face pattern and the other for a non-face pattern. Given a set of relations (i.e., types of features) that may be of interest in the input image, each input image is mapped into a set of features which are active (present) in it; this representation is presented to the input layer of SNoW and propagates to the target nodes. (Features may take either binary values, just indicating the fact that the feature is active (present), or real values, reflecting its strength; in the current application, all features are binary. See Sec. 3.1.) Target nodes are linked via weighted edges to (some of the) input features. Let A_t = {i_1, ..., i_m} be the set of features that are active in an example and are linked to the target node t.
Then the linear unit is active if and only if Σ_{i∈A_t} w_i^t > θ_t, where w_i^t is the weight on the edge connecting the ith feature to the target node t, and θ_t is its threshold.

In the current application a single SNoW unit which includes two subnetworks, one for each of the targets, is used. A given example is treated autonomously by each target subnetwork; that is, an image labeled as a face is used as a positive example for the face target and as a negative example for the non-face target, and vice versa. The learning policy is on-line and mistake-driven; several update rules can be used within SNoW. The most successful update rule, and the only one used in this work, is a variant of Littlestone's Winnow update rule, a multiplicative update rule tailored to the situation in which the set of input features is not known a priori, as in the infinite attribute model [2]. This mechanism is implemented via the sparse architecture of SNoW. That is, (1) input features are allocated in a data-driven way - an input node for the feature i is allocated only if the feature i is active in the input image, and (2) a link (i.e., a non-zero weight) exists between a target node t and a feature i if and only if i has been active in an image labeled t. Thus, the architecture also supports augmenting the feature types at later stages or from external sources in a flexible way, an option we do not use in the current work.

The Winnow update rule has, in addition to the threshold θ_t at the target t, two update parameters: a promotion parameter α > 1 and a demotion parameter 0 < β < 1. These are used to update the current representation of the target t (the set of weights w_i^t) only when a mistake in prediction is made. Let A_t = {i_1, ..., i_m} be the set of active features that are linked to the target node t. If the algorithm predicts 0 (that is, Σ_{i∈A_t} w_i^t ≤ θ_t) and the received label is 1, the active weights in the current example are promoted in a multiplicative fashion: ∀i ∈ A_t, w_i^t ← α · w_i^t. If the algorithm predicts 1 (Σ_{i∈A_t} w_i^t > θ_t) and the received label is 0, the active weights in the current example are demoted: ∀i ∈ A_t, w_i^t ← β · w_i^t. All other weights are unchanged. The key property of the Winnow update rule is that the number of examples¹ it requires to learn a linear function grows linearly with the number of relevant features and only logarithmically with the total number of features. This property seems crucial in domains in which the number of potential features is vast, but a relatively small number of them is relevant (this does not mean that only a small number of them will be active, or have non-zero weights). Winnow is known to learn efficiently any linear threshold function and to be robust in the presence of various kinds of noise and in cases where no linear-threshold function can make perfect classifications, and still maintain its abovementioned dependence on the number of total and relevant attributes [11, 9]. Once target subnetworks have been learned and the network is being evaluated, a winner-take-all mechanism selects the dominant active target node in the SNoW unit to produce a final prediction. In general, but not in this work, units' outputs may be cached and processed along with the output of other SNoW units to produce a coherent output.

¹ In the on-line setting [10] this is usually phrased in terms of a mistake bound, but is known to imply convergence in the PAC sense [25, 8].
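A minimal sketch (plain Python; class and parameter names are mine, and the α, β, θ values are illustrative, not the paper's settings) of one target subnetwork with the promotion/demotion rule just described and data-driven weight allocation:

```python
# A sketch, not the authors' code, of a single Winnow target node.
class WinnowTarget:
    def __init__(self, alpha=1.5, beta=0.8, theta=1.0):
        self.alpha, self.beta, self.theta = alpha, beta, theta
        self.w = {}                       # weights exist only for seen features

    def score(self, active):
        """Sum of weights over the active features linked to this target."""
        return sum(self.w.get(i, 0.0) for i in active)

    def update(self, active, label):
        """active: iterable of active feature indices; label: 0 or 1."""
        if label == 1:                    # allocate links in a data-driven way
            for i in active:
                self.w.setdefault(i, 1.0)
        pred = 1 if self.score(active) > self.theta else 0
        if pred == 0 and label == 1:      # promote on a missed positive
            for i in active:
                self.w[i] *= self.alpha
        elif pred == 1 and label == 0:    # demote on a false positive
            for i in active:
                self.w[i] = self.w.get(i, 0.0) * self.beta
```

In the face detector, one such target is trained for faces and one for non-faces, and a winner-take-all comparison of their scores gives the final prediction.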
3 Learning to detect faces

For training, we use a set of 1,681 face images (collected from the Olivetti [22], UMIST [6], Harvard [7], Yale [1] and FERET [15] databases) which have wide variations in pose, facial expression and lighting condition. For negative examples we start with 8,422 non-face examples from 400 images of landscapes, trees, buildings, etc. Although it is extremely difficult to collect a representative set of non-face examples, the bootstrap method [24] is used to include more non-face examples during training. For positive examples, each face sample is manually cropped and normalized such that it is aligned vertically and its size is 20 x 20 pixels. To make the detection method less sensitive to scale and rotation variation, 10 face examples are generated from each original sample. The images are produced by randomly rotating the images by up to 15 degrees with scaling between 80% and 120%. This produces 16,810 face samples. Then, histogram equalization is performed that maps the intensity values to expand the range of intensities. The same procedure is applied to input images in the detection phase.

3.1 Primitive Features

The SNoW-based face detector makes use of Boolean features that encode the positions and intensity values of pixels. Let the pixel at (x, y) of an image with width w and height h have intensity value I(x, y) (0 ≤ I(x, y) ≤ 255). This information is encoded as a feature whose index is 256(y·w + x) + I(x, y). This representation ensures that different points in the {position x intensity} space are mapped to different features. (That is, the feature indexed 256(y·w + x) + I(x, y) is active if and only if the intensity in position (x, y) is I(x, y).) In our experiments, the values for w and h are 20 since each face sample has been normalized to an image of 20 x 20 pixels. Note that although the number of potential features in our representation is 102,400 (400 x 256), only 400 of those are active (present) in each example, and it is plausible that many features will never be active. Since the algorithm's complexity depends on the number of active features in an example, rather than the total number of features, the sparseness also ensures efficiency.

3.2 Multi-scale Features

Many vision problems have utilized multi-scale features to capture the structures of an object. However, extracting detailed multi-scale features using edge or region information from segmentation is a computationally expensive task. Here we use the SNoW paradigm to extract Boolean features that represent multi-scale information. This is done in a similar way to the {position x intensity} features used in Sec. 3.1, only that in this case we encode, in addition to position, the mean and variance of a multi-scale pixel. The hope is that the multi-scale features will capture information that otherwise requires many pixel-based features to represent, and thus simplify the learning problem. Uninformative multi-scale features will be quickly assigned low weights by the learning algorithm and will not degrade performance. Since each face sample is normalized to be a rectangular image of the same size, it suffices to consider rectangular sub-images of varying size from face samples, and for each generate features in terms of the means and variances of their intensity values.
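A minimal sketch (assuming NumPy; not the authors' code) of the primitive {position x intensity} encoding of Sec. 3.1: a 20x20 patch yields exactly 400 active Boolean features out of the 102,400 possible indices.

```python
# A sketch: compute the active feature indices 256*(y*w + x) + I(x, y).
import numpy as np

def primitive_features(patch):
    """patch: (h, w) array of uint8 intensities in [0, 255].
    Returns one active feature index per pixel."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return (256 * (ys * w + xs) + patch).ravel()

patch = np.random.default_rng(3).integers(0, 256, size=(20, 20), dtype=np.uint8)
active = primitive_features(patch)   # 400 indices, each < 102,400
```

Because the index interleaves position and intensity, two patches differing at even one pixel activate different features there, while the per-example cost stays proportional to the 400 active features rather than the full feature space.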
Empirical results show that faces can be described effectively this way. Instead of using the absolute values of the mean and variance when encoding the features, we discretize these values into a predefined number of classes. Since the distribution of the mean values, as well as of the variance values, is normal, the discretization is finer near the means of these distributions. The total number of values was determined empirically to be 100, out of which 80 ended up near the mean. Given that, we use the same scheme as in Sec. 3.1 to map the {position $\times$ intensity mean $\times$ intensity variance} space into the Boolean feature space. This is done separately for four different sub-image scales, of 1 x 1, 2 x 2, 4 x 4, and 10 x 10 pixels. The multi-scale feature vector consists of the active features corresponding to all these scales. The number of active features in each example is therefore 400 + 100 + 25 + 4, although the total number of features is much larger. In recent work we have used more sophisticated conjunctive features for this purpose, yielding even better results. However, the emphasis here is that with the SNoW approach, even very simplistic features support excellent performance.

4 Empirical Results

We tested the SNoW-based approach with both sets of features on the two sets of images collected by Rowley [20] and Sung [24]. Each image is scanned with a rectangular window to determine whether a face exists in the window or not. To detect faces of different scales, each input image is repeatedly subsampled by a factor of 1.2 and scanned through for 10 iterations; this scanning loop is sketched below. Table 1 shows the reported experimental results of the SNoW-based face detectors and several face detection systems using the two benchmark data sets (available at http://www.cs.cmu.edu/~har/faces.html). The first data set consists of 130 images with 507 frontal faces and the second data set consists of 23 images with 155 frontal faces. There are a few hand-drawn faces and cartoon faces in both sets. Since some methods use intensity values as their features, systems 1-4 and 7 discard these hand-drawn and cartoon faces. Therefore, there are 125 images with 483 faces in test set 1 and 20 images with 136 faces in test set 2, respectively. The reported detection rate is computed as the ratio between the number of faces detected in the images by the system and the number of faces identified there by humans. The number of false detections is the number of non-faces detected as faces. It is difficult to evaluate the performance of different methods even though they use the same benchmark data sets, because different criteria (e.g., training time, number of training examples involved, execution time, number of scanned windows in detection) can be applied to favor one method over another. Also, one can tune the parameters of one's method to increase the detection rates while also increasing the false detections. The methods using neural networks [20], distribution-based models [24], Kullback relative information [4] and naive Bayes [23] report several experimental results based on different sets of parameters. Table 1 summarizes the best detection rates and corresponding false detections of these methods. Although the method in [4] has the highest detection rate in one benchmark test, this was achieved by significantly increasing the number of false detections. Other than that, it is evident that the SNoW-based face detectors outperform the others in terms of overall performance.
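Returning to the evaluation protocol above, a minimal sketch of the scanning loop follows; the classifier is a placeholder, and the nearest-neighbor downsampling is our simplification of whatever resampling was actually used:

```python
# Sketch of the multi-scale scan: slide a 20x20 window over the image,
# then repeatedly shrink the image by a factor of 1.2 and rescan,
# for 10 iterations. `classify` stands in for the trained SNoW unit.
def downsample(image, factor):
    # Nearest-neighbor subsampling; a stand-in for proper resampling.
    h, w = len(image), len(image[0])
    nh, nw = int(h / factor), int(w / factor)
    return [[image[int(y * factor)][int(x * factor)] for x in range(nw)]
            for y in range(nh)]

def scan_for_faces(image, classify, window=20, factor=1.2, iterations=10):
    detections, scale = [], 1.0
    for _ in range(iterations):
        h, w = len(image), len(image[0])
        for y in range(h - window + 1):
            for x in range(w - window + 1):
                patch = [row[x:x + window] for row in image[y:y + window]]
                if classify(patch):
                    # report the window in original-image coordinates
                    detections.append((int(x * scale), int(y * scale),
                                       int(window * scale)))
        image = downsample(image, factor)
        scale *= factor
    return detections
```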
These results show the credibility of SNoW for these tasks, as well as exhibit the improvement achieved by increasing the expressiveness of the features.

Table 1: Experimental results on images from test set 1 (125 images with 483 faces) in [20] and test set 2 (20 images with 136 faces) in [24] (see text for details)

                                       Test Set 1                 Test Set 2
Method                             Detect Rate  False Detects  Detect Rate  False Detects
SNoW w/ primitive features            94.2%          84           93.6%          3
SNoW w/ multi-scale features          94.8%          78           94.1%          3
Mixture of factor analyzers [26]      92.3%          82           89.4%          3
Fisher linear discriminant [27]       93.6%          74           91.5%          1
Distribution-based [24]                N/A          N/A           81.9%         13
Neural network [20]                   92.5%         862           90.3%         42
Naive Bayes [23]                      93.0%          88           91.2%         12
Kullback relative information [4]     98.0%       12758            N/A         N/A
Support vector machine [14]            N/A          N/A           74.2%         20

This may indicate that further elaboration of the features, which can be done in a very general and flexible way within SNoW, would yield further improvements. In addition to comparing feature sets, we started to investigate some of the reasons for the success of SNoW in this domain, which we discuss briefly below. Two potential contributions are the Winnow update rule and the architecture. First, we studied the update rule in isolation, independent of the SNoW architecture. The results we got when using Winnow simply as a discriminator were fairly poor (63.9%/65.3% for test set 1 with primitive and multi-scale features, respectively, and similar results for test set 2). The results are not surprising, given that Winnow is used here only as a discriminator and is using only positive weights. Investigating the architecture in isolation reveals that weighting or discarding features based on their contribution to mistakes during training, as is done within SNoW, is crucial. Considering the active features uniformly (separately for faces and non-faces) yields poor results. Specifically, studying the resulting SNoW network shows that the total number of features that were active with non-faces is 102,208, out of 102,400 possible (primitive) features. The total number of active features in faces was only 82,608, most of which are active only a few times. In retrospect, this is clear given the diverse set of images used as negative examples, relative to the somewhat restricted (by nature) set of images that constitute faces. (A similar phenomenon occurs with the multi-scale features, where the numbers are 121,572 and 90,528, respectively, out of 135,424.) Overall, it is evident that the architecture, the learning regime and the update rule all contribute significantly to the success of the approach.

Figure 1 shows some faces detected in our experiments. Note that profile faces and faces under heavy illumination are detected very well by our method, and that although several detections may occur around a single face, only one window is drawn to enclose each detected face for clear presentation.

[Figure 1: Sample experimental results using our method on images from two benchmark data sets. Every detected face is shown with an enclosing window.]

5 Discussion and Conclusion

Many theoretical and experimental issues are to be addressed before a learning system of this sort can be used to detect faces efficiently and robustly under general conditions.
In terms of the face detection problem, the presented method is still not able to detect rotated faces. A recent method [21] addresses this problem by building upon an upright face detector [20] and rotating each test sample to an upright position. However, it suffers from degraded detection rates and more false detections. Given our results, we believe that the SNoW approach, if adapted in similar ways, would generalize very well to detect faces under more general conditions. In terms of the SNoW architecture, although its main ingredients are understood theoretically, more work is required to better understand its strengths. This is increasingly interesting given that the architecture has been found to perform very well in large-scale problems in the natural language domain as well. The contributions of this paper can be summarized as follows. We have introduced the SNoW learning architecture to the domain of visual processing and described an approach that detects faces regardless of their poses, facial features and illumination conditions. Experimental results show that this method outperforms other methods in terms of detection rates and false detections, while being more efficient both in learning and in evaluation.

References
[1] P. Belhumeur, J. Hespanha, and D. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, 1997.
[2] A. Blum. Learning boolean functions in an infinite attribute space. Machine Learning, 9(4):373-386, 1992.
[3] A. Carleson, C. Cumby, J. Rosen, and D. Roth. The SNoW learning architecture. Technical Report UIUCDCS-R-99-2101, UIUC Computer Science Department, May 1999.
[4] A. J. Colmenarez and T. S. Huang. Face detection with information-based maximum discrimination. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 782-787, 1997.
[5] A. R. Golding and D. Roth. A Winnow based approach to context-sensitive spelling correction. Machine Learning, 34:107-130, 1999. Special Issue on Machine Learning and Natural Language.
[6] D. B. Graham and N. M. Allinson. Characterizing virtual eigensignatures for general purpose face recognition. In H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang, editors, Face Recognition: From Theory to Applications, volume 163 of NATO ASI Series F, Computer and Systems Sciences, pages 446-456. Springer, 1998.
[7] P. Hallinan. A Deformable Model for Face Recognition Under Arbitrary Lighting Conditions. PhD thesis, Harvard University, 1995.
[8] D. Helmbold and M. K. Warmuth. On weak learning. Journal of Computer and System Sciences, 50(3):551-573, June 1995.
[9] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. In Proceedings of the Annual ACM Symposium on the Theory of Computing, 1995.
[10] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[11] N. Littlestone. Redundant noisy attributes, attribute errors, and linear threshold learning using Winnow. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 147-156, 1991.
[12] B. Moghaddam and A. Pentland. Probabilistic visual learning for object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):696-710, 1997.
[13] M. Munoz, V. Punyakanok, D. Roth, and D. Zimak.
A learning approach to shallow parsing. In EMNLP-VLC'99, the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, June 1999.
[14] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 130-136, 1997.
[15] P. J. Phillips, H. Moon, S. Rizvi, and P. Rauss. The FERET evaluation. In H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang, editors, Face Recognition: From Theory to Applications, volume 163 of NATO ASI Series F, Computer and Systems Sciences, pages 244-261. Springer, 1998.
[16] R. J. Qian and T. S. Huang. Object detection using hierarchical MRF and MAP estimation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 186-192, 1997.
[17] A. N. Rajagopalan, K. S. Kumar, J. Karlekar, R. Manivasakan, and M. M. Patil. Finding faces in photographs. In Proceedings of the Sixth International Conference on Computer Vision, pages 640-645, 1998.
[18] D. Roth. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 806-813, 1998.
[19] D. Roth and D. Zelenko. Part of speech tagging using a network of linear separators. In COLING-ACL 98, The 17th International Conference on Computational Linguistics, pages 1136-1142, 1998.
[20] H. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23-38, 1998.
[21] H. Rowley, S. Baluja, and T. Kanade. Rotation invariant neural network-based face detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 38-44, 1998.
[22] F. S. Samaria. Face Recognition Using Hidden Markov Models. PhD thesis, University of Cambridge, 1994.
[23] H. Schneiderman and T. Kanade. Probabilistic modeling of local appearance and spatial relationships for object recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 45-51, 1998.
[24] K.-K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):39-51, 1998.
[25] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, Nov. 1984.
[26] M.-H. Yang, N. Ahuja, and D. Kriegman. Face detection using a mixture of factor analyzers. In Proceedings of the IEEE International Conference on Image Processing, 1999.
[27] M.-H. Yang, N. Ahuja, and D. Kriegman. Mixtures of linear subspaces for face detection. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000.
Policy Search via Density Estimation

Andrew Y. Ng, Computer Science Division, U.C. Berkeley, Berkeley, CA 94720, ang@cs.berkeley.edu
Ronald Parr, Computer Science Dept., Stanford University, Stanford, CA 94305, parr@cs.stanford.edu
Daphne Koller, Computer Science Dept., Stanford University, Stanford, CA 94305, koller@cs.stanford.edu

Abstract

We propose a new approach to the problem of searching a space of stochastic controllers for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP). Following several other authors, our approach is based on searching in parameterized families of policies (for example, via gradient descent) to optimize solution quality. However, rather than trying to estimate the values and derivatives of a policy directly, we do so indirectly, using estimates for the probability densities that the policy induces on states at the different points in time. This enables our algorithms to exploit the many techniques for efficient and robust approximate density propagation in stochastic systems. We show how our techniques can be applied both to deterministic propagation schemes (where the MDP's dynamics are given explicitly in compact form) and to stochastic propagation schemes (where we have access only to a generative model, or simulator, of the MDP). We present empirical results for both of these variants on complex problems.

1 Introduction

In recent years, there has been growing interest in algorithms for approximate planning in (exponentially or even infinitely) large Markov decision processes (MDPs) and partially observable MDPs (POMDPs). For such large domains, the value and Q-functions are sometimes complicated and difficult to approximate, even though there may be simple, compactly representable policies which perform very well. This observation has led to particular interest in direct policy search methods (e.g., [9, 8, 1]), which attempt to choose a good policy from some restricted class $\Pi$ of policies. In our setting, $\Pi = \{\pi_\theta : \theta \in \mathbb{R}^m\}$ is a class of policies smoothly parameterized by $\theta \in \mathbb{R}^m$. If the value of $\pi_\theta$ is differentiable in $\theta$, then gradient ascent methods may be used to find a locally optimal $\pi_\theta$. However, estimating values of $\pi_\theta$ (and the associated gradient) is often far from trivial. One simple method for estimating $\pi_\theta$'s value involves executing one or more Monte Carlo trajectories using $\pi_\theta$ and then taking the average empirical return (sketched below); cleverer algorithms executing single trajectories also allow gradient estimates [9, 1]. These methods have become a standard approach to policy search, and sometimes work fairly well. In this paper, we propose a somewhat different approach to this value/gradient estimation problem. Rather than estimating these quantities directly, we estimate the probability density over the states of the system induced by $\pi_\theta$ at different points in time. These time-slice densities completely determine the value of the policy $\pi_\theta$. While density estimation is not an easy problem, we can utilize existing approaches to density propagation [3, 5], which allow users to specify prior knowledge about the densities, and which have also been shown, both theoretically and empirically, to provide robust estimates for time-slice densities.
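For contrast with the density-based approach developed below, here is a sketch of the straightforward Monte Carlo estimator just mentioned; the generative-model interface (`step`, `reward`) is an assumption for illustration:

```python
import random

# Sketch of Monte Carlo policy evaluation: run trajectories under the
# stochastic policy pi and average the discounted empirical returns.
# pi(a, s) gives the action probability; step(s, a) samples a next state.
def mc_value(s0, pi, reward, step, actions, gamma=0.95, T=100, n_traj=1000):
    total = 0.0
    for _ in range(n_traj):
        s, ret, discount = s0, 0.0, 1.0
        for _ in range(T + 1):          # rewards at t = 0, ..., T
            ret += discount * reward(s)
            probs = [pi(a, s) for a in actions]
            a = random.choices(actions, weights=probs)[0]
            s = step(s, a)
            discount *= gamma
        total += ret
    return total / n_traj
```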
We show how direct policy search can be implemented using this approach in two very different settings of the planning problem: in the first, we have access to an explicit model of the system dynamics, allowing us to provide an explicit algebraic operator that implements the approximate density propagation process; in the second, we have access only to a generative model of the dynamics (which allows us only to sample from, but does not provide an explicit representation of, next-state distributions). We show how both of our techniques can be combined with gradient ascent in order to perform policy search, a somewhat subtle argument in the case of the sampling-based approach. We also present empirical results for both variants in complex domains.

2 Problem description

A Markov Decision Process (MDP) is a tuple $(S, s_0, A, R, P)$ where:¹ $S$ is a (possibly infinite) set of states; $s_0 \in S$ is a start state; $A$ is a finite set of actions; $R$ is a reward function $R : S \to [0, R_{\max}]$; and $P$ is a transition model $P : S \times A \to \Delta_S$, such that $P(s' \mid s, a)$ gives the probability of landing in state $s'$ upon taking action $a$ in state $s$.

¹We write rewards as $R(s)$ rather than $R(s, a)$, and assume a single start state rather than an initial-state distribution, only to simplify exposition; these and several other minor extensions are trivial.

A stochastic policy is a map $\pi : S \to \Delta_A$, where $\pi(a \mid s)$ is the probability of taking action $a$ in state $s$. There are many ways of defining a policy $\pi$'s "quality" or value. For a horizon $T$ and discount factor $\gamma$, the finite-horizon discounted value function $V_{T,\gamma}[\pi]$ is defined by
$$V_{0,\gamma}[\pi](s) = R(s); \qquad V_{t+1,\gamma}[\pi](s) = R(s) + \gamma \sum_a \pi(a \mid s) \sum_{s'} P(s' \mid s, a)\, V_{t,\gamma}[\pi](s').$$
For an infinite state space (here and below), the summation is replaced by an integral. We can now define several optimality criteria. The finite-horizon total reward with horizon $T$ is $V_T[\pi] = V_{T,1}[\pi](s_0)$. The infinite-horizon discounted reward with discount $\gamma < 1$ is $V_\gamma[\pi] = \lim_{T \to \infty} V_{T,\gamma}[\pi](s_0)$. The infinite-horizon average reward is $V_{\mathrm{avg}}[\pi] = \lim_{T \to \infty} \frac{1}{T} V_{T,1}[\pi](s_0)$, where we assume that the limit exists. Fix an optimality criterion $V$. Our goal is to find a policy that has a high value. As discussed, we assume we have a restricted set $\Pi$ of policies, and wish to select a good $\pi \in \Pi$. We assume that $\Pi = \{\pi_\theta \mid \theta \in \mathbb{R}^m\}$ is a set of policies parameterized by $\theta \in \mathbb{R}^m$, and that $\pi_\theta(a \mid s)$ is continuously differentiable in $\theta$ for each $s, a$. As a very simple example, we may have a one-dimensional state, two-action MDP with a "sigmoidal" $\pi_\theta$, such that the probability of choosing action $a_0$ at state $x$ is $\pi_\theta(a_0 \mid x) = 1/(1 + \exp(-\theta_1 - \theta_2 x))$ (sketched below). Note that this framework also encompasses cases where our family $\Pi$ consists of policies that depend only on certain aspects of the state. In particular, in POMDPs, we can restrict attention to policies that depend only on the observables. This restriction results in a subclass of stochastic memory-free policies. By introducing artificial "memory bits" into the process state, we can also define stochastic limited-memory policies [6]. Each $\theta$ has a value $V[\theta] = V[\pi_\theta]$, as specified above. To find the best policy in $\Pi$, we can search for the $\theta$ that maximizes $V[\theta]$. If we can compute or approximate $V[\theta]$, there are many algorithms that can be used to find a local maximum. Some, such as Nelder-Mead simplex search (not to be confused with the simplex algorithm for linear programs), require only the ability to evaluate the function being optimized at any point. If we can compute or estimate $V[\theta]$'s gradient with respect to $\theta$, we can also use a variety of (deterministic or stochastic) gradient ascent methods.
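The sigmoidal example above is easy to state in code; a minimal sketch (the function names are ours):

```python
import math

# Sketch of the sigmoidal two-action policy:
#   pi_theta(a0 | x) = 1 / (1 + exp(-theta1 - theta2 * x)).
def sigmoid_policy(theta1, theta2):
    def pi(a, x):
        p0 = 1.0 / (1.0 + math.exp(-theta1 - theta2 * x))
        return p0 if a == 0 else 1.0 - p0
    return pi

# Example: pi = sigmoid_policy(0.5, -1.0); pi(0, 2.0) is the probability
# of action a0 in state x = 2.0.
```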
3 Densities and value functions

Most optimization algorithms require some method for computing $V[\theta]$ for any $\theta$ (and sometimes also its gradient). In many real-life MDPs, however, doing so exactly is completely infeasible, due to the large or even infinite number of states. Here, we will consider an approach to estimating these quantities, based on a density-based reformulation of the value function expression. A policy $\pi$ induces a probability distribution over the states at each time $t$. Letting $\phi^{(0)}$ be the initial distribution (giving probability 1 to $s_0$), we define the time-slice distributions via the recurrence:
$$\phi^{(t+1)}(s') = \sum_s \sum_a \phi^{(t)}(s)\, \pi(a \mid s)\, P(s' \mid s, a) \qquad (1)$$
It is easy to verify that the standard notions of value defined earlier can be reformulated in terms of $\phi^{(t)}$; e.g., $V_{T,\gamma}[\pi](s_0) = \sum_{t=0}^{T} \gamma^t (\phi^{(t)} \cdot R)$, where $\cdot$ is the dot-product operation (equivalently, the expectation of $R$ with respect to $\phi^{(t)}$). Somewhat more subtly, for the case of infinite-horizon average reward, we have that $V_{\mathrm{avg}}[\pi] = \phi^{(\infty)} \cdot R$, where $\phi^{(\infty)}$ is the limiting distribution of (1), if one exists. This reformulation gives us an alternative approach to evaluating the value of a policy $\pi_\theta$: we first compute the time-slice densities $\phi^{(t)}$ (or $\phi^{(\infty)}$), and then use them to compute the value. Unfortunately, that modification, by itself, does not resolve the difficulty. Representing and computing probability densities over large or infinite spaces is often no easier than representing and computing value functions. However, several results [3, 5] indicate that representing and computing high-quality approximate densities may often be quite feasible. The general approach is an approximate density propagation algorithm, using time-slice distributions in some restricted family $\Xi$. For example, in continuous spaces, $\Xi$ might be the set of multivariate Gaussians. The approximate propagation algorithm modifies equation (1) to maintain the time-slice densities in $\Xi$. More precisely, for a policy $\pi_\theta$, we can view (1) as defining an operator $\Phi[\theta]$ that takes one distribution in $\Delta_S$ and returns another. For our current policy $\pi_{\theta_0}$, we can rewrite (1) as: $\phi^{(t+1)} = \Phi[\theta_0](\phi^{(t)})$. In most cases, $\Xi$ will not be closed under $\Phi$; approximate density propagation algorithms use some alternative operator $\hat{\Phi}$, with the properties that, for $\phi \in \Xi$: (a) $\hat{\Phi}(\phi)$ is also in $\Xi$, and (b) $\hat{\Phi}(\phi)$ is (hopefully) close to $\Phi(\phi)$. We use $\hat{\Phi}[\theta]$ to denote the approximation to $\Phi[\theta]$, and $\hat{\phi}^{(t)}$ to denote $(\hat{\Phi}[\theta])^{t}(\phi^{(0)})$. If $\hat{\Phi}$ is selected carefully, it is often the case that $\hat{\phi}^{(t)}$ is close to $\phi^{(t)}$. Indeed, a standard contraction analysis for stochastic processes can be used to show:

Proposition 1. Assume that for all $t$, $\|\Phi(\hat{\phi}^{(t)}) - \hat{\Phi}(\hat{\phi}^{(t)})\|_1 \le \varepsilon$. Then there exists some constant $\lambda$ such that for all $t$, $\|\phi^{(t)} - \hat{\phi}^{(t)}\|_1 \le \varepsilon/\lambda$.

In some cases, $\lambda$ might be arbitrarily small, in which case the proposition is meaningless. However, there are many systems where $\lambda$ is reasonable (and independent of $\varepsilon$) [3]. Furthermore, empirical results also show that approximate density propagation can often track the exact time-slice distributions quite accurately. Approximate tracking can now be applied to our planning task.
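On a small finite MDP, the reformulation above can be computed exactly; the following sketch (with an assumed nested-list representation `P[s][a][s2]` for the transition model) propagates the time-slice densities of equation (1) and accumulates the discounted value:

```python
# Sketch: exact time-slice density propagation and value computation.
#   phi(t+1)[s'] = sum_s sum_a phi(t)[s] * pi(a, s) * P[s][a][s']
#   V_{T,gamma}   = sum_{t=0}^{T} gamma^t * (phi(t) . R)
def propagate(phi, pi, P, actions):
    n = len(phi)
    nxt = [0.0] * n
    for s in range(n):
        for a in actions:
            for s2 in range(n):
                nxt[s2] += phi[s] * pi(a, s) * P[s][a][s2]
    return nxt

def value_via_densities(phi0, pi, P, R, actions, gamma, T):
    phi, V, discount = list(phi0), 0.0, 1.0
    for _ in range(T + 1):
        V += discount * sum(p * r for p, r in zip(phi, R))
        phi = propagate(phi, pi, P, actions)
        discount *= gamma
    return V
```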
Given an optimality criterion $V$ expressed with the $\phi^{(t)}$s, we define an approximation $\hat{V}$ to it by replacing each $\phi^{(t)}$ with $\hat{\phi}^{(t)}$; e.g., $\hat{V}_{T,\gamma}[\pi](s_0) = \sum_{t=0}^{T} \gamma^t\, \hat{\phi}^{(t)} \cdot R$. Accuracy guarantees on approximate tracking induce comparable guarantees on the value approximation; from this, guarantees on the performance of a policy $\pi_{\hat{\theta}}$ found by optimizing $\hat{V}$ are also possible:

Proposition 2. Assume that, for all $t$, we have that $\|\phi^{(t)} - \hat{\phi}^{(t)}\|_1 \le \delta$. Then for each fixed $T$, $\gamma$: $|V_{T,\gamma}[\pi](s_0) - \hat{V}_{T,\gamma}[\pi](s_0)| = O(\delta)$.

Proposition 3. Let $\theta^* = \arg\max_\theta V[\theta]$ and $\hat{\theta} = \arg\max_\theta \hat{V}[\theta]$. If $\max_\theta |V[\theta] - \hat{V}[\theta]| \le \varepsilon$, then $V[\theta^*] - V[\hat{\theta}] \le 2\varepsilon$.

4 Differentiating approximate densities

In this section we discuss two very different techniques for maintaining an approximate density $\hat{\phi}^{(t)}$ using an approximate propagation operator $\hat{\Phi}$, and show when and how they can be combined with gradient ascent to perform policy search. In general, we will assume that $\Xi$ is a family of distributions parameterized by $\xi \in \mathbb{R}^\ell$. For example, if $\Xi$ is the set of $d$-dimensional multivariate Gaussians with diagonal covariance matrices, $\xi$ would be a $2d$-dimensional vector, specifying the mean vector and the covariance matrix's diagonal. Now, consider the task of doing gradient ascent over the space of policies, using some optimality criterion $\hat{V}$, say $\hat{V}_{T,\gamma}[\theta]$. Differentiating it relative to $\theta$, we get $\nabla_\theta \hat{V}_{T,\gamma}[\theta] = \sum_{t=0}^{T} \gamma^t \frac{\partial \hat{\phi}^{(t)}}{\partial \theta} \cdot R$. To avoid introducing new notation, we also use $\hat{\phi}^{(t)}$ to denote the associated vector of parameters $\xi \in \mathbb{R}^\ell$. These parameters are a function of $\theta$. Hence, the internal gradient term is represented by an $\ell \times m$ Jacobian matrix, with entries representing the derivative of a parameter $\xi_i$ relative to a parameter $\theta_j$. This gradient can be computed using a simple recurrence, based on the chain rule for derivatives:
$$\frac{\partial \hat{\phi}^{(t+1)}}{\partial \theta} = \frac{\partial \hat{\Phi}[\theta](\hat{\phi}^{(t)})}{\partial \theta} + \frac{\partial \hat{\Phi}[\theta](\hat{\phi}^{(t)})}{\partial \hat{\phi}^{(t)}} \cdot \frac{\partial \hat{\phi}^{(t)}}{\partial \theta} \qquad (2)$$
The first summand (an $\ell \times m$ Jacobian) is the derivative of the transition operator $\hat{\Phi}$ relative to the policy parameters $\theta$. The second is a product of two terms: the derivative of $\hat{\Phi}$ relative to the distribution parameters, and the result of the previous step in the recurrence.

4.1 Deterministic density propagation

Consider a transition operator $\Phi$ (for simplicity, we omit the dependence on $\theta$). The idea in this approach is to try to get $\hat{\Phi}(\phi)$ to be as close as possible to $\Phi(\phi)$, subject to the constraint that $\hat{\Phi}(\phi) \in \Xi$. Specifically, we define a projection operator $\Gamma$ that takes a distribution $\psi$ not in $\Xi$, and returns a distribution in $\Xi$ which is closest (in some sense) to $\psi$. We then define $\hat{\Phi}(\phi) = \Gamma(\Phi(\phi))$. In order to ensure that gradient descent applies in this setting, we need only ensure that $\Gamma$ and $\Phi$ are differentiable functions. Clearly, there are many instantiations of this idea for which this assumption holds. We provide two examples. Consider a continuous-state process with nonlinear dynamics, where $\Phi$ is a mixture of conditional linear Gaussians. We can define $\Xi$ to be the set of multivariate Gaussians. The operator $\Gamma$ takes a distribution (a mixture of Gaussians) $\psi$ and computes its mean and covariance matrix. This can be easily computed from $\psi$'s parameters using simple differentiable algebraic operations; a minimal moment-matching sketch follows.
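For the Gaussian example just given, the projection $\Gamma$ is plain moment matching; here is a one-dimensional sketch (the multivariate case replaces the scalars with a mean vector and covariance matrix):

```python
# Sketch of the projection Gamma for 1-D: collapse a Gaussian mixture
# (weights w_k, means m_k, variances v_k) onto the single Gaussian with
# the same overall mean and variance. Every operation is differentiable
# algebra in the mixture parameters.
def project_to_gaussian(weights, means, variances):
    z = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / z
    # law of total variance: E[v_k] + Var[m_k]
    var = sum(w * (v + (m - mean) ** 2)
              for w, m, v in zip(weights, means, variances)) / z
    return mean, var
```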
In a DBN, the state space is defined as a set of possible assignments x to a set of random variables Xl , ' .. ,Xn . The transition model P(x' I x) is described using a Bayesian network fragment over the nodes {Xl, ' " ,Xn , X{, .. . ,X~}. A node X i represents xft) and X: represents xft+1). The nodes X i in the network are forced to be roots (i.e., have no parents), and are not associated with conditional probability distributions. Each node X: is associated with a conditional probability distribution (CPO), which specifies P(X: I Parents(XD) . The transition probability P(X' I X) is defined as A. Y. Ng, R. Parr and D. Koller 1026 11 P(X: I Parents(Xf)). OBNs support a compact representation of complex transition models in MOPs [2]. We can extend the OBN to encode the behavior of an MOP with a stochastic policy 7l' by introducing a new random variable A representing the action taken at the current time. The parents of A will be those variables in the state on which the action is allowed to depend. The CPO of A (which may be compactly represented with function approximation) is the distribution over actions defined by 7l' for the different contexts. In discrete OBNs, the number of states grows exponentially with the number of state variables, making an explicit representation of a joint distribution impractical. The algorithm of [3] defines:::: to be a set of distributions defined compactly as a set of marginals over smaller clusters of variables. In the simplest example, :::: is the set of distributions where XI, ... ,Xn are independent. The parameters ~ defining a distribution in :::: are the parameters of n multinomials. The projection operator r simply marginalizes distributions onto the individual variables, and is differentiable. One useful corollary of [3]'s analysis is that the decay rate of a structured ~ over:::: can often be much higher than the decay rate of ~, so that multiple applications of ~ can converge very rapidly to a stationary distribution; this property is very useful when approximating ?(oo) to optimize relative to Vavg . 4.2 Stochastic density propagation In many settings, the assumption that we have direct access to ~ is too strong. A weaker assumption is that we have access to a generative model - a black box from which we can generate samples with the appropriate distribution; i.e., for any s, a, we can generate samples s' from P(s' I s, a). In this case, we use a different approximation scheme, based on [5]. The operator ~ is a stochastic operator. It takes the distribution ?, and generates some number of random state samples Si from it. Then, for each Si and each action a, we generate a sample s~ from the transition distribution P(? I Si, a). This sample (Si' ai, sD is then assigned a weight Wi = 7l'8(ai I Si), to compensate for the fact that not all actions would have been selected by 7l'e with equal probability. The resulting set of N samples s~ weighted by the WiS is given as input to a statistical density estimator, which uses it to estimate a new density ?'. We assume that the density estimation procedure is a differentiable function of the weights, often a reasonable assumption. Clearly, this <1> can be used to compute ?(t) for any t, and thereby approximate 7l'e'S value. However, the gradient computation for ~ is far from trivial. In particular, to compute the derivative 8<1> /8?, we must consider <1>'s behavior for some perturbed ?It) other than the one (say, ?~t) to which it was applied originally. 
However, the gradient computation for $\hat{\Phi}$ is far from trivial. In particular, to compute the derivative $\partial\hat{\Phi}/\partial\phi$, we must consider $\hat{\Phi}$'s behavior for some perturbed $\hat{\phi}_1^{(t)}$ other than the one (say, $\hat{\phi}_0^{(t)}$) to which it was applied originally. In this case, an entirely different set of samples would probably have been generated, possibly leading to a very different density. It is hard to see how one could differentiate the result of this perturbation. We propose an alternative solution based on importance sampling. Rather than change the samples, we modify their weights to reflect the change in the probability that they would be generated. Specifically, when fitting $\hat{\phi}^{(t+1)}$, we now define a sample $(s_i, a_i, s'_i)$'s weight to be
$$w_i = \frac{\hat{\phi}_1^{(t)}(s_i)\, \pi_\theta(a_i \mid s_i)}{\hat{\phi}_0^{(t)}(s_i)} \qquad (3)$$
We can now compute $\hat{\Phi}$'s derivatives at $(\theta_0, \hat{\phi}_0^{(t)})$ with respect to any of its parameters, as required in (2). Let $\zeta$ be the vector of parameters $(\theta, \xi)$. Using the chain rule, we have
$$\frac{\partial \hat{\Phi}[\theta](\phi)}{\partial \zeta} = \frac{\partial \hat{\Phi}[\theta](\phi)}{\partial w} \cdot \frac{\partial w}{\partial \zeta}.$$
The first term is the derivative of the estimated density relative to the sample weights (an $\ell \times N$ matrix). The second is the derivative of the weights relative to the parameter vector (an $N \times (m + \ell)$ Jacobian), which can easily be computed from (3).
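The reweighting in (3), reconstructed above from context, amounts to a few lines; in this sketch `phi0_pdf` and `phi1_pdf` are assumed density evaluators for the original and perturbed time-slice estimates, and `pi1` is the perturbed policy:

```python
# Sketch of equation (3): reuse samples drawn under the original
# (theta_0, phi_0) to evaluate the operator at perturbed parameters,
# adjusting only the importance weights. The reweighted samples are fed
# to the same (weight-differentiable) density estimator.
def reweight(samples, phi0_pdf, phi1_pdf, pi1):
    return [phi1_pdf(s) * pi1(a, s) / phi0_pdf(s)
            for (s, a, s2) in samples]
```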
[Figure 1: Driving task: (a) DBN model; (b) policy-search/optimization results, plotted against the number of function evaluations (with 1 s.e.)]

5 Experimental results

We tested our approach in two very different domains. The first is an average-reward DBN-MDP problem (shown in Figure 1(a)), where the task is to find a policy for changing lanes when driving on a moderately busy two-lane highway with a slow lane and a fast lane. The model is based on the BAT DBN of [4], the result of a separate effort to build a good model of driver behavior. For simplicity, we assume that the car's speed is controlled automatically, so we are concerned only with choosing the Lateral Action: change lane or drive straight. The observables are shown in the figure: LClr and RClr are the clearance to the next car in each lane (close, medium or far). The agent pays a cost of 1 for each step it is "blocked" by (meaning driving close to) the car to its front; it pays a penalty of 0.2 per step for staying in the fast lane. Policies are specified by action probabilities for the 18 possible observation combinations. Since this is a reasonably small number of parameters, we used the simplex search algorithm described earlier to optimize $\hat{V}[\theta]$. The process mixed quite quickly, so $\hat{\phi}^{(20)}$ was a fairly good approximation to $\hat{\phi}^{(\infty)}$. The family $\Xi$ used a fully factored representation of the joint distribution, except for a single cluster over the three observables. Evaluations are averages of 300 Monte Carlo trials of 400 steps each. Figure 1(b) shows the estimated and actual average rewards as the policy parameters are evolved over time. The algorithm improved quickly, converging to a very natural policy, with the car generally staying in the slow lane and switching to the fast lane only when necessary to overtake. In our second experiment, we used the bicycle simulator of [7]. There are 9 actions, corresponding to leaning left/center/right and applying negative/zero/positive torque to the handlebar; the six-dimensional state used in [7] includes variables for the bicycle's tilt angle and orientation, and the handlebar's angle. If the bicycle tilt exceeds $\pi/15$, it falls over and enters an absorbing state. We used policy search over the following space: we selected twelve (simple, manually chosen but not fine-tuned) features of each state; actions were chosen with a softmax, so that the probability of taking action $a_i$ is $\exp(x \cdot w_i)/\sum_j \exp(x \cdot w_j)$ (a sketch appears at the end of this section). As the problem only comes with a generative model of the complicated, nonlinear, noisy bicycle dynamics, we used the stochastic density propagation version of our algorithm, with (stochastic) gradient ascent. Each distribution in $\Xi$ was a mixture of a singleton point consisting of the absorbing state, and a 6-D multivariate Gaussian. The first task in this domain was to balance reliably on the bicycle. Using a horizon of $T = 200$, discount $\gamma = 0.995$, and 600 samples $s_i$ per density-propagation step, this was quickly achieved. Next, trying to learn to ride to a goal² 10m in radius and 1000m away, it also succeeded in finding policies that do so reliably. Formal evaluation is difficult, but this is a sufficiently hard problem that even finding a solution can be considered a success. There was also some slight parameter sensitivity (and the best results were obtained only with $\hat{\phi}^{(0)}$ picked/fit with some care, using in part data from earlier and less successful trials, to be "representative" of a fairly good rider's state distribution), but using this algorithm, we were able to obtain solutions with median riding distances under 1.1 km to the goal. This is significantly better than the results of [7] (obtained in the learning rather than planning setting, and using a value-function approximation solution), which reported much larger riding distances to the goal, of about 7 km, and a single "best-ever" trial of about 1.7 km.
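The softmax action selection used for the bicycle policies can be sketched as follows (one weight vector per action; the max-subtraction for numerical stability is our addition):

```python
import math

# Sketch of softmax action selection: pi(a_i | x) proportional to
# exp(x . w_i), with one learned weight vector w_i per action.
def softmax_policy(action_weights):
    def pi(i, x):
        scores = [sum(xj * wj for xj, wj in zip(x, w))
                  for w in action_weights]
        m = max(scores)                       # numerical stability
        exps = [math.exp(s - m) for s in scores]
        return exps[i] / sum(exps)
    return pi
```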
6 Conclusions

We have presented two new variants of algorithms for performing direct policy search in the deterministic and stochastic density propagation settings. Our empirical results have also shown these methods working well on two large problems.

Acknowledgements. We warmly thank Kevin Murphy for use of and help with his Bayes Net Toolbox, and Jette Randløv and Preben Alstrøm for use of their bicycle simulator. A. Ng is supported by a Berkeley Fellowship. The work of D. Koller and R. Parr is supported by the ARO-MURI program "Integrated Approach to Intelligent Systems", DARPA contract DACA76-93-C-0025 under subcontract to IET, Inc., ONR contract N66001-97-C-8554 under DARPA's HPKB program, the Sloan Foundation, and the Powell Foundation.

References
[1] L. Baird and A.W. Moore. Gradient descent for general reinforcement learning. In NIPS 11, 1999.
[2] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. J. Artificial Intelligence Research, 1999.
[3] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. UAI, pages 33-42, 1998.
[4] J. Forbes, T. Huang, K. Kanazawa, and S.J. Russell. The BATmobile: Towards a Bayesian automated taxi. In Proc. IJCAI, 1995.
[5] D. Koller and R. Fratkina. Using learning for approximation in stochastic processes. In Proc. ICML, pages 287-295, 1998.
[6] N. Meuleau, L. Peshkin, K.-E. Kim, and L.P. Kaelbling. Learning finite-state controllers for partially observable environments. In Proc. UAI, 1999.
[7] J. Randløv and P. Alstrøm. Learning to drive a bicycle using reinforcement learning and shaping. In Proc. ICML, 1998.
[8] J.K. Williams and S. Singh. Experiments with an algorithm which learns stochastic memoryless policies for POMDPs. In NIPS 11, 1999.
[9] R.J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

²For these experiments, we found learning could be accomplished faster with the simulator's integration delta-time constant tripled for training. This and "shaping" reinforcements (chosen to reward progress made towards the goal) were both used, and training was with the bike "infinitely distant" from the goal. For this and the balancing experiments, sampling from the fallen/absorbing-state portion of the distributions $\hat{\phi}^{(t)}$ is obviously an inefficient use of samples, so all samples were drawn from the non-absorbing-state portion (i.e., the Gaussian, also with its tails corresponding to tilt angles greater than $\pi/15$ truncated), and weighted accordingly relative to the absorbing-state portion.
Churn Reduction in the Wireless Industry

Michael C. Mozer*⁺, Richard Wolniewicz*, David B. Grimes*⁺, Eric Johnson*, Howard Kaushansky*
* Athene Software, 2060 Broadway, Suite 300, Boulder, CO 80302
⁺ Department of Computer Science, University of Colorado, Boulder, CO 80309-0430

Abstract

Competition in the wireless telecommunications industry is rampant. To maintain profitability, wireless carriers must control churn, the loss of subscribers who switch from one carrier to another. We explore statistical techniques for churn prediction and, based on these predictions, an optimal policy for identifying customers to whom incentives should be offered to increase retention. Our experiments are based on a data base of nearly 47,000 U.S. domestic subscribers, and include information about their usage, billing, credit, application, and complaint history. We show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, churn prediction and remediation can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts.

Competition in the wireless telecommunications industry is rampant. As many as seven competing carriers operate in each market. The industry is extremely dynamic, with new services, technologies, and carriers constantly altering the landscape. Carriers announce new rates and incentives weekly, hoping to entice new subscribers and to lure subscribers away from the competition. The extent of rivalry is reflected in the deluge of advertisements for wireless service in the daily newspaper and other mass media. The United States had 69 million wireless subscribers in 1998, roughly 25% of the population. Some markets are further developed; for example, the subscription rate in Finland is 53%. Industry forecasts are for a U.S. penetration rate of 48% by 2003. Although there is significant room for growth in most markets, the industry growth rate is declining and competition is rising. Consequently, it has become crucial for wireless carriers to control churn: the loss of customers who switch from one carrier to another. At present, domestic monthly churn rates are 2-3% of the customer base. At an average cost of $400 to acquire a subscriber, churn cost the industry nearly $6.3 billion in 1998; the total annual loss rose to nearly $9.6 billion when lost monthly revenue from subscriber cancellations is considered (Luna, 1998). It costs roughly five times as much to sign on a new subscriber as to retain an existing one. Consequently, for a carrier with 1.5 million subscribers, reducing the monthly churn rate from 2% to 1% would yield an increase in annual earnings of at least $54 million, and an increase in shareholder value of approximately $150 million. (Estimates are even higher when lost monthly revenue is considered; see Fowlkes, Madan, Andrew, & Jensen, 1999; Luna, 1998.) The goal of our research is to evaluate the benefits of predicting churn using techniques from statistical machine learning. We designed models that predict the probability
Data from a subscriber-on which we elaborate in the next section-is fed into three components which estimate: the likelihood that the subscriber will churn, the profitability (expected monthly revenue) of the subscriber, and the subscriber's credit risk. Profitability and credit risk determine how valuable the subscriber is to the carrier, and hence influences how much the carrier should be willing to spend to retain the subscriber. Based on the predictions of subscriber behavior, a decision making component determines an intervention strategy-whether a subscriber should be contacted, and if so, what incentives should be offered to appease them. We adopt a decision-theoretic approach which aims to maximize the expected profit to the carrier. In the present work, we focus on churn prediction and utilize simple measures of subscriber profitability and credit risk. However, current modeling efforts are directed at more intelligent models of profitability and credit risk. 2 DATASET The subscriber data used for our experiments was provided by a major wireless carrier. The carrier does not want to be identified, as churn rates are confidential. The carrier provided a data base of 46,744 primarily business subscribers, all of whom had multiple services. (Each service corresponds to a cellular telephone or to some other service, such as voice messaging or beeper capability.) All subscribers were from the same region of the United States, about 20% in major metropolitan areas and 80% more geographically distributed. The total revenue for all subscribers in the data base was $14 million in October 1998. The average revenue per subscriber was $234. We focused on multi-service subscribers, because they provide significantly more revenue than do typical single-service subscribers. When subscribers are on extended contracts, churn prediction is relatively easy: it seldom occurs during the contract period, and often occurs when the contract comes to an end. Consequently, all subscribers in our data base were month-to-month, requiring the use of more subtle features than contract termination date to anticipate churn. The subscriber data was extracted from the time interval October through December, 1998. Based on these data, the task was to predict whether a subscriber would churn in January or February 1999. The carrier provided their internal definition of churn, which was based on the closing of all services held by a subscriber. From this definition, 2,876 of the subscribers active in October through December churned-6.2% of the data base. subscriber data .. --.. subscriber churn prediction subscriber profitability estimation --.... . decision making --- intervention strategy subscriber credit risk estimation FIGURE 1. The framework for churn prediction and profitability maximization 937 Churn Reduction in the Wireless Industry 2.1 INPUT FEATURES Ultimately, churn occurs because subscribers are dissatisfied with the price or quality of service, usually as compared to a competing carrier. The main reasons for subscriber dissatisfaction vary by region and over time. Table 1 lists important factors that influence subscriber satisfaction, as well as the relative importance of the factors (J. D. Power and Associates, 1998). In the third column, we list the type of information required for determining whether a particular factor is likely to be influencing a subscriber. We categorize the types of information as follows. Network. 
Network. Call detail records (date, time, duration, and location of all calls), dropped calls (calls lost due to lack of coverage or available bandwidth), and quality of service data (interference, poor coverage).

Billing. Financial information appearing on a subscriber's bill (monthly fee, additional charges for roaming and additional minutes beyond the monthly prepaid limit).

Customer Service. Calls to the customer service department and their resolutions.

Application for Service. Information from the initial application for service, including contract details, rate plan, handset type, and credit report.

Market. Details of rate plans offered by the carrier and its competitors, recent entry of competitors into the market, advertising campaigns, etc.

Demographics. Geographic and population data of a given region.

A subset of these information sources was used in the present study. Most notably, we did not utilize market information, because the study was conducted over a fairly short time interval during which the market did not change significantly. More important, the market forces were fairly uniform in the various geographic regions from which our subscribers were selected. Also, we were unable to obtain information about the subscriber equipment (age and type of handset used). The information sources listed above were distributed over three distinct data bases maintained by the carrier. The data bases contained thousands of fields, from which we identified 134 variables associated with each subscriber which we conjectured might be linked to churn. The variables included: subscriber location, credit classification, customer classification (e.g., corporate versus retail), number of active services of various types, beginning and termination dates of various services, avenue through which services were activated, monthly charges and usage, number, dates and nature of customer service calls, number of calls made, and number of abnormally terminated calls.

TABLE 1. Factors influencing subscriber satisfaction
Factor                                | Importance | Nature of data required for prediction
call quality                          | 21%        | network
pricing options                       | 18%        | market, billing
corporate capability                  | 17%        | market, customer service
customer service                      | 17%        | customer service
credibility / customer communications | 10%        | market, customer service
roaming / coverage                    |  7%        | network
handset                               |  4%        | application
billing                               |  3%        | billing
cost of roaming                       |  3%        | market, billing

2.2 DATA REPRESENTATION

As all statisticians and artificial intelligence researchers appreciate, representation is key. A significant portion of our effort involved working with domain experts in the wireless telecommunications industry to develop a representation of the data that highlights and makes explicit those features which, in the experts' judgement, were highly related to churn. To evaluate the benefit of carefully constructing the representation, we performed studies using both a naive and a sophisticated representation. The naive representation mapped the 134 variables to a vector of 148 elements in a straightforward manner. Numerical variables, such as the length of time a subscriber had been with the carrier, were translated to an element of the representational vector which was linearly related to the variable value.
We imposed lower and upper limits on the variables, so as to suppress irrelevant variation and so as not to mask relevant variation by too large a dynamic range; vector elements were restricted to lie between -4 and +4 standard deviations of the variable. One-of-n discrete variables, such as credit classification, were translated into an n-dimensional subvector with one nonzero element. The sophisticated representation incorporated the domain knowledge of our experts to produce a 73-element vector encoding attributes of the subscriber. This representation collapsed across some of the variables which, in the judgement of the experts, could be lumped together (e.g., different types of calls to the customer service department), expanded on others (e.g., translating the scalar length-of-time-with-carrier to a multidimensional basis-function representation, where the receptive-field centers of the basis functions were suggested by the domain experts), and performed transformations of other variables (e.g., ratios of two variables, or time-series regression parameters).

3 PREDICTORS

The task is to predict the probability of churn from the vector encoding attributes of the subscriber. We compared the churn-prediction performance of two classes of models: logit regression and a nonlinear neural network with a single hidden layer and weight decay (Bishop, 1995). The neural network model class was parameterized by the number of units in the hidden layer and the weight decay coefficient. We originally anticipated that we would require some model selection procedure, but it turned out that the results were remarkably insensitive to the choice of the two neural network parameters: weight decay up to a point seemed to have little effect, and beyond that point it was harmful, and varying the number of hidden units from 5 to 40 yielded nearly identical performance. We likely were not in a situation where overfitting was an issue, due to the large quantity of data available; hence increasing the model complexity (either by increasing the number of hidden units or decreasing weight decay) had little cost. Rather than selecting a single neural network model, we averaged the predictions of an ensemble of models which varied in the two model parameters. The average was uniformly weighted.

4 METHODOLOGY

We constructed four predictors by combining each of the two model classes (logit regression and neural network) with each of the two subscriber representations (naive and sophisticated). For each predictor, we performed a ten-fold cross-validation study, utilizing the same splits across predictors. In each split of the data, the ratio of churn to no-churn examples in the training and validation sets was the same as in the overall data set. For the neural net models, the input variables were centered by subtracting the means and scaled by dividing by their standard deviations. Input values were restricted to lie in the range [-4, +4]. Networks were trained until they reached a local minimum in error.

5 RESULTS AND DISCUSSION

5.1 CHURN PREDICTION

For each of the four predictors, we obtain a predicted probability of churn for each subscriber in the data set by merging the test sets from the ten data splits. Because decision making ultimately requires a "churn" or "no churn" prediction, the continuous probability measure must be thresholded to obtain a discrete predicted outcome.
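As a concrete sketch of this thresholding step, and of the hit/rejection trade-off analyzed next, consider the following (a minimal sketch: the function and variable names are ours, NumPy is assumed, and the synthetic scores are for illustration only; the paper does not specify an implementation).

    import numpy as np

    def hit_and_rejection_rates(p_churn, actually_churned, threshold):
        # Discrete prediction: flag a subscriber as a churner when the
        # predicted churn probability reaches the threshold.
        flagged = p_churn >= threshold
        hit_rate = np.mean(flagged[actually_churned])          # churners caught
        rejection_rate = np.mean(~flagged[~actually_churned])  # nonchurners passed over
        return hit_rate, rejection_rate

    # Sweeping the threshold traces out one ROC curve per predictor.
    rng = np.random.default_rng(0)
    churned = rng.random(10_000) < 0.062   # roughly the 6.2% base churn rate
    scores = np.clip(0.25 * churned + 0.2 * rng.random(10_000), 0.0, 1.0)
    curve = [hit_and_rejection_rates(scores, churned, t)
             for t in np.linspace(0.0, 1.0, 21)]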
For a given threshold, we determine the proportion of churners who are correctly identified as churners (the hit rate), and the proportion of nonchurners who are correctly identified as nonchurners (the rejection rate). Plotting the hit rate against the rejection rate for various thresholds, we obtain an ROC curve (Green & Swets, 1966). In Figure 2, the closer a curve comes to the upper right corner of the graph (100% correct prediction of churn and 100% correct prediction of nonchurn) the better the predictor is at discriminating churn from nonchurn. The dotted diagonal line indicates no discriminability: if a predictor randomly classifies x% of cases as churn, it is expected to obtain a hit rate of x% and a rejection rate of (100-x)%. As the figure indicates, discriminability is clearly higher for the sophisticated representation than for the naive representation. Further, for the sophisticated representation at least, the nonlinear neural net outperforms the logit regression. It appears that the neural net can better exploit nonlinear structure in the sophisticated representation than in the naive representation, perhaps due to the basis-function representation of key variables. Although the four predictors appear to yield similar curves, they produce large differences in estimated cost savings. We describe how we estimate cost savings next.

5.2 DECISION MAKING

Based on a subscriber's predicted churn probability, we must decide whether to offer the subscriber some incentive to remain with the carrier, which will presumably reduce the likelihood of churn. The incentive will be offered to any subscriber whose churn probability is above a certain threshold. The threshold will be selected to maximize the expected cost savings to the carrier; we will refer to this as the optimal decision-making policy. The cost savings will depend not only on the discriminative ability of the predictor, but also on: the cost to the carrier of providing the incentive, denoted C_i (the cost to the carrier may be much lower than the value to the subscriber, e.g., when air time is offered); the time horizon over which the incentive has an effect on the subscriber's behavior; the probability P_i that a subscriber who would otherwise churn still leaves within the time horizon despite the incentive (so that the retention rate achieved by the incentive is 1 - P_i); and the lost-revenue cost that results when a subscriber churns, C_l.

[Figure 2 plots the four ROC curves, with % churn identified (hit rate) on the x axis; legend: neural net / sophisticated, logit regression / sophisticated, neural net / naive, logit regression / naive.] FIGURE 2. Test-set performance for the four predictors. Each curve shows, for various thresholds, the ability of a predictor to correctly identify churn (x axis) and nonchurn (y axis). The more bowed a curve, the better able a predictor is at discriminating churn from nonchurn.

We assume a time horizon of six months. We also assume that the lost revenue as a result of churn is the average subscriber bill over the time horizon, along with a fixed cost of $500 to acquire a replacement subscriber.
(This acquisition cost is higher than the typical cost we stated earlier because subscribers in this data base are high valued, and often must be replaced with multiple low-value subscribers to achieve the same revenue.) To estimate cost savings, the parameters C_i, P_i, and C_l are combined with four statistics obtained from a predictor:

N(pL,aL): number of subscribers who are predicted to leave (churn) and who actually leave barring intervention
N(pS,aL): number of subscribers who are predicted to stay (nonchurn) and who actually leave barring intervention
N(pL,aS): number of subscribers who are predicted to leave and who actually stay
N(pS,aS): number of subscribers who are predicted to stay and who actually stay

Given these statistics, the net cost to the carrier of performing no intervention is:

net(no intervention) = [ N(pL,aL) + N(pS,aL) ] C_l

This equation says that whether or not churn is predicted, the subscriber will leave, and the cost per subscriber will be C_l. The net cost of providing an incentive to all subscribers who are predicted to churn can also be estimated:

net(incentive) = [ N(pL,aL) + N(pL,aS) ] C_i + [ P_i N(pL,aL) + N(pS,aL) ] C_l

This equation says that the cost of offering the incentive, C_i, is incurred for all subscribers who are predicted to churn, but the lost-revenue cost decreases by a fraction (1 - P_i) for the subscribers who are correctly predicted to churn. The savings to the carrier as a result of offering incentives based on the churn predictor is then

savings per churnable subscriber = [ net(no intervention) - net(incentive) ] / [ N(pL,aL) + N(pS,aL) ]

The contour plots in Figure 3 show expected savings per churnable subscriber, for a range of values of C_i, P_i, and C_l, based on the optimal policy and the sophisticated neural net predictor. Each plot assumes a different subscriber retention rate (= 1 - P_i) given intervention. The "25% retention rate" graph supposes that 25% of the churning subscribers who are offered an incentive will decide to remain with the carrier over the time horizon of six months. For each plot, the cost of intervention (C_i) is varied along the x-axis, and the average monthly bill is varied along the y-axis. (The average monthly bill is converted to lost revenue, C_l, by computing the total bill within the time horizon and adding the subscriber acquisition cost.) The shading of a region in the plot indicates the expected savings assuming the specified retention rate is achieved by offering the incentive. The grey-level bar to the right of each plot translates the shading into dollar savings per subscriber who will churn barring intervention. Because the cost of the incentive is factored into the savings estimate, the estimate is actually the net return to the carrier. The white region in the lower right portion of each graph is the region in which no cost savings will be obtained. As the graphs clearly show, if the cost of the incentive needed to achieve a certain retention rate is low and the cost of lost revenue is high, significant per-subscriber savings can be obtained. As one might suspect in examining the plots, what's important for determining per-subscriber savings is the ratio of the incentive cost to the average monthly bill. The plots clearly show that for a wide range of assumptions concerning the average monthly bill, incentive cost, and retention rate, a significant cost savings is realized. The plots assume that all subscribers identified by the predictor can be contacted and offered the incentive.
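These three equations are straightforward to compute; below is a minimal sketch (identifier names and the example counts are ours, not the paper's).

    def savings_per_churnable(n_pl_al, n_ps_al, n_pl_as,
                              cost_incentive, cost_lost, p_i):
        # n_pl_al: predicted leave, actually leave (barring intervention)
        # n_ps_al: predicted stay,  actually leave (barring intervention)
        # n_pl_as: predicted leave, actually stay
        # p_i: probability a churnable subscriber still leaves despite the
        #      incentive, so the retention rate is 1 - p_i.
        churnable = n_pl_al + n_ps_al
        net_no_intervention = churnable * cost_lost
        net_incentive = ((n_pl_al + n_pl_as) * cost_incentive
                         + (p_i * n_pl_al + n_ps_al) * cost_lost)
        return (net_no_intervention - net_incentive) / churnable

    # Hypothetical counts, using the 35%-retention scenario discussed below:
    # a $75 incentive, and lost revenue of six months at $234/month plus the
    # $500 replacement cost.
    example = savings_per_churnable(n_pl_al=300, n_ps_al=100, n_pl_as=200,
                                    cost_incentive=75.0,
                                    cost_lost=6 * 234.0 + 500.0,
                                    p_i=0.65)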
If only some fraction F of all subscribers are contacted, then the estimated savings indicated by the plot should be multiplied by F. To pin down a likely scenario, it is reasonable to assume that 50% of subscribers can be contacted, and that 35% of those contacted will be retained by offering an incentive that costs the carrier $75; in our data base, the average monthly bill is $234.

[Figure 3 shows four contour plots of expected savings, one per assumed retention rate (25%, 35%, 50%, 75%), with intervention cost ($) on the x-axis and average monthly bill ($) on the y-axis.] FIGURE 3. Expected savings to the carrier per churnable subscriber, under a variety of assumptions concerning intervention cost, average monthly bill of the subscriber, and retention rate that will be achieved by offering an incentive to a churnable subscriber.

Under this scenario, the expected savings to the carrier, above and beyond recovering the incentive cost, is $93 based on the sophisticated neural net predictor. In contrast, the expected savings is only $47 based on the naive neural net predictor, and $81 based on the sophisticated logistic regression model. As we originally conjectured, both the nonlinearity of the neural net and the bias provided by the sophisticated representation are adding value to the predictions. Our ongoing research involves extending these initial results in several directions. First, we have confirmed our positive results with data from a different time window, and for test data from a later time window than the training data (as would be necessary in real-world usage). Second, we have further tuned and augmented our sophisticated representation to obtain higher prediction accuracy, and are now awaiting additional data to ensure the result replicates. Third, we are applying a variety of techniques, including sensitivity analysis and committee and boosting techniques, to further improve prediction accuracy. And fourth, we have begun to explore the consequences of iterating the decision-making process and evaluating savings over an extended time period. Regardless of these directions for future work, the results presented here show the promise of data mining in the domain of wireless telecommunications. As is often the case for decision-making systems, the predictor need not be a perfect discriminator to realize significant savings.

6 REFERENCES

Bishop, C. (1995). Neural networks for pattern recognition. New York: Oxford University Press.
Fowlkes, A. J., Madan, A., Andrew, J., & Jensen, C. (1999). The effect of churn on value: An industry advisory.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
Luna, L. (1998). Churn is epidemic. Radio Communications Report, December 14, 1998.
Power, J. D., & Associates (1998). 1998 Residential Wireless Customer Satisfaction Survey. September 22, 1998.
29 "FAST LEARNING IN MULTI-RESOLUTION HIERARCHIES" John Moody Yale Computer Science, P.O. Box 2158, New Haven, CT 06520 Abstract A class of fast, supervised learning algorithms is presented. They use local representations, hashing, atld multiple scales of resolution to approximate functions which are piece-wise continuous. Inspired by Albus's CMAC model, the algorithms learn orders of magnitude more rapidly than typical implementations of back propagation, while often achieving comparable qualities of generalization. Furthermore, unlike most traditional function approximation methods, the algorithms are well suited for use in real time adaptive signal processing. Unlike simpler adaptive systems, such as linear predictive coding, the adaptive linear combiner, and the Kalman filter, the new algorithms are capable of efficiently capturing the structure of complicated non-linear systems. As an illustration, the algorithm is applied to the prediction of a chaotic timeseries. 1 Introduction A variety of approaches to adaptive information processing have been developed by workers in disparate disciplines. These include the large body of literature on approximation and interpolation techniques (curve and surface fitting), the linear, real-time adaptive signal processing systems (such as the adaptive linear combiner and the Kalman filter), and most recently, the reincarnation of non-linear neural network models such as the multilayer perceptron. Each of these methods has its strengths and weaknesses. The curve and surface fitting techniques are excellent for off-line data analysis, but are typically not formulated with real-time applications in mind. The linear techniques of adaptive signal processing and adaptive control are well-characterized, but are limited to applications for which linear descriptions are appropriate. Finally, neural network learning models such as back propagation have proven extremely versatile at learning a wide variety of non-linear mappings, but tend to be very slow computationally and are not yet well characterized. The purpose of this paper is to present a general description of a class of supervised learning algorithms which combine the ability of the conventional curve 30 Moody fitting and multilayer perceptron methods to precisely learn non-linear mappings with the speed and flexibility required for real-time adaptive application domains. The algorithms are inspired by a simple, but often overlooked, neural network model, Albus's Cerebellar Model Articulation Controller (CMAC) [2,1], and have a great deal in common with the standard techniques of interpolation and approximation. The algorithms "learn from examples", generalize well, and can perform efficiently in real time. Furthermore, they overcome the problems of precision and generalization which limit the standard CMAC model, while retaining the CMAC's speed. 2 System Description The systems are designed to rapidly approximate mappings g: X 1-+ fi from multidimensional input spaces x E Sinput to multidimensional output spaces fi E Soutput. The algorithms can be applied to any problem domain for which a metric can be defined on the input space (typically the Euclidean, Hamming, or Manhattan metric) and for which the desired learned mapping is (to a close approximation) piece-wise continuous. (Discontinuities in the desired mapping, such as those at classification boundaries, are approximated continuously.) 
Important general classes of such problems include approximation of real-valued functions ℝⁿ → ℝᵐ (such as those found in signal processing), classification problems ℝⁿ → Bᵐ (such as phoneme classification), and boolean mapping problems Bⁿ → Bᵐ (such as the NETtalk problem [20]). Here, ℝ denotes the reals and B is {0,1}. This paper focuses on real-valued mappings; the formulation and application of the algorithms to boolean problem domains will be presented elsewhere. In order to specify the complete learning system in detail, it is easiest to start with simple special cases and build the description from the bottom up:

2.1 A Simple Adaptive Module

The simplest special case of the general class under consideration is described as follows. The input space is overlayed with a lattice of points x^β, and a local function value or "weight" V^β is assigned to every possible lattice point. The output of the system for a given input is:

z(x) = Σ_β V^β N^β(x)    (1)

where N^β(x) is a neighborhood function for the β-th lattice point such that N^β = 1 if x^β is the lattice point closest to the input vector x, and N^β = 0 otherwise. More generally, the neighborhood functions N can overlap and the sum in equation (1) can be replaced by an average. This results in a greater ability to generalize when training data is sparse, but at the cost of losing fine detail.

Learning is accomplished by varying the V^β to minimize the squared error of the system output on a set of training data:

E = ½ Σ_i ( z_i^desired − z(x_i) )²    (2)

where the sum is over all exemplars {x_i, z_i^desired} in the training set. The determination of the V^β is easily formulated as a real-time adaptive algorithm by using gradient descent to minimize an instantaneous estimate E(t) of the error:

dV/dt = −η dE(t)/dV    (3)

2.2 Saving Memory with Hashing: The CMAC

The approach of the previous section encounters serious difficulty when the dimension of the input space becomes large and the distribution of data in the input space becomes highly non-uniform. In such cases, allocating a separate function value for each possible lattice point is extremely wasteful, because the majority of lattice points will have no training data within a local neighborhood. As an example, suppose that the input space is four dimensional, but that all input data lies on a fuzzy two dimensional subspace. (Such a situation [projected onto 3 dimensions] is shown in figure [2A].) Furthermore, suppose that the input space is overlayed with a rectangular lattice with K nodes per dimension. The complete lattice will contain K⁴ nodes, but only O(K²) of those nodes will have training data in their local neighborhoods. Thus, only O(K²) of the weights V^β will have any meaning. The remaining O(K⁴) weights will be wasted. (This assumes that the lattice is not too fine. If K is too large, then only O(P) of the lattice points will have training data nearby, where P is the number of training data.) An alternative approach is to have only a small number of weights and to allocate them to only those regions of the input space which are populated with training data. This allocation can be accomplished by a dimensionality-reducing mapping from a virtual lattice in the input space onto a lookup table of weights or function values. In the absence of any a priori information about the distribution of data in the input space, the optimal mapping is a random mapping, for example a universal hashing function [8].
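A minimal sketch of such a dimensionality-reducing mapping follows (the multiply-add construction is a simple stand-in for a true universal family such as Carter and Wegman's [8]; constants and names are our choices):

    def vertex_hash(vertex, table_size, a=2654435761, p=2**31 - 1):
        # Map integer lattice coordinates pseudo-randomly onto an index
        # into a small table of weights, ignoring lattice geometry.
        h = 0
        for coordinate in vertex:
            h = (h * a + coordinate) % p
        return h % table_size

    weights = [0.0] * 4096                 # lookup table: far smaller than
    i = vertex_hash((3, 17, 5, 42), 4096)  # the K**4 virtual lattice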
The random nature of such a function insures that neighborhood relationships in the virtual lattice are not preserved. The average behavior of an ensemble of universal hashing functions is thus to access all elements of the lookup table with equal probability, regardless of the correlations in the input data. The many-to-one hash function can be represented here as a matrix H^{τβ} of 0's and 1's with one 1 per column, but many 1's per row. With this notation, the system response function is:

z(x) = Σ_{τ=1}^{T} Σ_{β=1}^{N} V^τ H^{τβ} N^β(x)    (4)

[Figure 1 shows virtual lattices at Resolution 1 and Resolution 2 mapped into a hash table.] Figure 1: (A) A simple CMAC module. (B) The computation of errors for a multi-resolution hierarchy.

The CMAC model of Albus is obtained when a distributed representation of the input space is used and the neighborhood functions N^β(x) are overlapping. In this case, the sum over β is replaced by an average. Note that, as specified by equation (4), hash table collisions are not resolved. This introduces "collision noise", but the effect of this noise is reduced by 1/√B, where B is the number of neighborhood functions which respond to a given input. Collision noise can be completely eliminated if standard collision resolution techniques are used. A few comments should be made about efficiency. In spite of the costly formal sums in equation (4), actual implementations of the algorithm are extremely fast. The set of non-zero N^β(x) on the virtual lattice, the hash function value for each vertex, and the corresponding lookup table values given by the hash function are easily determined on the fly. The entire hash function H^{τβ} is never pre-computed, the sum over the index β is limited to a few lattice points neighboring the input x, and since each lattice point is associated with only one lookup table value, the formal sum over τ disappears. The CMAC model is shown schematically in figure [1A].

2.3 Interpolation: Neighborhood Functions with Graded Response

One serious problem with the formulations discussed so far is that the neighborhood functions are constant in their regions of support. Thus, the system response is discontinuous over neighborhood boundaries. This problem can be easily remedied by using neighborhood functions with graded response in order to perform continuous interpolation between lattice points. The normalized system response function is then:

z(x) = [ Σ_{τ=1}^{T} Σ_{β=1}^{N} V^τ H^{τβ} R^β(x) ] / [ Σ_β R^β(x) ]    (5)

The functions R^β(x) are the graded neighborhood response functions associated with each lattice point x^β. They are intended to have local support on the input space S_input, thus being non-zero only in a local neighborhood of their associated lattice point x^β. Each function R^β(x) attains its maximum value at lattice point x^β and drops off monotonically to zero as the distance ‖x^β − x‖ increases. Note that R is not necessarily isotropic or symmetric. Certain classes of localized response functions R defined on certain lattices are self-normalized, meaning that:

Σ_β R^β(x) = 1, for any x.    (6)

In this case, equation (5) simplifies to:

z(x) = Σ_{τ=1}^{T} Σ_{β=1}^{N} V^τ H^{τβ} R^β(x)    (7)

One particularly important and useful class of response functions are the B-splines. However, it is not easy to formulate B-splines on arbitrary lattices in high dimensional spaces.

2.4 Multi-Resolution Interpolation

The final limitation of the methods described so far is that they use a lattice at only one scale of resolution.
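Collecting equations (4)-(7), a single-level module reduces to a few lines; stacking L such modules and training each on the residual error of the coarser ones, as described next, yields the full hierarchy. Below is a one-dimensional sketch with triangular (first-order B-spline) responses; all names are ours, and Python's built-in hash stands in for a universal hash:

    import numpy as np

    class HashedLatticeLevel:
        # One resolution level: a virtual lattice hashed onto a small
        # weight table, with overlapping triangular response functions.
        def __init__(self, n_intervals, table_size, lo=0.0, hi=1.0):
            self.n, self.lo, self.hi = n_intervals, lo, hi
            self.table = np.zeros(table_size)

        def _active(self, x):
            # The two lattice points bracketing x; their responses
            # (1 - f) and f sum to one, so equation (6) holds.
            u = (x - self.lo) / (self.hi - self.lo) * self.n
            left = int(np.clip(np.floor(u), 0, self.n - 1))
            f = u - left
            for vertex, response in ((left, 1.0 - f), (left + 1, f)):
                yield hash((vertex, self.n)) % len(self.table), response

        def predict(self, x):
            # Equation (7): the responses are self-normalized.
            return sum(self.table[i] * r for i, r in self._active(x))

        def learn(self, x, target, rate=0.5):
            # Gradient descent on the instantaneous squared error;
            # hash collisions are left unresolved, as in the text.
            error = target - self.predict(x)
            for i, r in self._active(x):
                self.table[i] += rate * error * r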
Without detailed a priori knowledge of the distribution of data in the input space, it is difficult to choose an optimal lattice spacing. Furthermore, there is almost always a trade-off between the ability to generalize and the ability to capture fine detail. When a single coarse resolution is used, generalization is good, but fine details are lost. When a single fine resolution is used, fine details are captured in those regions which contain dense data, but no general picture emerges for those regions in which data is sparse. Good generalization and fine detail can both be captured by using a multi-resolution hierarchy. A hierarchical system with L levels represents functions g: x → y in the following way:

y(x) = y_L(x) = Σ_{λ=1}^{L} z_λ(x)    (8)

where z_λ is a mapping as described in equation (5) for the λ-th level in the hierarchy. The coarsest scale is λ = 1 and the finest is λ = L. The multi-resolution system is trained such that the finer scales learn corrections to the total output of the coarser scales. This is accomplished by using a hierarchy of error functions. For each level λ in the hierarchy, the output for that level, y_λ, is defined to be the partial sum

y_λ = Σ_{κ=1}^{λ} z_κ

(Note that y_{λ+1} = y_λ + z_{λ+1}.) The error for level λ is defined to be

E_λ = Σ_i E_λ(i)

where the error associated with the i-th exemplar is:

E_λ(i) = ½ ( y_i^desired − y_λ(x_i) )²

The learning or training procedure for level λ involves varying the lookup table values V^τ for that level to minimize E_λ. Note that the lookup table values for previous or subsequent levels (κ ≠ λ) are held fixed during the minimization of E_λ. Thus, the lookup table values for each level are varied to minimize only the error defined for that level. This hierarchical learning procedure guarantees that the first level mapping z_1 is the best possible at that level, the second level mapping z_2 constitutes the best possible corrections to the first level, and the λ-th level mapping z_λ constitutes the best possible corrections to the total contributions of all previous levels. The computation of error signals is shown schematically in figure [1B]. It should be noted that multi-resolution approaches have been successfully used in other contexts. Examples are the well-known multigrid methods for solving differential equations and the pyramid architectures used in machine vision [6,7].

3 Application to Timeseries Prediction

The multi-resolution hierarchy can be applied to a wide variety of problem domains as mentioned earlier. Due to space limitations, we consider only one test problem here, the prediction of a chaotic timeseries. As it is usually formulated, the prediction is accomplished by finding a real-valued mapping f: ℝⁿ → ℝ which takes a sequence of n recent samples of the timeseries and predicts the value at a future moment. Typically, the state space imbedding in ℝⁿ is x[t] = (x[t], x[t−Δ], x[t−2Δ], x[t−3Δ]), where Δ is the sampling parameter, and the correct prediction for prediction time T is x[t+T]. For the purposes of testing various non-parametric prediction methods, it is assumed that the underlying process which generates the timeseries is unknown. The particular timeseries studied here results from integrating the Mackey-Glass differential-delay equation [14]:

dx[t]/dt = −b x[t] + a x[t−τ] / ( 1 + x[t−τ]^10 )    (9)
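For reference, the series used below can be generated by direct numerical integration of equation (9); a minimal sketch (Euler stepping, the step size, and the constant initial history are our choices, not the paper's):

    import numpy as np

    def mackey_glass(n_samples, tau=17, a=0.2, b=0.1, dt=0.1, x0=1.2):
        # Euler integration of dx/dt = -b*x(t) + a*x(t-tau)/(1 + x(t-tau)**10),
        # sampled once per unit time.
        lag = int(round(tau / dt))
        steps = int(round(1.0 / dt))
        x = np.full(lag + n_samples * steps + 1, x0)
        for t in range(lag, len(x) - 1):
            delayed = x[t - lag]
            x[t + 1] = x[t] + dt * (-b * x[t] + a * delayed / (1.0 + delayed**10))
        return x[lag::steps][:n_samples]

    series = mackey_glass(2000)   # imbed as (x[t], x[t-6], x[t-12], x[t-18])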
Figure 2: (A) Imbedding in three dimensions of 1000 successive points of the Mackey-Glass chaotic timeseries with delay parameter τ = 17 and sampling parameter Δ = 6. (B) Normalized prediction error vs. number of training data (log 10). Squares are runs with the multi-resolution hierarchy. The circle is the back propagation benchmark. The horizontal line is included for visual reference only and is not intended to imply a scaling law for back propagation.

The solid lines in figure [3] show the resulting timeseries for τ = 17, a = 0.2, and b = 0.1; note that it is cyclic, but not periodic. The characteristic time of the series, given by the inverse of the mean of the power spectrum, is t_char ≈ 50. Classical techniques like linear predictive coding and Gabor-Volterra-Wiener polynomial expansions typically do no better than chance when predicting beyond t_char [10]. For purposes of comparison, the sampling parameter and prediction time are chosen to be Δ = 6 and T = 85 > t_char, respectively. Figure [2A] shows a projection of the four dimensional state space imbedding onto three dimensions. The orbits of the series lie on a fuzzy two dimensional subspace which is a strange attractor of fractal dimension 2.1. This problem has been studied by both conventional data analysis techniques and by neural network methods. It was first studied by Farmer and Sidorowich, who locally fitted linear and quadratic surfaces directly to the data [11,10]. The exemplars in the imbedding space were stored in a k-d tree structure in order to allow rapid determination of proximity relationships [3,4,19]. The local surface fitting method is extremely efficient computationally. This kind of approach has found wide application in the statistics community [5]. Casdagli has applied the method of radial basis functions, which is an exact interpolation method and also depends on explicit storage of the data [9]. The radial basis functions method is a global method and becomes computationally expensive when the number of exemplars is large, growing as O(P³). Both approaches yield excellent results when used as off-line algorithms, but do not seem to be well suited to real-time application domains. For real-time applications, little a priori knowledge about the data can be assumed, large amounts of past data can't be stored, the function being learned may vary with time, and computing speed is essential. Three different neural network techniques have been applied to the timeseries prediction problem: back propagation [13], self-organized, locally-tuned processing units [18,17], and an approach based on the GMDH method and simulated annealing [21]. The first two approaches can in principle be applied in real time, because they don't require explicit storage of past data and can adapt continuously. Back propagation yields better predictions since it is completely supervised, but the locally-tuned processing units learn substantially faster. The GMDH approach yields excellent results, but is computationally intensive and is probably limited to off-line use. The multi-resolution hierarchy is intended to offer speed, precision, and the ability to adapt continuously in real time. Its application to the Mackey-Glass prediction problem is demonstrated in two different modes of operation: off-line learning and real-time learning.

3.1 Off-Line Learning

In off-line mode, a five level hierarchy was trained to predict the future values.
At each level, a regular rectangular lattice was used, with each lattice having A_λ intervals and therefore A_λ + 1 nodes per dimension. The lattice resolutions were chosen to be (A₁ = 4, A₂ = 8, A₃ = 16, A₄ = 32, A₅ = 64). The corresponding numbers of vertices in each of the virtual 4-dimensional lattices were therefore (M₁ = 625, M₂ = 6,561, M₃ = 83,521, M₄ = 1,185,921, M₅ = 17,850,625). The corresponding lookup table sizes were (T₁ = 625, T₂ = 4096, T₃ = 4096, T₄ = 4096, T₅ = 4096). Note that T₁ = M₁, so hashing was not required for the first layer. For all other layers, T_λ < M_λ, so hashing was used. For layers 3, 4, and 5, T_λ ≪ M_λ, so hashing resulted in a dramatic reduction in the memory required. The neighborhood response function R^β(x) was a B-spline with support in the 16 cells adjacent to each lattice point x^β. Hash table collisions were not resolved. The learning method used was simple gradient descent. The lookup table values were updated after the presentation of each exemplar. At each level, the training set was presented repeatedly until a convergence criterion was satisfied. The levels were trained sequentially: level 1 was trained until it converged, followed by level 2, and so on. The performance of the system as a function of training set size is shown in figure [2B]. The normalized error is defined as [rms error]/σ, where σ is the standard deviation of the timeseries. For each run, a different segment of the timeseries was used. In all cases, the performance was measured on an independent test sequence consisting of the 500 exemplars immediately following the training sequence. The prediction error initially drops rapidly as the number of training data are increased, but then begins to level out. This leveling out is most likely caused by collision noise in the hash tables. Collision resolution techniques should improve the results, but have not yet been implemented. For training sets with 500 exemplars, the multi-resolution hierarchy achieved prediction accuracy equivalent to that of a back propagation network trained by Lapedes and Farber [13]. Their network had four linear inputs, one linear output, and two internal layers, each containing 20 sigmoidal units. The layers were fully connected, yielding 541 adjustable parameters (weights and thresholds) total. They trained their network in off-line mode using conjugate gradient, which they found to be significantly faster than gradient descent. The multi-resolution hierarchy converged in about 3.5 minutes on a Sun 3/60 for the 500 exemplar runs. Lapedes estimates that the back propagation network probably required 5 to 10 minutes of Cray X/MP time running at about 90 Mflops [12]. This would correspond to about 4,000 to 8,000 minutes of Sun 3/60 time. Hence, the multi-resolution hierarchy converged about three orders of magnitude faster than the back propagation network. This comparison should not be taken to be universal, since many implementations of both back propagation and the multi-resolution hierarchy are possible. Other comparisons could easily vary by factors of ten or more. It is interesting to note that the training time for the multi-resolution hierarchy increased sub-linearly with training set size. This is because the lookup table values were varied after the presentation of each exemplar, not after presentation of the whole set. A similar effect should be observable in back propagation nets.
In fact, training after the presentation of each exemplar could very likely increase the overall rate of convergence for a back propagation net.

3.2 Real-Time Learning

Unlike most standard curve and surface fitting methods, the multi-resolution hierarchy is extremely well-suited for real-time applications. Indeed, the standard CMAC model has been applied to the real-time control of robots with encouraging success [16,15]. Figure [3] illustrates a two level hierarchy (with 5 and 9 nodes per dimension) learning to predict the timeseries for T = 50 from an initial tabula rasa configuration (all lookup table values set to zero). The solid line is the actual timeseries data, while the dashed line shows the predicted values. The predicted values lead the actual values in the graphs. Notice that the system discovers the intrinsically cyclic nature of the series almost immediately. At the end of a single pass through 9,900 exemplars, the normalized prediction error is below 5% and the fit looks very good to the eye. On a Sun 3/50, the algorithm required 1.4 msec per level to respond to and learn from each exemplar. At this rate, the two level system was able to process 360 exemplars (over 7 cycles of the timeseries) per second. This rate would be considered phenomenal for a typical back propagation network running on a Sun 3/50.

[Figure 3 shows two panels of predicted vs. actual values, over exemplars 0-400 and 9500-9900.] Figure 3: An example of learning to predict the Mackey-Glass chaotic timeseries in real time with a two-stage multi-resolution hierarchy.

4 Discussion

There are two reasons that the multi-resolution hierarchy learns much more quickly than back propagation. The first is that the hierarchy uses local representations of the input space and thus requires evaluation and modification of only a few lookup table values for each exemplar. In contrast, the complete back propagation net must be evaluated and modified for each exemplar. Second, the learning in the multi-resolution hierarchy is cast as a purely quadratic optimization procedure. In contrast, the back propagation procedure is non-linear and is plagued with a multitude of local minima and plateaus which can significantly retard the learning process. In these respects, the multi-resolution hierarchy is very similar to the local surface fitting techniques exploited by Farmer and Sidorowich. The primary difference, however, is that the hierarchy, with its multi-resolution architecture and hash table data structures, offers the flexibility needed for real time problem domains and does not require the explicit storage of past data or the creation of data structures which depend on the distribution of data.

Acknowledgements

I gratefully acknowledge helpful comments from Chris Darken, Doyne Farmer, Alan Lapedes, Tom Miller, Terry Sejnowski, and John Sidorowich. I am especially grateful for support from ONR grant N00014-86-K-0310, AFOSR grant F49620-88-C0025, and a Purdue Army subcontract.

References

[1] J.S. Albus. Brain, Behavior and Robotics. Byte Books, 1981.
[2] J.S. Albus. A new approach to manipulator control: the cerebellar model articulation controller (CMAC). J. Dyn. Sys. Meas., Contr., 97:220, 1975.
[3] Jon L. Bentley. Multidimensional binary search trees in database applications. IEEE Trans. on Software Engineering, SE-5:333, 1979.
[4] Jon L. Bentley. Multidimensional divide and conquer. Communications of the ACM, 23:214, 1980.
[5] L. Breiman, J.H. Friedman, R.A. Olshen, and C.J. Stone. Classification and Regression Trees. Wadsworth, Monterey, CA, 1984.
[6] Peter J. Burt and Edward H. Adelson. The Laplacian pyramid as a compact image code. IEEE Trans. Communications, COM-31:532, 1983.
[7] Peter J. Burt and Edward H. Adelson. A multiresolution spline with application to image mosaics. ACM Trans. on Graphics, 2:217, 1983.
[8] J.L. Carter and M.N. Wegman. Universal classes of hash functions. In Proceedings of the Ninth Annual SIGACT Conference, 1977.
[9] M. Casdagli. Nonlinear Prediction of Chaotic Time Series. Technical Report, Queen Mary College, London, 1988.
[10] J.D. Farmer and J.J. Sidorowich. Exploiting Chaos to Predict the Future and Reduce Noise. Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico, 1988.
[11] J.D. Farmer and J.J. Sidorowich. Predicting chaotic time series. Physical Review Letters, 59:845, 1987.
[12] A. Lapedes. 1988. Personal communication.
[13] A.S. Lapedes and R. Farber. Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling. Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico, 1987.
[14] M.C. Mackey and L. Glass. Oscillation and chaos in physiological control systems. Science, 197:287.
[15] W. T. Miller, F. H. Glanz, and L. G. Kraft. Application of a general learning algorithm to the control of robotic manipulators. International Journal of Robotics Research, 6(2):84, 1987.
[16] W. Thomas Miller. Sensor-based control of robotic manipulators using a general learning algorithm. IEEE Journal of Robotics and Automation, RA-3(2):157, 1987.
[17] J. Moody and C. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1989. To appear.
[18] J. Moody and C. Darken. Learning with localized receptive fields. In Touretzky, Hinton, and Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, Publishers, 1988.
[19] S. Omohundro. Efficient algorithms with neural network behavior. Complex Systems, 1:273.
[20] T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145, 1987.
[21] M.F. Tenorio and W.T. Lee. Self-organized neural networks for the identification problem. Poster paper presented at the Neural Information Processing Systems Conference, 1988.
820
1,750
Scale Mixtures of Gaussians and the Statistics of Natural Images

Martin J. Wainwright
Stochastic Systems Group, Electrical Engineering & CS
MIT, Building 35-425, Cambridge, MA 02139
mjwain@mit.edu

Eero P. Simoncelli
Ctr. for Neural Science, and Courant Inst. of Mathematical Sciences
New York University, New York, NY 10012
eero.simoncelli@nyu.edu

Abstract

The statistics of photographic images, when represented using multiscale (wavelet) bases, exhibit two striking types of non-Gaussian behavior. First, the marginal densities of the coefficients have extended heavy tails. Second, the joint densities exhibit variance dependencies not captured by second-order models. We examine properties of the class of Gaussian scale mixtures, and show that these densities can accurately characterize both the marginal and joint distributions of natural image wavelet coefficients. This class of model suggests a Markov structure, in which wavelet coefficients are linked by hidden scaling variables corresponding to local image structure. We derive an estimator for these hidden variables, and show that a nonlinear "normalization" procedure can be used to Gaussianize the coefficients.

Recent years have witnessed a surge of interest in modeling the statistics of natural images. Such models are important for applications in image processing and computer vision, where many techniques rely (either implicitly or explicitly) on a prior density. A number of empirical studies have demonstrated that the power spectra of natural images follow a $1/f^{\gamma}$ law in radial frequency, where the exponent $\gamma$ is typically close to two [e.g., 1]. Such second-order characterization is inadequate, however, because images usually exhibit highly non-Gaussian behavior. For instance, the marginals of wavelet coefficients typically have much heavier tails than a Gaussian [2]. Furthermore, despite being approximately decorrelated (as suggested by theoretical analysis of $1/f$ processes [3]), orthonormal wavelet coefficients exhibit striking forms of statistical dependency [4, 5]. In particular, the standard deviation of a wavelet coefficient typically scales with the absolute values of its neighbors [5].

A number of researchers have modeled the marginal distributions of wavelet coefficients with generalized Laplacians, $p_Y(y) \propto \exp(-|y/\lambda|^p)$ [e.g., 6, 7, 8]. Special cases include the Gaussian ($p = 2$) and the Laplacian ($p = 1$), but appropriate exponents for natural images are typically less than one. Simoncelli [5, 9] has modeled the variance dependencies of pairs of wavelet coefficients. Romberg et al. [10] have modeled wavelet densities using two-component mixtures of Gaussians. Huang and Mumford [11] have modeled marginal densities and cross-sections of joint densities with multi-dimensional generalized Laplacians.

(Research supported by NSERC 1969 fellowship 160833 to MJW, and NSF CAREER grant MIP-9796040 to EPS.)

Mixing density                      | GSM density                                                  | GSM char. function
$\sqrt{Z(\gamma)}$                  | symmetrized Gamma                                            | $(1 + \lambda^2 t^2)^{-\gamma}$, $\gamma > 0$
$1/\sqrt{Z(\beta - \tfrac{1}{2})}$  | Student: $[1/(\lambda^2 + y^2)]^{\beta}$, $\beta > \tfrac{1}{2}$ | no explicit form
positive $\tfrac{\alpha}{2}$-stable | $\alpha$-stable: no explicit form                            | $\exp(-|\lambda t|^{\alpha})$, $\alpha \in (0, 2]$
no explicit form                    | generalized Laplacian: $\exp(-|y/\lambda|^p)$, $p \in (0, 2]$ | no explicit form

Table 1. Example densities from the class of Gaussian scale mixtures. $Z(\gamma)$ denotes a positive gamma variable, with density $p(z) = [1/\Gamma(\gamma)]\, z^{\gamma - 1} \exp(-z)$. The characteristic function of a random variable $x$ is defined as $\Phi_x(t) \triangleq \int_{-\infty}^{\infty} p(x) \exp(jxt)\, dx$.
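To make the GSM construction of Table 1 concrete, the following minimal sketch (Python; not part of the original paper) draws samples from the symmetrized Gamma member of the family and compares its tail behavior with a variance-matched Gaussian. The parameter choices (`gamma_shape`, `lam`, `n`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
gamma_shape, lam = 0.7, 1.0  # illustrative values; the paper's fitted parameters vary by image

# GSM construction from Table 1 (row 1): y = z * u with z = sqrt(Z(gamma)), u Gaussian
z = np.sqrt(rng.gamma(shape=gamma_shape, scale=1.0, size=n))
u = rng.normal(0.0, lam, size=n)
y = z * u                                  # symmetrized Gamma variable

g = rng.normal(0.0, y.std(), size=n)       # variance-matched Gaussian for comparison

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3.0

print("excess kurtosis, GSM sample:      %.2f" % excess_kurtosis(y))  # well above zero: heavy tails
print("excess kurtosis, Gaussian sample: %.2f" % excess_kurtosis(g))  # near zero
```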
In the following sections, we explore the semi-parametric class of Gaussian scale mixtures. We show that members of this class satisfy the dual requirements of being heavy-tailed, and exhibiting multiplicative scaling between coefficients. We also show that a particular member of this class, in which the multiplier variables are distributed according to a gamma density, captures the range of joint statistical behaviors seen in wavelet coefficients of natural images. We derive an estimator for the multipliers, and show that a nonlinear "normalization" procedure can be used to Gaussianize the wavelet coefficients. Lastly, we form random cascades by linking the multipliers on a multiresolution tree.

1 Scale Mixtures of Gaussians

A random vector $Y$ is a Gaussian scale mixture (GSM) if $Y \stackrel{d}{=} zU$, where $\stackrel{d}{=}$ denotes equality in distribution; $z \geq 0$ is a scalar random variable; $U \sim N(0, Q)$ is a Gaussian random vector; and $z$ and $U$ are independent.

As a consequence, any GSM variable has a density given by an integral:

$$p_Y(\mathbf{y}) = \int_0^{\infty} \frac{1}{(2\pi)^{N/2}\, |z^2 Q|^{1/2}} \exp\!\left(-\frac{\mathbf{y}^T Q^{-1} \mathbf{y}}{2 z^2}\right) \phi_z(z)\, dz,$$

where $\phi_z$ is the probability density of the mixing variable $z$ (henceforth the multiplier). A special case of a GSM is a finite mixture of Gaussians, where $z$ is a discrete random variable. More generally, it is straightforward to provide conditions on either the density [12] or the characteristic function of $X$ that ensure it is a GSM, but these conditions do not necessarily provide an explicit form of $\phi_z$. Nevertheless, a number of well-known distributions may be written as Gaussian scale mixtures. For the scalar case, a few of these densities, along with their associated characteristic functions, are listed in Table 1. Each variable is characterized by a scale parameter $\lambda$ and a tail parameter. All of the GSM models listed in Table 1 produce heavy-tailed marginal and variance-scaling joint densities.

[Figure 1. GSMs (dashed lines) fitted to empirical histograms (solid lines) for four images (baboon, flowers, boats, frog). Below each plot are the parameter values $[\gamma, \lambda^2]$ ([0.97, 15.04], [0.45, 13.77], [0.78, 26.83], [0.80, 15.39]) and the relative entropy between the histogram (with 256 bins) and the model, as a fraction of the histogram entropy ($\Delta H/H$ = 0.00079, 0.0030, 0.0030, 0.0076).]

2 Modeling Natural Images

As mentioned in the introduction, natural images exhibit striking non-Gaussian behavior, both in their marginal and joint statistics. In this section, we show that this behavior is consistent with a GSM, using the first of the densities given in Table 1 for illustration.

2.1 Marginal distributions

We begin by examining the symmetrized Gamma class as a model for marginal distributions of wavelet coefficients. Figure 1 shows empirical histograms of a particular wavelet subband¹ for four different natural images, along with the best fitting instance of the symmetrized Gamma distribution. Fitting was performed by minimizing the relative entropy (i.e., the Kullback-Leibler divergence, denoted $\Delta H$) between empirical and theoretical histograms. In general, the fits are quite good: the fourth plot shows one of the worst fits in our data set.

2.2 Normalized components

For a GSM random vector $Y \stackrel{d}{=} zU$, the normalized variable $Y/z$ formed by component-wise division is Gaussian-distributed. In order to test this behavior empirically, we model a given wavelet coefficient $y_0$ and a collection of neighbors $\{y_1, \ldots, y_N\}$ as a GSM vector.
For our examples, we use a neighborhood of $N = 11$ coefficients corresponding to basis functions at 4 adjacent positions, 5 orientations, and 2 scales.¹ Although the multiplier $z$ is unknown, we can estimate it by maximizing the log likelihood of the observed coefficients: $\hat{z} = \arg\max_z \{\log p(\mathbf{y}|z)\}$. Under reasonable conditions, the normalized quantity $\mathbf{y}/\hat{z}$ should converge in distribution to a Gaussian as the number of neighbors increases. The estimate is simple to derive:

$$\hat{z} = \arg\max_z \{\log p(\mathbf{y}|z)\} = \arg\min_z \left\{ N \log z + \frac{\mathbf{y}^T Q^{-1} \mathbf{y}}{2 z^2} \right\} = \sqrt{\frac{\mathbf{y}^T Q^{-1} \mathbf{y}}{N}},$$

where $Q \triangleq \mathbb{E}[UU^T]$ is the positive definite covariance matrix of the underlying Gaussian vector $U$. Given the estimate $\hat{z}$, we then compute the normalized coefficient $v \triangleq y_0/\hat{z}$. This is a generalization of the variance normalization proposed by Ruderman and Bialek [1], and the weighted sum of squares normalization procedure used by Simoncelli [5, 14].

¹We use the steerable pyramid, an overcomplete multiscale representation described in [13]. The marginal and joint statistics of other multiscale oriented representations are similar.

Figure 2 shows the marginal histograms (in the log domain) of this normalized coefficient for four natural images, along with Gaussians of equivalent empirical variance. In contrast to histograms of the raw coefficients (shown in Figure 1), the histograms of normalized coefficients are nearly Gaussian.

[Figure 2. Marginal log histograms (solid lines) of the normalized coefficient $v$ for a single subband of four natural images (baboon, boats, flowers, frog). Each shape is close to an inverted parabola, in agreement with Gaussians (dashed lines) of equivalent empirical variance. Below each plot is the relative entropy between the histogram (with 256 bins) and a variance-matched Gaussian, as a fraction of the total histogram entropy ($\Delta H/H$ = 0.00035, 0.00041, 0.00042, 0.00043).]

The GSM model makes a stronger prediction: that normalized quantities corresponding to nearby wavelet pairs should be jointly Gaussian. Specifically, a pair of normalized coefficients should be either correlated or uncorrelated Gaussians, depending on whether the underlying Gaussians $U = [U_1\ U_2]^T$ are correlated or uncorrelated. We examine this prediction by collecting joint conditional histograms of normalized coefficients. The top row of Figure 3 shows joint conditional histograms for raw wavelet coefficients (taken from the same four natural images as Figure 2). The first two columns correspond to adjacent spatial scales; though decorrelated, they exhibit the familiar form of multiplicative scaling. The latter two columns correspond to adjacent orientations; in addition to being correlated, they also exhibit the multiplicative form of dependency. The bottom row shows the same joint conditional histograms, after the coefficients have been normalized. Whereas Figure 2 demonstrates that normalized coefficients are close to marginally Gaussian, Figure 3 demonstrates that they are also approximately jointly Gaussian. These observations support the use of a Gaussian scale mixture for modeling natural images.

2.3 Joint distributions

The GSM model is a reasonable approximation for groups of nearby wavelet coefficients. However, the components of GSM vectors are highly dependent, whereas the dependency between wavelet coefficients decreases as (for example) their spatial separation increases. Consequently, the simple GSM model is inadequate for global modeling of coefficients.
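Before turning to the tree-structured extension, the local estimator and normalization above are easy to exercise numerically. The following sketch (Python; not from the paper) draws GSM neighborhoods, forms $\hat{z} = \sqrt{\mathbf{y}^T Q^{-1} \mathbf{y}/N}$, and checks that the normalized coefficient is close to Gaussian. The covariance $Q$, the multiplier density, and the trial count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 11, 50_000                 # neighborhood size as in the text; trial count is arbitrary

rho = 0.2                              # assumed correlation level of the underlying Gaussian U
Q = (1 - rho) * np.eye(N) + rho * np.ones((N, N))
Qinv = np.linalg.inv(Q)
L = np.linalg.cholesky(Q)

z = np.sqrt(rng.gamma(shape=0.7, scale=1.0, size=trials))   # sqrt-gamma multipliers, as in Sec. 2
U = rng.normal(size=(trials, N)) @ L.T                      # U ~ N(0, Q)
Y = z[:, None] * U                                          # GSM neighborhoods

z_hat = np.sqrt(np.einsum('ti,ij,tj->t', Y, Qinv, Y) / N)   # ML estimate of the multiplier
v = Y[:, 0] / z_hat                                         # normalized center coefficient

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3.0

print("raw coefficient kurtosis:        %.2f" % excess_kurtosis(Y[:, 0]))  # heavy-tailed
print("normalized coefficient kurtosis: %.2f" % excess_kurtosis(v))        # near zero, i.e. near Gaussian
```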
We are thus led to use a graphical model (such as a tree) that specifies probabilistic relations between the multipliers. The wavelet coefficients themselves are considered observations, and are linked indirectly by their shared dependency on the (hidden) multipliers.

[Figure 3. Top row: joint conditional histograms of raw wavelet coefficients for four natural images (baboon, boats, flowers, frog). Bottom row: joint conditional histograms of normalized pairs of coefficients. Below each plot is the relative entropy between the joint histogram (with 256 × 256 bins) and a covariance-matched Gaussian, as a fraction of the total histogram entropy ($\Delta H/H$ = 0.0643, 0.0743, 0.0572, 0.0836).]

For concreteness, we model the wavelet coefficient at node $s$ as $y(s) \triangleq \|x(s)\|\, u(s)$, where $x(s)$ is Gaussian, so that $z \triangleq \|x\|$ is the square root of a gamma variable of index 0.5. For illustration, we assume that the multipliers are linked by a multiscale autoregressive (MAR) process [15] on a tree:

$$x(s) = \mu\, x(p(s)) + \sqrt{1 - \mu^2}\, w(s),$$

where $p(s)$ is the parent of node $s$. Two wavelet coefficients $y(s)$ and $y(t)$ are linked through the multiplier at their common ancestral node, denoted $s \wedge t$. In particular, the joint distributions are given by

$$y(s) = \left\| \mu^{d(s,\, s \wedge t)}\, x(s \wedge t) + v_1(s) \right\| u(s), \qquad
y(t) = \left\| \mu^{d(t,\, s \wedge t)}\, x(s \wedge t) + v_2(t) \right\| u(t),$$

where $v_1, v_2$ are independent white noise processes, and $d(\cdot, \cdot)$ denotes the distance between a node and one of its ancestors on the tree (e.g., $d(s, p(s)) = 1$). For nodes $s$ and $t$ at the same scale and orientation but spatially separated by a distance of $\Delta(s,t)$, the distance between $s$ and the common ancestor $s \wedge t$ grows as $d(s, s \wedge t) \sim [\log_2(\Delta(s,t)) + 1]$.
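To illustrate the cascade numerically, the following sketch (Python; not from the paper, with an arbitrary trial count) samples coefficient pairs whose hidden Gaussians share an ancestor $d$ steps up the tree, and shows how the magnitude correlation (the variance dependency) falls off with spatial separation $\Delta$. The value $\mu = 0.92$ is the one quoted in the text; everything else is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, trials = 0.92, 100_000

def gsm_pair(d, mu, rng, n):
    """Sample coefficient pairs whose hidden Gaussians share an ancestor d steps up the tree.

    x(s) and x(t) are each obtained from the ancestor x(s^t) by d applications of the MAR
    recursion x <- mu*x + sqrt(1-mu^2)*w, so corr(x(s), x(t)) = mu**(2*d) and each has unit variance.
    """
    x_anc = rng.normal(size=n)
    xs = mu**d * x_anc + np.sqrt(1 - mu**(2 * d)) * rng.normal(size=n)
    xt = mu**d * x_anc + np.sqrt(1 - mu**(2 * d)) * rng.normal(size=n)
    ys = np.abs(xs) * rng.normal(size=n)   # y = ||x|| u with scalar Gaussian x
    yt = np.abs(xt) * rng.normal(size=n)
    return ys, yt

for delta in (1, 4, 8, 128):               # spatial separations in the spirit of Figure 4
    d = int(np.log2(delta)) + 1             # d(s, s^t) ~ log2(delta) + 1
    ys, yt = gsm_pair(d, mu, rng, trials)
    dep = np.corrcoef(np.abs(ys), np.abs(yt))[0, 1]   # magnitude correlation
    print("delta=%4d  d=%d  corr(|y_s|,|y_t|) = %.3f" % (delta, d, dep))
```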
The first row of Figure 4 shows the range of behaviors seen in joint distributions taken from a wavelet subband of a particular natural image, compared to simulated GSM gamma distributions with $\mu = 0.92$. The first column corresponds to a pair of wavelet filters in quadrature phase (i.e., related by a Hilbert transform). Note that for this pair of coefficients, the contours are nearly circular, an observation that has been previously made by Zetzsche [4]. Nevertheless, these two coefficients are dependent, as shown by the multiplicative scaling in the conditional histogram of the third row. This type of scaling dependency has been extensively documented by Simoncelli [5, 9]. Analogous plots for the simulated Gamma model, with zero spatial separation, are shown in rows 2 and 4. As in the image data, the contours of the joint density are very close to circular, and the conditional distribution shows a striking variance dependency.

[Figure 4. Examples of empirically observed distributions of wavelet coefficients, compared with simulated distributions from the GSM gamma model, for four pair types: quadrature pair, overlapping, near, and distant. First row: empirical joint histograms for the "mountain" image, for four pairs of wavelet coefficients, corresponding to basis functions with spatial separations $\Delta \in \{0, 4, 8, 128\}$. Second row: simulated joint distributions for Gamma variables with $\mu = 0.92$ and the same spatial separations. Contour lines are drawn at equal intervals of log probability. Third row: empirical conditional histograms for the "mountain" image. Fourth row: simulated conditional histograms for Gamma variables. For these conditional distributions, intensity corresponds to probability, except that each column has been independently rescaled to fill the full range of intensities.]

The remaining three columns of Figure 4 show pairs of coefficients drawn from identical wavelet filters at spatial displacements $\Delta = \{4, 8, 128\}$, corresponding to a pair of overlapping filters, a pair of nearby filters, and a distant pair. Note the progression in the contour shapes from off-circular, to a diamond shape, to a concave "star" shape. The model distributions behave similarly, and show the same range of contours for simulated pairs of coefficients. Thus, consistent with empirical observations, a GSM model can produce a range of dependency between pairs of wavelet coefficients. Again, the marginal histograms retain the same form throughout this range.

3 Conclusions

We have proposed the class of Gaussian scale mixtures for modeling natural images. Models in this class typically exhibit heavy-tailed marginals, as well as multiplicative scaling between adjacent coefficients. We have demonstrated that a particular GSM (the symmetrized Gamma family) accounts well for both the marginal and joint distributions of wavelet coefficients from natural images. More importantly, this model suggests a hidden Markov structure for natural images, in which wavelet coefficients are linked by hidden multipliers. Romberg et al. [10] have made a related proposal using two-state discrete multipliers, corresponding to a finite mixture of Gaussians.

We have demonstrated that the hidden multipliers can be locally estimated from measurements of wavelet coefficients. Thus, by conditioning on fixed values of the multipliers, estimation problems may be reduced to the classical Gaussian case. Moreover, we described how to link the multipliers on a multiresolution tree, and showed that such a random cascade model accounts well for the drop-off in dependence of spatially separated coefficients. We are currently exploring EM-like algorithms for the problem of dual parameter and state estimation.

Acknowledgements

We thank Bill Freeman, David Mumford, Mike Schneider, Ilya Pollak, and Alan Willsky for helpful discussions.

References

[1] D. L. Ruderman and W. Bialek. Statistics of natural images: Scaling in the woods. Phys. Rev. Letters, 73(6):814-817, 1994.
[2] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379-2394, 1987.
[3] A. H. Tewfik and M. Kim. Correlation structure of the discrete wavelet coefficients of fractional Brownian motion. IEEE Trans. Info. Theory, 38:904-909, Mar. 1992.
[4] C. Zetzsche, B. Wegmann, and E. Barth. Nonlinear aspects of primary vision: Entropy reduction beyond decorrelation. In Int'l Symp. Soc. for Info. Display, volume 24, pages 933-936, 1993.
[5] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conf., pages 673-678, Nov. 1997.
[6] S. G. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Pat. Anal. Mach. Intell., 11:674-693, July 1989.
[7] E. P. Simoncelli and E. H. Adelson. Noise removal via Bayesian wavelet coring. In Proc. IEEE ICIP, volume I, pages 379-382, September 1996.
[8] P. Moulin and J. Liu. Analysis of multiresolution image denoising schemes using a generalized Gaussian and complexity priors. IEEE Trans. Info. Theory, 45:909-919, Apr. 1999.
[9] R. W. Buccigrossi and E. P. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans. Image Proc., 8(12):1688-1701, Dec. 1999.
[10] J. K. Romberg, H. Choi, and R. G. Baraniuk. Bayesian wavelet domain image modeling using hidden Markov trees. In Proc. IEEE ICIP, Kobe, Japan, Oct. 1999.
[11] J. Huang and D. Mumford. Statistics of natural images and models. In CVPR, paper 216, 1999.
[12] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. J. Royal Stat. Soc., 36:99-102, 1974.
[13] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In Proc. IEEE ICIP, volume III, pages 444-447, Oct. 1995.
[14] E. P. Simoncelli and O. Schwartz. Image statistics and cortical normalization models. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 153-159, Cambridge, MA, May 1999.
[15] K. Chou, A. Willsky, and R. Nikoukhah. Multiscale systems, Kalman filters, and Riccati equations. IEEE Trans. Automatic Control, 39(3):479-492, Mar. 1994.
A Neurodynamical Approach to Visual Attention

Gustavo Deco
Siemens AG Corporate Technology
Neural Computation, ZT IK 4
Otto-Hahn-Ring 6
81739 Munich, Germany
Gustavo.Deco@mchp.siemens.de

Josef Zihl
Institute of Psychology, Neuropsychology
Ludwig-Maximilians-University Munich
Leopoldstr. 13
80802 Munich, Germany

Abstract

The psychophysical evidence for "selective attention" originates mainly from visual search experiments. In this work, we formulate a hierarchical system of interconnected modules consisting of populations of neurons for modeling the underlying mechanisms involved in selective visual attention. We demonstrate that our neural system for visual search works across the visual field in parallel but, due to different intrinsic dynamics, can show the two experimentally observed modes of visual attention, namely the serial and the parallel search mode. In other words, neither an explicit model of a focus of attention nor saliency maps are used. The focus of attention appears as an emergent property of the dynamic behavior of the system. The neural population dynamics are handled in the framework of the mean-field approximation. Consequently, the whole process can be expressed as a system of coupled differential equations.

1 Introduction

Traditional theories of human vision consider two functionally distinct stages of visual processing [1]. The first stage, termed the preattentive stage, implies an unlimited-capacity system capable of processing the information contained in the entire visual field in parallel. The second stage is termed the attentive or focal stage, and is characterized by the serial processing of visual information corresponding to local spatial regions. This stage of processing is typically associated with a limited-capacity system which allocates its resources to a single particular location in visual space. The psychophysical experiments designed for testing this hypothesis consist of visual search tasks. In a visual search test the subject has to look at a display containing a frame filled with randomly positioned items, in order to seek an a priori defined target item. All other items in a frame which are not the target are called distractors. The number of items in a frame is called the frame size. The relevant variable to be measured is the reaction time as a function of the frame size. In this context, the Feature Integration Theory assumes that the two stage processes operate sequentially [1]. The first, early preattentive stage runs in parallel over the complete visual field, extracting single primitive features without
The priority map serves to represent topographically the relevance of different parts of the visual field, in order to have a mechanism for guiding the attentional focus on salient regions of the retinal input. The focused area will be gated, such that only the information within will be passed further to yet higher levels, concerned with object recognition and action. The disparity between these two stages of attentional visual processing originated a vivid experimental disputation. Duncan and Humphreys [2] have postulated a hypothesis that integrates both attentional modes (parallel and serial) as an instantiates of a common principle. This principle sustains in both schemes that a selection is made. In the serial focal scheme, the selection acts on in the space dimension, while in the parallel spread scheme the selection concentrates in feature dimensions, e.g. color. On the other hand, Duncan's attentional theory [3] proposed that after a first parallel search a competition is initiated, which ends up by accepting only one object namely the target. Recently, several electrophysiological experiments have been performed which seems to support this hypothesis [4]. Chelazzi et al. [4] measured IT (inferotemporal) neurons in monkeys observing a display containing a target object (that the monkey has seen previously) and a distractor. They report a short period during which the neuron's response is enhanced. After this period the activity level of the neuron remains high if the target is the neuron's effective stimulus, and decay otherwise. The challenging question is therefore: is really the linear increasing reaction time observed in some visual search tests due to a serial mechanism? or is there only parallel processing followed by a dynamical time consuming latency? In other words, are really priority maps and spotlight paradigm required? or can a neurodynamical approach explain the observed psychophysical experiments? Furthermore, it should be clarified if the feature dimension search is achieved independently in each feature dimension or is done after integrating the involved feature dimensions. We study in this paper these questions from a computational perspective. We formulate a neurodynamical model consisting in interconnected populations of biological neurons specially designed for visual search tasks. We demonstrate that it is plausible to build a neural system for visual search, which works across the visual field in parallel but due to the different intrinsic dynamics can show the two experimentally observed modes of visual attention, namely: the serial focal and the parallel spread over the space mode. In other words, neither explicit serial focal search nor saliency maps should be assumed. The focus of attention is not ?included in the system but just results after convergence of the dynamical behavior of the neural networks. The dynamics of the system can be interpreted as an intrinsic dynamical routing for binding features if top-down information is available. Our neurodynamical computational model requires independent competition mechanism along each feature dimension for explaining the experimental data, implying the necessity of the independent character of the search in separated and not integrated feature dimensions. The neural population dynamics are handled in the framework of the meanfield approximation yielding a system of coupled differential equations. 
2 Neurodynamical model We extend with the present model the approach of Usher and Niebur [5], which is based on the experimental data of Chelazzi et al. [4], for explaining the results of visual search experiments. The hierarchical architecture of our system is shown in Figure 1. The input retina is given as a matrix of visual items. The location of each item at the retina is 12 G. Deco and J. Zihl specified by two indices ij meaning the position at the row i and the column j . The dimension of this matrix is SxS, i.e. the frame size is also SxS . The information is processed at each spatial location in parallel. Different feature maps extract for the item at each position the local values of the features. In the present work we hypothesize that selective attention is guided by an independent mechanism which corresponds to the independent search of each feature. Let us assume that each visual item can be defined by K features. Each feature k can adopt L(k) values, for example the feature color can have the values red or green (in this case L( color) =2). For each feature map k exist L(k) layers of neurons for characterizing the presence of each feature value. Stern SpikIng - Neurons 1-I I I - I Figure 1: Hierarchical architecture of spiking neural modules for visual selective attention. Solid arrows denote excitatory connections and dotted arrows denote inhibitory connections A cell assembly consisting in a population of full connected excitatory integrate-and-fire spiking neurons (pyramidal cells) is allocated in each layer and for each item location for encoding the presence of a specific feature value (e.g. color red) at the corresponding position. This corresponds to a sparse distributed representation. The feature maps are topographically ordered, i.e. the receptive fields of the neurons belonging to the cell assembly ij at one of these maps are sensible to the location ij at the retinal input. We further assume that the cell assemblies in layers corresponding to a feature dimension are mutually inhibitory. Inhibition is modeled, according to the constraint imposed by Dale's principle, by a different pool of inhibitory neurons. Each feature dimension has therefore an independent pool of inhibitory neurons. This accounts for the neurophysiological fact that the response of V4 neurons sensible to a specific feature value is enhanced and the activity of the other neurons sensible to other feature values are suppressed. A high level map consisting also in a topographically ordered excitatory cell assemblies is introduced for integration of the different feature dimension at each item location, i.e. for binding the features of each item. These cell assemblies are also mutually inhibited through a common 13 A Neurodynamical Approach to Visual Attention pool of inhibitory neurons. This layer corresponds to the modeling of IT neurons, which show location specific enhancement of activity by suppression of the responses of the cell assemblies associated to other locations. This fact would yield a dynamical formation of a focus of attention without explicitly assuming any spotlight. Top-down information consisting in the feature values at each feature dimension of the target item is feed in the system by including an extra excitatory input to the corresponding feature layers. The whole system analyzes the information at all locations in parallel. Larger reaction times correspond to slower dynamical convergence at all levels, i.e. feature map and integration map levels. 
Instead of solving the explicit set of integrate-and-fire neural equations, the Hebbian cell assemblies adopted representation impels to adopt a dynamic theory whose dependent variables are the activation levels of the cell popUlations. Assuming an ergodic behavior [5] it is possible to derive the dynamic equations for the cell assembly activities level by utilizing the mean-field approximation [5]. The essential idea consists in characterizing each cell assembly by means of each activity x, and an input current that is characteristic for all cells in the popUlation, denoted by I, which satisfies: x = (I) F(l) which is the response function that transforms current into discharge rates for an integrateand-fire spiking neuron with deterministic input, time membrane constant 't and absolute refractory time T r ? The system of differential equations describing the dynamics of the feature maps are: P F A -bF(l k(t))+Io+I ijkl+I kl+V a S S 'tpa/Pk(t) = -IPk(t) + L(k) eLL L F(lijk/(t)) i-lj-lk-l -dF(lPk(t)) where Iijk/(t) is the input current for the population with receptive field at location ij of the feature map k that analysis the value feature I, I Pk( t) is the current in the inhibitory pool bounded to the feature map layers of the feature dimension k. The frame size is S. The additive Gaussian noise v considered has standard deviation 0.002. The synaptic time constants were 't = 5 msec for the excitatory populations and 'tp = 20 for the inhibitory pools. The synaptic weights chosen were: a = 0.95, b = 0.8, c = 2. and d = 0.1 . 10 = 0.025 is a diffuse spontaneous background input, IF ijk/ is the sensory input to the cells in feature map k sensible to the value I and with receptive fields at the location ij at the retina. This input characterizes the presence of the respective feature value at the corresponding position. A value of 0.05 corresponds to the presence of the respective feature value and a value of 0 to the absence of it. The top-down target information IA kl was equal 0.005 for the layers which code the target properties and 0 otherwise. The higher level integrating assemblies are described by following differential equation system: 14 G. Deco and J. Zihl a H ij(t) 'tHa/ _ = - - [ij(t) + a F(lij(t? - b F(l K lo+w L PH (t? L(k) LF(lijk/(t?+V k - 11- 1 a PH (t) 'tP'ra"/ = -I PH _ s ~ s H (t) + c ?.J L F(l ij(t? i - Ij- 1 where [Hij(t) is the input current for the population with receptive field at location ij of the high level integrating map, IPH (t) is the associated current in the inhibitory pool. The synaptic time constants were 'tH "" 5 msec for the excitatory populations and 't pH = 2C for the ...-.... inhibitory pools. The ..-..synaptic weights chosen were: ..-.. ..-......-... a = 0.95, b .. 0.8, w .. 1, c .. 1. and d - 0.1 . These systems of differential equations were integrated numerically until a convergence criterion were reached. This criterion were that the neurons in the high level map are polarized, i.e. H F(/'lmaxlnrax . (t? 1_?~-,im"",a"-,xJ;....?~....::;j~ma,,,-x_ _ _ _ >e (S2 - 1) where the index imaxjmax denotes the cell assembly in the high level map with maximal activity and the threshold e was chosen equal to 0.1. The second in the l.h.s measure the mean distractor activity. At each feature dimension the fixed point of the dynamic is given by the activity of cell assemblies at the layers with a common value with the target and corresponding to items having this value. 
For example, if the target is red, at the color map, the activity at the green layer will be suppressed and the cell assemblies corresponding to red items will be enhanced. At the high-level map, the populations corresponding to location which are maximally in all feature dimensions activated will be enhanced by suppressing the others. In other words, the location that shows all feature dimension equivalent at what top-down is stimulated and required, will be enhanced when the target is at this location. 3 Simulations of visual search tasks In this section we present results simulating the visual search experiments involving feature and conjunction search [1]. Let us define the different kinds of search tasks by given a pair of numbers m and n , where m is the number of feature dimensions by which the dis tractors differ from the target and n is the number of feature dimensions by which the distractor groups simultaneously differ from the target. In other words, the feature search corresponds to aI, I-search; a standard conjunction search corresponds to a 2,1search; a triple conjunction search can be a 3,1 or a 3,2-search if the target differs from all distractor groups in one or in two features respectively. We assume that the items are defined by three feature dimensions (K = 3, e.g. color, size and position), each one 15 A Neurodynamical Approach to Visual Attention having two values (L(k) = 2 for k = 1, 2,3). At each size we repeat the experiment 100 times, each time with different randomly generated distractors and target. T 100 00' ..?... 3,I-search (a) 0 .00 085 080 2,I-search 0 .15 010 065 060 055 050 3,2-search 045 0 .40 0 .35 0.30 ... -_ '. ..,;. . . __ . I, I-search - - ':""..; ~,.....-.::.---=--.:~-~: - -- -- --- ---- -- ---- - -- - -- - --10 .00 2 000 3000 4000 50 .00 Frame size Figure 2: Search times for feature and conjunction searches obtained utilizing the presented model. We plot as result the mean value T of the 100 simulated reaction times (in msec) as a function of the frame size. In Figure 2, the results obtained for I, 1~ 2, 1~ 3, I and 3,2searches are shown. The slopes of the reaction time vs. frame size curves for all simulations are absolutely consistent with the existing experimental results[I). The experimental work reports that in feature search (1,1) the target is detected in parallel across the visual field. Furthermore, the slopes corresponding to ?standard conjunction search and triple conjunction search are a linear function of the frame size, where by the slope of the triple conjunction search is steeper or very flat than in the case of standard search (2,1) if the target differs from the distractors in one (3,1) or two features (3,2) respectively. In order to analyze more carefully the dynamical evolution of the system, we plot in Figure 3 the temporal evolution of the rate activity corresponding to the target and to the distractors at the high-level integrating map and also separately for each feature dimension level for a parallel (I, I-search) and a serial (3,I-search) visual tasks. The frame size used is 25. It is interesting to note that in the case of I, I-search the convergence time in all levels are very small and therefore this kind of search appears as a parallel search. In the case of 3, I-search the latency of the dynamic takes more time and therefore this kind of search appears as a serial one, in spite that the underlying mechanisms are parallel. 
In this case (see Figure 3-c) the large competition present in each feature dimension delays the convergence of the dynamics at each feature dimension and therefore also at the highlevel map. Note in Figure 3-c the slow suppression of the distractor activity that reflects the underlying competition. G. Deco and J. Zihl 16 (High-Level-Map) Rates dOOOO I="T---,--- . - - - - - , - -- - ......---=. ~ Thrget-Activlty (l,l-search) (a) Thrget-Activlty (3,I-search) Distractors-Activity (l,l-search) \ Distractors-Activity (3,I-search) \ , ' ~~~~=*=:~~?~:;~:;;;,:~::,j.~C:~~-~:~ :JC)aoo Time Rates ;.t\. . . Rates (b) l,l-search f .... .. . (:. . ?::I(..........:???:?..:;-;~-:..::?'r-J..~ ....;./'.~...:~""~+ !. ",,-- Distractors-Activit~ . . . ,,'\,. .... (c) 3~1-search ~? ---..------- _.............._-_.-.---. .. Time Time Figure 3: Activity levels during visual search experiments. (a) High-level-map rates for target F(lHimaxjmax(t? and mean distractors-activity. (b) Feature-level map rates for target and one distractor activity for 1,1-search. There is one curve for each feature dimension (i.e. 3 for target and 3 for distractor. (c) the same as (b) but for 3,1search. References [1] Treisman, A. (1988) Features and objects: The fourteenth Barlett memorial lecture. The Quarterly Journal of Experimental Psychology, 4OA, 201-237. [2] Duncan, J. and Humphreys, G. (1989) Visual search and stimulus similarity. Psychological Review, 96, 433-458. [3] Duncan, J. (1980) The locus of interference in the perception of simultaneous stimuli. Psychological Review, 87, 272-300. [4] Chelazzi, L., Miller, E., Duncan, J. and Desimone, R. (1993) A neural basis for visual search in inferior temporal cortex. Nature (London), 363, 345-347. [5] Usher, M. and Niebur, E. (1996) Modeling the temporal dynamics of IT neurons in visual search: A mechanism for top-down selective attention. Journal of Cognitive Neuroscience, 8, 311-327.
Population Decoding Based on an Unfaithful Model

S. Wu, H. Nakahara, N. Murata and S. Amari
RIKEN Brain Science Institute
Hirosawa 2-1, Wako-shi, Saitama, Japan
{phwusi, hiro, mura, amari}@brain.riken.go.jp

Abstract

We study a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known, or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model which neglects the pair-wise correlation between neuronal activities, and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on a faithful model, and with that of the center of mass decoding method. It turns out that UMLI has the advantages of decreasing the computational complexity remarkably while maintaining a high level of decoding accuracy. The effect of correlation on the decoding accuracy is also discussed.

1 Introduction

Population coding is a method to encode and decode stimuli in a distributed way by using the joint activities of a number of neurons (e.g. Georgopoulos et al., 1986; Paradiso, 1988; Seung and Sompolinsky, 1993). Recently, there has been an expanding interest in understanding population decoding methods, which particularly include the maximum likelihood inference (MLI), the center of mass (COM), the complex estimator (CE) and the optimal linear estimator (OLE) [see (Pouget et al., 1998; Salinas and Abbott, 1994) and the references therein]. Among them, MLI has the advantage of a small decoding error (asymptotic efficiency), but may suffer from the expense of computational complexity.

Let us consider a population of N neurons coding a variable x. The encoding process of the population code is described by a conditional probability $q(\mathbf{r}|x)$ (Anderson, 1994; Zemel et al., 1998), where the components of the vector $\mathbf{r} = \{r_i\}$, for $i = 1, \ldots, N$, are the firing rates of the neurons. We study the MLI estimator given by the value of x that maximizes the log likelihood $\ln p(\mathbf{r}|x)$, where $p(\mathbf{r}|x)$ is the decoding model, which might be different from the encoding model $q(\mathbf{r}|x)$. So far, when MLI is studied in a population code, it is normally (or implicitly) assumed that $p(\mathbf{r}|x)$ is equal to the encoding model $q(\mathbf{r}|x)$. This requires that the estimator have full knowledge of the encoding process. Taking account of the complexity of the information processing in the brain, it is more natural
2 The Population Decoding Paradigm of UMLI 2.1 An Unfaithful Decoding Model of Neglecting the Neuronal Correlation Let us consider a pair-wise correlated neural response model in which the neuron activities are assumed to be multivariate Gaussian q(rlx) = I lexp[--LA J(21ra 2 )N det(A) 2a 2 . . -1 1J (r ? - f-(x))(r ? - f ?(x))] 1 1 J J (I) , 1,J where fi(X) is the tuning function. In the present study, we will only consider the radial symmetry tuning function. Two different correlation structures are considered. One is the uniform correlation model (Johnson, 1980; Abbott and Dayan, 1999), with the covariance matrix Aij = 8ij where the parameter c (with -1 + c(l - 8ij ), (2) < c < 1) determines the strength of correlation. The other correlation structure is of limited-range (Johnson, 1980; Snippe and Koenderink, 1992; Abbott and Dayan, 1999), with the covariance matrix A lJ?? -- b1i - jl , (3) where the parameter b (with 0 < b < 1) determines the range of correlation. This structure has translational invariance in the sense that Aij = A kl , if Ii - jl = Ik - ll. The unfaithful decoding model, treated in the present study, is the one which neglects the correlation in the encoding process but keeps the tuning functions unchanged, that is, (4) 2.2 The decoding error of UMLI and FMLI The decoding error of UMLI has been studied in the statistical theory (Akahira and Takeuchi, 1981; Murata et al., 1994). Here we generalize it to the population coding. For convenience, some notations are introduced. \If(r,x) denotes df(r,x)/dx. Eq[f(r,x)] and Vq[f(r,x)] denote, respectively, the mean value and the variance of f(r, x) with respect to the distribution q(rlx). Given an observation of the population activity r*, the UMLI estimate x is the value of x that maximizes the log likelihood Lp(r*,x) = Inp(r*lx). Denote by Xopt the value of x satisfying Eq[\l Lp(r, xopd] = O. For the faithful model where p = q, Xopt = x. Hence, (xopt - x) is the error due to the unfaithful setting, whereas (x - Xopt) is the error due to sampling fluctuations. For the unfaithful model (4), s. 194 Wu, H. Nakahara, N. Murata and S. Amari since Eq[V' Lp(r, Xopt)] = 0, Li[/i{x) - /i(xopdlfI(xopt) = O. Hence, Xopt = x and UMLI gives an unbiased estimator in the present cases. Let us consider the expansion of V' Lp(r*, x) at x. V'Lp(r*,x) ~ V'Lp(r*,x) + V'V'Lp{r*,x) (x - x). (5) Since V' Lp(r*, x) = 0, ~ V'V'Lp{r*,x) (x - x) ~ - ~ V'Lp(r*,x), (6) where N is the number of neurons. Only the large N limit is considered in the present study. Let us analyze the properties of the two random variables ~ V'V' Lp (r* , x) and ~ V' Lp(r*, x). We consider first the uniform correlation model. For the uniform correlation structure, we can write r; = /i(x) + O"(Ei + 11), (7) where 11 and {Ei}, for i = 1,???, N, are independent random variables having zero mean and variance c and 1 - c, respectively. 11 is the common noise for all neurons, representing the uniform character of the correlation. By using the expression (7), we get ~ V'Lp{r*,x) ;0" L Ed: (x) + ;0" L fI (x), i + (8) . ;0" Lf:'(x). (9) t Without loss of generality, we assume that the distribution of the preferred stimuli is uniform. For the radial symmetry tuning functions, ~ Li fI(x) and ~ Li fI'(x) approaches zero when N is large. Therefore, the correlation contributions (the terms of 11) in the above two equations can be neglected. UMLI performs in this case as if the neuronal signals are uncorrelated. 
Thus, by the weak law of large numbers, ~ V'V' Lp(r*, x) (10) where Qp == Eq[V'V' Lp(r, x)]. According to the central limit theorem, V' Lp (r*, x) / N converges to a Gaussian distribution ~ V'Lp{r*,x) N(O, ~~O"~ N(O, ~~), LfHx)2) (11) where N(O, t 2 ) denoting the Gaussian distribution having zero mean and variance t, and Gp == Vq[V'Lp(r, x)]. 195 Population Decoding Based on an UnfaithfUl Model Combining the results of eqs.(6), (10) and (11), we obtain the decoding error of UMLI, (x - x)UMLI N(O , Q;2Gp), (1 - c)a 2 N(O , L i fHx)2)? = (12) In the similar way, the decoding error of FMLI is obtained, (x - x)FMLI = N(O, Q~2Gq) , (1 - c)a 2 N(O , Li fI(x)2) ' (13) which has the same form as that of UMLI except that Qq and Gq are now defined with respect to the faithful decoding model, i.e., p(rlx) = q(rlx) . To get eq.(13), the condition L i fI(x) = is used. Interestingly, UMLI and FMLI have the same decoding error. This is because the uniform correlation effect is actually neglected in both UMLI and FMLI. ? Note that in FMLI, Qq = Gq = Vq[\7 Lq(rlx)] is the Fisher information . Q-;;2Gq is the Cramer-Rao bound, which is the optimal accuracy for an unbiased estimator to achieve. Eq.(13) shows that FMLI is asymptotically efficient. For an unfaithful decoding model, Qp and Gp are usually different from the Fisher information. We call Q;2Gp the generalized Cramer-Rao bound, and UMLI quasi-asymptotically efficient if its decoding error approaches Q;2Gp asymptotically. Eq.( 12) shows that UMLI is quasi-asymptotic efficient. In the above, we have proved the asymptotic efficiency of FMLI and UMLI when the neuronal correlation is uniform. The result relies on the radial symmetry of the tuning function and the uniform character of the correlation, which make it possible to cancel the correlation contributions from different neurons. For general tuning functions and correlation structures, the asymptotic efficiency of UMLI and FMLI may not hold. This is because the law of large numbers (eq.(IO? and the central limit theorem (eq.(II? are not in general applicable. We note that for the limited-range correlation model, since the correlation is translational invariant and its strength decreases quickly with the dissimilarity in the neurons' preferred stimuli, the correlation effect in the decoding of FMLI and UMLI becomes negligible when N is large. This ensures that the law of large numbers and the central limit theorem hold in the large N limit. Therefore, UMLI and FMLI are asymptotically efficient. This is confirmed in the simulation in Sec.3. When UMLI and FMLI are asymptotic efficient, their decoding errors in the large N limit can be calculated according to the Cramer-Rao bound and the generalized Cramer-Rao bound, respectively, which are a2L L 3 AidI(x)fj(x) [L i UI(X))2J2 a2 i j Aijl f;(x)fj(x)? ij (14) (15) Performance Comparison The performance of UMLI is compared with that of FMLI and of the center of mass decoding method (COM) . The neural population model we consider is a regular array of N neurons (Baldi and Heiligenberg, 1988; Snippe, 1996) with the preferred stimuli uniformly distributed in the range [- D , DJ, that is, Ci = -D + 2iD /(N + 1), for i = 1, ? .. ,N . The comparison is done at the stimulus x = 0. s. 196 Wu, H. Nakahara, N. Murata and S. 
COM is a simple decoding method that uses no information about the encoding process; its estimate is the average of the neurons' preferred stimuli weighted by the responses (Georgopoulos et al., 1982; Snippe, 1996), i.e.,

\hat{x} = \frac{\sum_i r_i c_i}{\sum_i r_i}.   (16)

The shortcoming of COM is a large decoding error. For the population model we consider, the decoding error of COM is calculated to be (17), where the condition \sum_i f_i(x) c_i = 0 is used, due to the regularity of the distribution of the preferred stimuli.

The tuning function is Gaussian,

f_i(x) = \exp\Big[-\frac{(x - c_i)^2}{2a^2}\Big],   (18)

where the parameter a is the tuning width. We note that the Gaussian response model does not assign zero probability to negative firing rates. To make the model more reliable, we set r_i = 0 whenever f_i(x) < 0.011 (i.e., |x - c_i| > 3a), which means that only those neurons which are sufficiently active contribute to the decoding. It is easy to see that this cut-off has little effect on the results of UMLI and FMLI, since they decode using the derivatives of the tuning functions; the decoding error of COM, in contrast, would be greatly enlarged without the cut-off. For tuning width a, there are N = Int[6a/d - 1] neurons involved in the decoding process, where d is the difference between the preferred stimuli of two consecutive neurons and Int[.] denotes the integer part of its argument.

In all experimental settings, the parameters are chosen as a = 1 and \sigma = 0.1. The decoding errors of the three methods are compared for different values of N when the correlation strength is fixed (c = 0.5 for the uniform correlation case and b = 0.5 for the limited-range correlation case), and for different values of the correlation strength when N is fixed at 50.

Fig. 1 compares the decoding errors of the three methods for the uniform correlation model. It shows that UMLI has the same decoding error as FMLI, and a lower error than COM. The uniform correlation improves the decoding accuracies of all three methods (Fig. 1b).

In Fig. 2, the simulated decoding errors of FMLI and UMLI in the limited-range correlation model are compared with those obtained from the Cramér-Rao bound and the generalized Cramér-Rao bound, respectively. The two agree very well when the number of neurons N is large, which means that FMLI and UMLI are asymptotically efficient, as analyzed above. In the simulation, standard gradient ascent is used to maximize the log-likelihood, and the initial guess for the stimulus is the preferred stimulus of the most active neuron. The CPU time of UMLI is around 1/5 of that of FMLI: UMLI reduces the computational cost of FMLI significantly.

Fig. 3 compares the decoding errors of the three methods for the limited-range correlation model. It shows that UMLI has a lower decoding error than COM. Interestingly, UMLI has performance comparable to FMLI over the whole range of correlation strengths. The limited-range correlation degrades the decoding accuracies of the three methods when the strength is small and improves them when the strength is large (Fig. 3b).
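For concreteness, here is a minimal sketch of the COM estimator of eq. (16) with the Gaussian tuning curves of eq. (18) and the 3a cut-off described above. This is our own illustrative code, not code from the paper, and for simplicity it uses independent noise:

import numpy as np

def com_decode(r, centers):
    # Center-of-mass estimate, eq. (16)
    return np.sum(r * centers) / np.sum(r)

def tuning(x, centers, a=1.0):
    return np.exp(-(x - centers) ** 2 / (2 * a ** 2))

rng = np.random.default_rng(1)
N, D, a, sigma = 50, 3.0, 1.0, 0.1
centers = -D + 2 * D * np.arange(1, N + 1) / (N + 1)
r = tuning(0.0, centers, a) + sigma * rng.standard_normal(N)
r[np.abs(0.0 - centers) > 3 * a] = 0.0   # cut-off: only active neurons decode
print(com_decode(r, centers))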
Figure 1: Comparing the decoding errors of UMLI, FMLI and COM for the uniform correlation model. (a) Decoding error as a function of N; (b) decoding error as a function of the correlation strength c.

Figure 2: Comparing the simulated decoding errors of UMLI and FMLI in the limited-range correlation model with those obtained from the Cramér-Rao bound and the generalized Cramér-Rao bound, respectively. CRB denotes the Cramér-Rao bound, GCRB the generalized Cramér-Rao bound, and SMR the simulation result; curves are shown for b = 0.5 and b = 0.8. In the simulation, 10 sets of data are generated, each averaged over 1000 trials. (a) FMLI; (b) UMLI.

4 Discussions and Conclusions

We have studied a population decoding paradigm in which MLI is based on an unfaithful model. This is motivated by the fact that the encoding process of the brain is not exactly known to the estimator. As an example, we considered an unfaithful decoding model which neglects the pair-wise correlation between neuronal activities. Two different correlation structures were considered, namely, the uniform and the limited-range correlations. The performance of UMLI was compared with that of FMLI and COM. It turns out that UMLI has a lower decoding error than COM. Compared with FMLI, UMLI has comparable performance at much lower computational cost. It is future work to understand the biological implications of UMLI.

As a by-product of the calculation, we also illustrated the effect of correlation on decoding accuracy. It turns out that the correlation, depending on its form, can either improve or degrade the decoding accuracy. This observation agrees with the analysis of Abbott and Dayan (Abbott and Dayan, 1999), which is done with respect to the optimal decoding accuracy, i.e., the Cramér-Rao bound.

Figure 3: Comparing the decoding errors of UMLI, FMLI and COM for the limited-range correlation model. (a) Decoding error as a function of N; (b) decoding error as a function of the correlation strength b.

Acknowledgment

We thank the three anonymous reviewers for their valuable comments and insightful suggestions. S. Wu acknowledges helpful discussions with Danmei Chen.

References

L. F. Abbott and P. Dayan. 1999. The effect of correlated variability on the accuracy of a population code. Neural Computation, 11:91-101.

M. Akahira and K. Takeuchi. 1981. Asymptotic efficiency of statistical estimators: concepts and higher order asymptotic efficiency. In Lecture Notes in Statistics 7.

C. H. Anderson. 1994. Basic elements of biological computational systems. International Journal of Modern Physics C, 5:135-137.

P. Baldi and W. Heiligenberg. 1988. How sensory maps could enhance resolution through ordered arrangements of broadly tuned receivers. Biol. Cybern., 59:313-318.

A. P. Georgopoulos, J. F. Kalaska, R. Caminiti, and J. T. Massey. 1982. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J. Neurosci., 2:1527-1537.

K. O. Johnson. 1980. Sensory discrimination: neural processes preceding discrimination decision. J. Neurophysiol., 43:1793-1815.
M. Murata, S. Yoshizawa, and S. Amari. 1994. Network information criterion - determining the number of hidden units for an artificial neural network model. IEEE Trans. Neural Networks, 5:865-872.

A. Pouget, K. Zhang, S. Deneve, and P. E. Latham. 1998. Statistically efficient estimation using population coding. Neural Computation, 10:373-401.

E. Salinas and L. F. Abbott. 1994. Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1:89-107.

H. P. Snippe and J. J. Koenderink. 1992. Information in channel-coded systems: correlated receivers. Biological Cybernetics, 67:183-190.

H. P. Snippe. 1996. Parameter extraction from population codes: a critical assessment. Neural Computation, 8:511-529.

R. S. Zemel, P. Dayan, and A. Pouget. 1998. Probabilistic interpretation of population codes. Neural Computation, 10:403-430.
1752 |@word trial:1 simulation:6 covariance:2 aijl:1 initial:1 hereafter:1 denoting:1 tuned:1 interestingly:2 wako:1 com:16 comparing:3 dx:1 motor:1 discrimination:2 guess:1 contribute:1 lx:1 zhang:1 ik:1 prove:1 baldi:2 ra:1 brain:5 decreasing:2 cpu:1 becomes:1 notation:1 maximizes:2 mass:3 exactly:2 normally:1 unit:1 negligible:1 limit:6 io:1 encoding:9 id:1 firing:3 fluctuation:1 interpolation:1 might:1 therein:1 studied:4 limited:9 range:12 statistically:1 averaged:2 faithful:4 acknowledgment:1 lf:1 xopt:7 significantly:1 radial:3 inp:2 regular:1 get:2 convenience:1 caminiti:1 context:1 cybern:1 map:1 reviewer:1 shi:1 center:3 go:1 resolution:1 pouget:3 estimator:8 insight:1 array:1 population:22 qq:2 discharge:1 decode:1 element:1 satisfying:1 particularly:1 cut:2 ensures:1 decrease:1 movement:1 valuable:1 complexity:3 ui:1 seung:1 neglected:2 efficiency:5 joint:1 riken:2 shortcoming:1 ole:1 artificial:1 zemel:2 choosing:1 salina:2 whose:1 amari:6 statistic:1 gp:5 advantage:2 reconstruction:1 mli:7 gq:4 product:1 j2:1 combining:1 achieve:1 regularity:1 converges:1 oo:3 illustrate:1 depending:1 ij:3 eq:10 paradiso:1 direction:1 snippe:5 anonymous:1 biological:3 hold:2 around:1 considered:3 cramer:11 exp:1 sma:1 consecutive:1 a2:1 estimation:2 applicable:1 agrees:1 weighted:1 gaussian:5 encode:1 likelihood:6 greatly:1 sense:1 helpful:1 inference:3 dayan:6 lj:1 hidden:1 relation:1 quasi:2 translational:2 among:1 equal:1 saving:2 having:3 extraction:1 sampling:1 cancel:1 future:1 stimulus:10 modern:1 interest:1 smr:3 analyzed:1 implication:1 neglecting:1 unfaithful:15 rao:11 cost:5 saitama:1 uniform:14 johnson:3 international:1 off:2 physic:1 decoding:57 enhance:1 rlx:13 quickly:1 hirosawa:1 central:3 koenderink:2 derivative:1 li:4 japan:1 account:1 coding:4 sec:1 int:2 analyze:1 contribution:2 accuracy:11 takeuchi:2 variance:3 murata:6 generalize:1 weak:1 confirmed:1 cybernetics:1 suffers:1 ed:1 involved:1 yoshizawa:1 proved:1 knowledge:1 improves:2 oooo:2 actually:1 response:3 ooo:1 done:2 anderson:2 generality:1 correlation:41 ei:3 assessment:1 effect:6 concept:1 unbiased:2 hence:2 ll:1 width:2 criterion:1 generalized:5 latham:1 performs:1 fj:2 heiligenberg:2 wise:4 recently:1 fi:6 misspecified:1 common:1 qp:2 jp:1 jl:2 discussed:1 ai:4 rd:1 tuning:10 dj:1 cortex:1 attracting:1 multivariate:1 preceding:1 paradigm:5 maximize:1 signal:1 ii:5 full:1 reduces:1 calculation:1 kalaska:1 coded:1 basic:1 df:1 cell:1 whereas:3 comment:1 call:2 integer:1 enough:1 easy:1 boo:5 det:1 expression:1 motivated:1 crb:3 neuroscience:1 broadly:1 write:1 ce:1 abbott:7 massey:1 asymptotically:5 deneve:1 wu:5 decision:1 comparable:2 bound:11 distinguish:1 activity:6 strength:6 georgopoulos:3 ri:1 wc:1 argument:1 expanded:1 hiro:1 according:2 mura:1 character:2 lp:18 primate:1 invariant:1 umu:2 equation:1 vq:3 agree:1 lexp:1 turn:4 denotes:3 assumes:1 include:1 maintaining:2 a2l:1 neglect:4 unchanged:1 ooo5:1 arrangement:1 degrades:1 gradient:1 thank:1 degrade:1 reason:1 code:5 expense:1 negative:1 neuron:15 observation:2 descent:1 variability:1 lb:1 introduced:1 pair:4 namely:1 kl:1 trans:1 usually:2 reliable:1 critical:1 natural:1 treated:1 arm:1 representing:1 improve:1 acknowledges:1 understanding:1 determining:1 asymptotic:8 law:3 loss:1 lecture:1 suggestion:1 oos:1 uncorrelated:1 aij:2 understand:1 institute:1 taking:1 distributed:2 calculated:2 sensory:2 simplified:1 far:1 preferred:8 implicitly:1 keep:1 active:2 receiver:2 assumed:2 nature:1 channel:1 symmetry:3 expansion:1 complex:1 neurosci:1 
whole:1 noise:1 neuronal:6 enlarged:1 fig:5 cil:1 lq:1 fmu:2 ix:1 theorem:3 ci:3 dissimilarity:1 chen:1 ordered:1 determines:2 relies:1 b1i:1 conditional:1 nakahara:4 fisher:2 except:1 uniformly:1 called:2 invariance:1 la:1 people:1 biol:1 correlated:3
823
1,753
An Analysis of Turbo Decoding with Gaussian Densities

Paat Rusmevichientong and Benjamin Van Roy
Stanford University
Stanford, CA 94305
{paatrus, bvr}@stanford.edu

Abstract

We provide an analysis of the turbo decoding algorithm (TDA) in a setting involving Gaussian densities. In this context, we are able to show that the algorithm converges and that - somewhat surprisingly - though the density generated by the TDA may differ significantly from the desired posterior density, the means of these two densities coincide.

1 Introduction

In many applications, the state of a system must be inferred from noisy observations. Examples include digital communications, speech recognition, and control with incomplete information. Unfortunately, problems of inference are often intractable, and one must resort to approximation methods. One approximate inference method that has recently generated spectacular success in certain coding applications is the turbo decoding algorithm [1, 2], which bears a close resemblance to message-passing algorithms developed in the coding community a few decades ago [4]. It has been shown that the TDA is also related to well-understood exact inference algorithms [5, 6], but its performance on the intractable problems to which it is applied has not been explained through this connection.

Several other papers have further developed an understanding of the turbo decoding algorithm. The exact inference algorithms to which turbo decoding has been related are variants of belief propagation [7]. However, this algorithm is designed for inference problems in which the graphical models describing conditional independencies form trees, whereas the graphical models associated with turbo decoding possess many loops. To understand the behavior of belief propagation in the presence of loops, Weiss has analyzed the algorithm for cases where only a single loop is present [11]. Other analyses that have shed significant light on the performance of the TDA in its original coding context include [8, 9, 10].

In this paper, we develop a new line of analysis for a restrictive setting in which the underlying distributions are Gaussian. In this context, inference problems are tractable and the use of approximation algorithms such as the TDA is unnecessary. However, studying the TDA in this setting enables a streamlined analysis that generates new insights into its behavior. In particular, we will show that the algorithm converges and that the mean of the resulting distribution coincides with that of the
Let YI and Y2 be two random variables that are conditionally independent given x. For example, YI and Y2 might represent outcomes of two independent transmissions of the signal x over a noisy communication channel. If YI and Y2 are observed, then one might want to infer a posterior density f for x conditioned on YI and Y2. This can be obtained by first computing densities pi and where the first is conditioned on YI and the second is conditioned on Y2. Then, P2, f = a (P~:2), where a is a "normalizing operator" defined by - ag = 9 J g(x)dX' and multiplication/division are carried out pointwise. Unfortunately, the problem of computing f is generally intractable. The computational burden associated with storing and manipulating high-dimensional densities appears to be the primary obstacle. This motivates the idea of limiting attention to densities that factor. In this context, it is convenient to define an operator 71' that generates a density that factors while possessing the same marginals as another density. In particular, this operator is defined by ("9)(') '" !! l ..... ~ I?? ?? J 9(x)dX A dXi for all densities 9 and all a E ~n, where dx /\ dXi = dXI'" dXi-Idxi+I ... dXn. One may then aim at computing 7l'f as a proxy for f. Unfortunately, even this problem is generally intractable. The TDA can be viewed as an iterative algorithm for approximating 71' f. Let operators FI and F2 be defined by FIg = and a (( 7l'~:) ~ ) , An Analysis o/Turbo Decoding with Gaussian Densities 577 for any density g. The TDA is applicable in cases where computation of these two operations is tractable. The algorithm generates sequences qi k) and q~k) according to (HI) _ F (k) d (HI) _ D (k) ql 1 q2 an q2 - r2qI . initialized with densities qiO) and q~O) that factor. The hope is that Ci.(qik)q~k) /Po) converges to an approximation of 7r f. 3 The Gaussian Case We will consider a setting in which joint density of x, Yl, and Y2, is Gaussian. In this context, application of the TDA is not warranted - there are tractable algorithms for computing conditional densities when priors are Gaussian. Our objective, however, is to provide a setting in which the TDA can be analyzed and new insights can be generated. Before proceeding, let us define some notation that will facilitate our exposition. We will write 9 "-' N(/-Lg, ~g) to denote a Gaussian density 9 whose mean vector and covariance matrix are /-Lg and ~g, respectively. For any matrix A, b"(A) will denote a diagonal matrix whose entries are given by the diagonal elements of A. For any diagonal matrices X and Y, we write X ~ Y if Xii ~ Yii for all i. For any pair of nonsingular covariance matrices ~u and ~v such that ~;; 1 + ~; 1 - I is nonsingular, let a matrix AEu .E" be defined by A Eu. E " == (~-l u + ~-l v _ I)-I. To reduce notation, we will sometimes denote this matrix by Auv. When the random variables x, Yt, and Y2 are jointly Gaussian, the densities f, and Po are also Gaussian. We let pi, P2' pi "-' N(/-Ll, ~l)' P2 "-' N(/-L2, ~2)' f "-' N(/-L, ~), and assume that both ~l and ~2 are symmetric positive definite matrices. We will also assume that Po "-' N(O, I) where I is the identity matrix. It is easy to show that A E1 ?E2 is well-defined. The following lemma provides formulas for the means and covariances that arise from multiplying and rescaling Gaussian densities. The result follows from simple algebra, and we state it without proof. Lemma 1 Let u "-' N(/-Lu, ~u) and v "-' N(/-Lv, ~v), where definite. If ~;;l + ~;l - I is positive definite then Ci. 
(;~) "-' N (Auv ~u and ~v are positive (~~l /-Lu + ~;l/-Lv) ,Auv) . One immediate consequence of this lemma is an expression for the mean of f: /-L = AE 1.E2 (~ll/-Ll + ~2l/-L2). Let S denote the set of covariance matrices that are diagonal and positive definite. Let 9 denote the set of Gaussian densities with covariance matrices in S. We then have the following result, which we state without proof. Lemma 2 The set 9 is closed under Fl and F2 ? If the TDA is initialized with qiO), q~O) E g, this lemma allows us to represent all iterates using appropriate mean vectors and covariance matrices. P. Rusmevichientong and B. V. Roy 578 3.1 Convergence Analysis Under suitable technical conditions, it can be shown that the sequence of mean vectors and covariance matrices generates by the TDA converges. Due to space limitations, we will only present results pertinent to the convergence of covariance matrices. FUrthermore, we will only present certain central components of the analyses. For more complete results and detailed analyses, we refer the reader to our upcoming full-length paper. Recall that the TDA generates sequences qi k ) and q~k) according to (HI) qI = F Iq2(k) an d (HI) q2 = D (k) L'2qI . As discussed earlier, if the algorithm is initialized with elements of 9, by Lemma 2, q1(k) '" N (m(k) E(k)) 1 , 1 and q(k) '" 2 N (m(k) ~(k)) 2 '~2 , for appropriate sequences of mean vectors and covariance matrices. It turns out that there are mappings 7i : S 1--+ S and 72 : S 1--+ S such that Eik +1) = (Elk)) , To establish convergence of Elk) and E~k), it suffices to 7i (E~k)) and E~k+1) = 72 for all k. Let T == 7i 072. show that Tn(E~O)) converges. The following theorem establishes this and further points out that the limit does not depend on the initial iterates. Theorem 1 There exists a matrix V* E S such that lim m(V) = V*, n->oo for all V E S. 3.1.1 Preliminary Lemmas Our proof of Theorem 1 relies on a few lemmas that we will present in this section. We begin with a lemma that captures important abstract properties of the function T. Due to space constraints, we omit the proof, even though it is nontrivial. Lemma 3 (a) There exists a matrix DES such that for all DES, D ::; T(D) ::; f. (b) For all X, YES, if X::; Y then T(X) ::; T(Y). (c) The function T is continuous on S. (d) For all f3 E (0,1) and DES, (f3 + o:)T (D) ::; T (f3D) for some 0: > o. The following lemma establishes convergence when the sequence of covariance matrices is initialized with the identity matrix. Lemma 4 The sequence Tn (f) converges in S to a fixed point of T. Proof; By Lemma 3(a), T(1) ::; f, and it follows from monotonicity of T (Lemma 3(b)) that Tn+1(I) ::; Tn(I) for all n. Since Tn(I) is bounded below by a matrix DES, the sequence converges in S. The fact that the limit is a fixed point of T follows from the continuity of T (Lemma 3( c) ). ? Let V* = limn->oo Tn(I). This matrix plays the following special role. Lemma 5 The matrix V* is the unique fixed point in S of T. An Analysis of Turbo Decoding with Gaussian Densities 579 Proof: Because Tn (1) converges to V* and T is monotonic, no matrix V E S with V i= V* and V* ::; V ::; I can be a fixed point. Furthermore, by Lemma 3(a), no matrix V E S with V ~ I and V i= I can be a fixed point. For any V E S with V::; V*, let f3v = sup {f3 E (0, 111f3V* ::; V} . For any V E S with V i= V* and V ::; V*, we have f3v < 1. For such a V, by Lemma 3(d), there is an a > 0 such that T(f3vV*) ~ (f3v + a)V*, and therefore T(V) i= V. The result follows. ? 
3.1.2 Proof of Theorem 1 Proof: For V E S with V* ::; V ::; I convergence to V* follows from Lemma 4 and monotonicity (Lemma 3(b)). For V E S with V ~ I, convergence follows from the fact that V* ::; T(V) ::; I, which is a consequence of the two previously invoked lemmas together with Lemma 3(a). Let us now address the case of V E S with V ::; V*. Let f3v be defined as in the proof of Lemma 5. Then, f3v V* ::; T (f3v V*). By monotonicity, Tn (f3v V*) ::; Tn+I(f3v V*) ::; V* for all n. It follows that Tn(f3v V*) converges, and since T is continuous, the limit must be the unique fixed point V*. We have established convergence for elements V of S satisfying V ::; V* or V ~ V*. For other elements ? of S, convergence follows from the monotonicity of T. 3.2 Analysis of the Fixed Point As discussed in the previous section, under suitable conditions, FI 0 F2 and F2 0 FI each possess a unique fixed point, and the TDA converges on these fixed points. Let qi ,...., N (f-Lq~ , Eq~) and q2 ,...., N (f-Lq2' Eq* ) denote the fixed points of FI 0 F2 and F2 0 F I , respectively. Based on Theorem 1, Eq~ and Eqi are in S. The following lemma provides an equation relating means associated with the fixed AEI E *' and AE * E2' which are used in points. It is not hard to show that Aq*q*, 1 2 ' q2 ql ' the statement, are well-defined. Lemma 6 Aq~qi (E;~lf-Lq; + E~lf-Lqi) = AE1 ,Eq2 (E1lf-LI + E~If-Lq2) = AEq~,E2 (E;;lf-Lq~ + E2"If-L2) Proof: It follows from the definitions of FI and F2 that, if qi = F l q2 and q2 = F2qi, a ql* q2* = Po * * = Po a7r PI q2 * *. Po a7r qlP2 The result then follows from Lemma 1 and the fact that of a distribution. 7r does not alter the mean ? We now prove a central result of this paper: the mean of the density generated by the TDA coincides with the mean f-L of the desired posterior density f. Theorem 2 a (qi q2/po) ,...., N (f-L, Aq; qi ) Proof: By Lemma 1, f-L = AE 1 ,E 2 (E1lf-LI is Aq~q2 (E;~l f-Lq; + E;i f-Lqi)' + E2"If-L2) , while the mean of a(qiq2lpo) We will show that these two expressions are equal. 580 , P Rusmevichientong and B. V. Roy . . ~\ Figure 1: Evolution of errors. Multiplying the equations from Lemma 6 by appropriate matrices, we obtain Aq*q* A~11 , Eq:i Aq*q* 1 2 1 2 1 2 (2:-.I j.Lq*1 + 2:-. ql q2 j.Lq*) = Aq*q* 1 2 (2:11 j.Ll + 2:-} , q2 j.Lq*) 2 and It follows that ( Aq~q:i (A~;,Eq:i +A~:~ ,E2) - I ) Aq~q:i (2:;il j.Lq~ + 2:;:i1j.Lq:i) = Aqiq:i (2:11j.Ll + 2:2'1 j.L2) , and therefore ( A~IE q:i +A~1 1, q~ , E 2 -Aq-.I (2: q-.Ij.Lq* +2: q-.Ij.Lq*) = 2:11j.Ll+2:2'1j.L2' q*) Aq*q* 1 2 1 2 1 1 2 2 ? 4 Discussion and Experimental Results The limits of convergence qi and q2 of the TDA provide an approximation a( qi q2 / po) to 7r f. We have established that the mean of this approximation coincides with that of the desired density. One might further expect that the covariance matrix of a(qiq2/PO) approximates that of 7r f, and even more so, that qi and q2 bear some relation to pi and P2' Unfortunately, as will be illustrated by experimental results in this section, such expectations appear to be inaccurate. We performed experiments involving 20 and 50 dimensional Gaussian densities (Le., x was either 20 or 50 dimensional in each instance). Problem instances were sampled randomly from a fixed distribution. Due to space limitations, we will not describe the tedious details of the sampling mechanism. Figure 1 illustrates the evolution of certain "errors" during representative runs of the TDA on 20-dimensional problems. 
The first graph plots relative errors in means of densities a(q~n)q~n) /po) generated by iterates of the TDA. As indicated by our analysis, these errors converge to zero . The second chart plots a measure of relative error for the covariance of a(q~n)q~n) /po) versus that of 7rf for representative runs. Though these covariances converge, the ultimate errors are far from zero. The two An Analysis of Turbo Decoding with Gaussian Densities 581 Figure 2: Errors after 50 iterations. final graphs plot errors between the means of qin ) and q~n) and those of pi and pi , respectively. Again, though these means converge, the ultimate errors can be large. Figure 2 provides plots of the same sorts of errors measured on 1000 different instances of 50-dimensional problems after the 50th iteration of the TDA. The horizontal axes are labeled with indices of the problem instances. Note that the errors in the first graph are all close to zero (the units on the vertical axis must be multiplied by 10- 5 and errors are measured in relative terms). On the other hand, errors in the other graphs vary dramatically. It is intriguing that - at least in the context of Gaussian densities - the TDA can effectively compute conditional means without accurately approximating conditional densities. It is also interesting to note that, in the context of communications, the objective is to choose a code word x that is comes close to the transmitted code x. One natural way to do this involves assigning to x the code word that maximizes the conditional density J, i.e., the one that has the highest chance of being correct. In the Gaussian case that we have studied, this corresponds to the mean of J - a quantity that is computed correctly by the TDA! It will be interesting to explore generalizations of the line of analysis presented in this paper to other classes of densities. References [lJ S. Benedetto and G. Montorsi, "Unveiling turbo codes: Some results on parallel concatenated coding schemes," in IEEE Trans. Inform. Theory, vol. 42 , pp. 409-428 , Mar. 1996. {2] G. Berrou, A. Glavieux, a.nd P. Thitimajshima, "Near Shannon limit error-correcting coding: 'TUrbo codes," in Proc. 1998 Int. Conf. Commun., Geneva, Switzerland, May 1993 , pp. 1064-1070. [3J B. Frey, "Turbo Factor Analysis." To appear in Advances in Neural Information Processing Systems 11J. [4J R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press , 1963. [5J F. R. Kschischang and B . J. Frey, "Iterative Decoding of Compound Codes by Probability Propagation in Graphical Models," in IEEE Journal on Sel ected Areas in Commun., vol. 16, 2, pp. 219-230, Feb. 1998. [6J R. J. McEliece, D. J . C. MacKay, and J-F. Cheng, "Turbo Decod ing as an Instance of Pearl' s " Belief Propagation" Algorithm, " in IEEE Journal on Selected Areas in Commun., vol. 16, 2, pp . 140-152 , Feb. 1998. [7] J. Pearl, Probabuistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann, 1988. [8] T . Richardson, "The Geometry of Turbo-Decoding Dynamics," Dec. 1998. To appear in IEEE Trans. Inform. Th eory. [9J T. Richardson and R. Urbanke, "The Capacity of Low-Density Parity Check Codes under Message-Passing Decoding", submitted to the IEEE Trans. on Information Th eory. [10J T. Richardson, A. Shokrollahi, and R . Urbanke, "Design of Provably Good Low-Density Parity Check Codes," submitted to the IEEE Trans. on Information Th eory. [l1J Y . Weiss, " Belief Propagation and Revision in Networks with Loops, " November 1997. 
Available by ftp from publications.ai.mit.edu.
[12] Y. Weiss and W. T. Freeman, "Correctness of belief propagation in Gaussian graphical models of arbitrary topology." To appear in Advances in Neural Information Processing Systems 12.
1753 |@word nd:1 tedious:1 open:1 covariance:13 q1:1 lq2:2 initial:1 assigning:1 dx:3 must:4 intriguing:1 enables:1 pertinent:1 designed:1 plot:4 selected:1 iterates:3 provides:3 initiative:1 prove:1 f3v:10 behavior:2 shokrollahi:1 freeman:2 revision:1 begin:1 underlying:1 notation:2 bounded:1 maximizes:1 q2:16 developed:2 ag:1 shed:1 control:1 unit:1 omit:1 appear:4 before:1 positive:4 understood:1 frey:3 limit:5 consequence:2 might:3 studied:2 mateo:1 unique:3 definite:4 lf:3 area:2 empirical:1 significantly:1 convenient:1 word:2 close:3 operator:4 context:8 spectacular:1 yt:1 attention:1 correcting:1 insight:2 limiting:1 play:1 exact:2 element:4 roy:4 recognition:1 satisfying:1 labeled:1 observed:1 role:1 capture:1 cycle:1 eu:1 highest:1 benjamin:1 dynamic:1 depend:1 algebra:1 division:1 f2:7 ae1:1 po:12 joint:1 aei:1 describe:1 outcome:1 whose:2 stanford:3 plausible:1 richardson:3 jointly:1 noisy:2 final:1 sequence:7 qin:1 yii:1 paatrus:1 loop:4 decod:1 convergence:9 transmission:1 converges:11 paat:1 ftp:1 oo:2 develop:1 measured:2 ij:2 eq:5 p2:4 involves:1 come:1 differ:1 switzerland:1 correct:1 suffices:1 generalization:1 preliminary:1 a7r:2 mapping:1 vary:1 proc:1 applicable:1 correctness:1 odels:1 establishes:2 hope:1 mit:2 gaussian:20 aim:1 sel:1 publication:1 ax:1 check:3 inference:7 inaccurate:1 lj:1 relation:1 manipulating:1 provably:1 issue:1 special:1 mackay:1 equal:1 aware:1 f3:3 sampling:1 preparing:1 alter:1 eik:1 intelligent:1 few:2 randomly:1 geometry:1 message:2 iq2:1 analyzed:2 light:1 aeu:1 tree:1 incomplete:1 urbanke:2 initialized:4 desired:4 instance:5 earlier:1 obstacle:1 entry:1 conducted:1 density:37 ie:1 yl:1 decoding:14 together:1 again:1 central:2 choose:1 conf:1 resort:1 rescaling:1 benedetto:1 li:2 elk:2 de:4 rusmevichientong:4 coding:5 int:1 performed:1 closed:1 analyze:1 sup:1 sort:1 parallel:1 il:1 chart:1 l1j:1 became:1 kaufmann:1 nonsingular:2 yes:1 generalize:1 accurately:1 lu:2 multiplying:2 ago:1 submitted:2 inform:2 definition:3 streamlined:1 pp:4 e2:7 associated:3 dxi:4 proof:11 sampled:1 eq2:1 recall:1 lim:1 organized:1 appears:1 lqi:2 wei:4 though:4 mar:1 furthermore:2 mceliece:1 working:1 hand:1 horizontal:1 propagation:8 continuity:1 indicated:1 resemblance:1 facilitate:1 true:1 y2:7 evolution:2 symmetric:1 illustrated:1 conditionally:1 ll:6 during:1 coincides:4 complete:1 eqi:1 tn:10 reasoning:1 invoked:1 recently:1 possessing:2 fi:5 discussed:2 he:1 approximates:1 relating:1 marginals:1 tda:22 significant:1 refer:1 cambridge:1 ai:1 aq:11 feb:2 posterior:5 commun:3 compound:1 certain:3 success:1 yi:5 transmitted:1 morgan:1 somewhat:1 employed:1 converge:3 berrou:1 signal:1 full:1 infer:1 ing:1 technical:1 e1:1 qi:12 involving:4 variant:1 ae:3 expectation:1 iteration:2 sometimes:1 represent:2 qik:1 dec:1 whereas:1 want:1 limn:1 posse:3 dxn:1 near:1 presence:1 easy:1 topology:1 reduce:1 idea:1 expression:2 introd:1 ultimate:2 speech:1 passing:2 dramatically:1 generally:2 detailed:1 thitimajshima:1 eory:3 correctly:1 xii:1 write:2 unveiling:1 vol:3 independency:1 graph:6 run:2 reader:1 fl:1 hi:4 cheng:1 turbo:16 auv:3 nontrivial:1 constraint:1 generates:5 ected:1 according:3 i1j:1 explained:1 equation:2 previously:1 describing:1 turn:1 mechanism:1 tractable:3 studying:2 available:1 operation:1 multiplied:1 appropriate:3 original:1 include:2 graphical:5 restrictive:1 concatenated:1 establish:1 approximating:2 upcoming:1 objective:2 quantity:1 primary:1 diagonal:4 capacity:1 bvr:1 length:1 code:9 pointwise:1 index:1 ql:4 unfortunately:4 lg:2 
statement:1 design:1 motivates:1 vertical:1 observation:1 november:1 immediate:1 communication:3 arbitrary:1 community:1 inferred:1 pair:1 connection:1 established:2 pearl:2 trans:4 address:1 able:2 below:1 rf:1 belief:7 suitable:2 natural:1 scheme:1 axis:1 carried:1 prior:2 understanding:1 l2:6 f3d:1 multiplication:1 relative:3 expect:1 bear:2 interesting:2 limitation:2 versus:1 lv:2 digital:1 proxy:1 storing:1 pi:7 surprisingly:1 parity:3 understand:1 taking:1 van:1 distributed:1 coincide:1 san:1 far:1 qio:2 approximate:1 geneva:1 clique:1 monotonicity:4 unnecessary:1 continuous:2 iterative:2 decade:1 channel:1 ca:2 kschischang:1 warranted:1 arise:1 fig:1 representative:2 lq:11 formula:1 theorem:6 normalizing:1 intractable:4 burden:1 uction:1 exists:2 effectively:1 ci:2 conditioned:3 illustrates:1 explore:1 gallager:1 monotonic:1 corresponds:1 chance:1 relies:1 ma:1 conditional:5 glavieux:1 viewed:1 identity:2 exposition:1 hard:1 lemma:28 experimental:3 shannon:1
824
1,754
Learning Factored Representations for Partially Observable Markov Decision Processes

Brian Sallans
Department of Computer Science, University of Toronto, Toronto M5S 2Z9, Canada
Gatsby Computational Neuroscience Unit*, University College London, London WC1N 3AR, U.K.
sallans@cs.toronto.edu

Abstract

The problem of reinforcement learning in a non-Markov environment is explored using a dynamic Bayesian network, where conditional independence assumptions between random variables are compactly represented by network parameters. The parameters are learned on-line, and approximations are used to perform inference and to compute the optimal value function. The relative effects of inference and value function approximations on the quality of the final policy are investigated, by learning to solve a moderately difficult driving task. The two value function approximations, linear and quadratic, were found to perform similarly, but the quadratic model was more sensitive to initialization. Both performed below the level of human performance on the task. The dynamic Bayesian network performed comparably to a model using a localist hidden state representation, while requiring exponentially fewer parameters.

1 Introduction

Reinforcement learning (RL) addresses the problem of learning to act so as to maximize a reward signal provided by the environment. Online RL algorithms try to find a policy which maximizes the expected time-discounted reward. They do this through experience, by performing sample backups to learn a value function over states or state-action pairs. If the decision problem is Markov in the observable states, then the optimal value function over state-action pairs yields all of the information required to find the optimal policy for the decision problem. When complete knowledge of the environment is not available, states which are different may look the same; this uncertainty is called perceptual aliasing [1], and causes decision problems to have dynamics which are non-Markov in the perceived state.

* Correspondence address

1.1 Partially observable Markov decision processes

Many interesting decision problems are not Markov in the inputs. A partially observable Markov decision process (POMDP) is a formalism in which it is assumed that a process is Markov, but with respect to some unobserved (i.e. "hidden") random variable. The state of the variable at time t, denoted s^t, is dependent only on the state at the previous time step and on the action performed. The currently-observed evidence is assumed to be independent of previous states and observations given the current state. The state of the hidden variable is not known with certainty, so a belief state is maintained instead. At each time step, the beliefs are updated by using Bayes' theorem to combine the belief state at the previous time step (passed through a model of the system dynamics) with newly observed evidence.

In the case of discrete time and finite discrete states and actions, a POMDP is typically represented by conditional probability tables (CPTs) specifying emission probabilities for each state, and transition probabilities and expected rewards for states and actions. This corresponds to a hidden Markov model (HMM) with a distinct transition matrix for each action. The hidden state is represented by a single random variable that can take on one of K values. Exact belief updates can be computed using Bayes' rule.

The value function is defined not over the discrete state, but over the real-valued belief state. It has been shown that the value function is piecewise linear and convex [2]. In the worst case, the number of linear pieces grows exponentially with the problem horizon, making exact computation of the optimal value function intractable.

Notice that the localist representation, in which the state is encoded in a single random variable, is exponentially inefficient: encoding n bits of information about the state of the process requires 2^n possible hidden states. This does not bode well for the ability of models which use this representation to scale up to problems with high-dimensional inputs and complex non-Markov structure.

1.2 Factored representations

A Bayesian network can compactly represent the state of the system in a set of random variables [3]. A two-time-slice dynamic Bayesian network (DBN) represents the system at two time steps [4]. The conditional dependencies between random variables from time t to time t+1, and within time step t, are represented by edges in a directed acyclic graph. The conditional probabilities can be stored explicitly, or parameterized by weights on edges in the graph. If the network is densely connected then inference is intractable [5]. Approximate inference methods include Markov chain Monte Carlo [6], variational methods [7], and belief state simplification [8].

In applying a DBN to a large problem there are three distinct issues to disentangle: how well does a parameterized DBN capture the underlying POMDP; how much is the DBN hurt by approximate inference; and how good must the approximation of the value function be to achieve reasonable performance? We try to tease these issues apart by looking at the performance of a DBN on a problem with a moderately large state space and non-Markov structure.

2 The algorithm

We use a fully-connected dynamic sigmoid belief network (DSBN) [9], with K units at each time slice (see Figure 1).
The value function is not over the discrete state, but over the real-valued belief state. It has been shown that the value function is piecewise linear and convex [2]. In the worst case, the number of linear pieces grows exponentially with the problem horizon, making exact computation of the optimal value function intractable. Notice that the localist representation, in which the state is encoded in a single random variable, is exponentially inefficient: Encoding n bits of information about the state of the process requires 2n possible hidden states. This does not bode well for the abilities of models which use this representation to scale up to problems with high-dimensional inputs and complex non-Markov structure. 1.2 Factored representations A Bayesian network can compactly represent the state of the system in a set of random variables [3]. A two time-slice dynamic Bayesian network (DBN) represents the system at two time steps [4]. The conditional dependencies between random variables from time t to time t + 1, and within time step t, are represented by edges in a directed acyclic graph. The conditional probabilities can be stored explicitly, or parameterized by weights on edges in the graph. If the network is densely-connected then inference is intractable [5]. Approximate inference methods include Markov chain Monte Carlo [6], variational methods [7], and belief state simplification [8]. In applying a DBN to a large problem there are three distinct issues to disentangle: How well does a parameterized DBN capture the underlying POMDP; how much is the DBN hurt by approximate inference; and how good must the approximation of the value function be to achieve reasonable performance? We try to tease these issues apart by looking at the performance of a DBN on a problem with a moderately large state-space and non-Markov structure. 2 The algorithm We use a fully-connected dynamic sigmoid belief network (DSBN) [9], with K units at each time slice (see figure 1). The random variables Si are binary, and conditional proba- B. Sa/lans 1052 Figure 1: Architecture of the dynamic sigmoid belief network. Circles indicate random variables, where a filled circle is observed and an empty circle is unobserved. Squares are action nodes, and diamonds are rewards. bilities relating variables at adjacent time-steps are encoded in action-specific weights: P(s~+1 = II{sD~=l,at) = a (twi~st) (1) k=l where wi~ is the weight from the ith unit at time step t to the kth unit at time step t + 1, assuming action at is taken at time t. The nonlinearity is the usual sigmoid function: a(x) = 1/1 +exp{ -x}. Note that a bias can be incorporated into the weights by clamping one of the binary units to 1. The observed variables are assumed to be discrete; the conditional distribution of an output given the hidden state is multinomial and parameterized by output weights. The probability of observing an output with value t is given by: K P(ot = ll{sDk =1) = exp {L:~1 Uklst} { K } t 101 L:m=l exp L: k=l UkmSk (2) where ot E 0 and Ukl denotes the output weight from hidden unit k to output value t. 2.1 Approximate inference Inference in the fully-connected Bayesian network is intractable. Instead we use a variational method with a fully-factored approximating distribution: P(stlst-1,a t - 1,ot) ~ Pst ~ K II,u~t(I-,uk)l-S~ k==l (3) where the ,uk are variational parameters to be optimized. This is the standard mean-field approximation for a sigmoid belief network [10]. 
The parameters,u are optimized by iterating the mean-field equations, and converge in a few iterations. The values of the variational parameters at time t are held fixed while computing the values for step t + 1. This is analogous to running only the forward portion of the HMM forward-backward algorithm [11]. The parameters of the DSBN are optimized online using stochastic gradient ascent in the log-likelihood: U f--- (4) 1053 Learning Factored Representations for POMDPs where Wand U are the transition and emission matrices respectively, aw and au are learning rates, the vector J-L contains the fully-factored approximate belief state, and 1/ is a vector of zeros with a one in the otth place. The notation [?]k denotes the kth element of a vector (or kth column of a matrix). 2.2 Approximating the value function Computing the optimal value function is also intractable. If a factored state-space representation is appropriate, it is natural (if extreme) to assume that the state-action value function can be decomposed in the same way: K L Qk (J-Lt, at) Q(Pst, at) ~ t::. Q F(J-L, at) (5) k=l This simplifying assumption is still not enough to make finding the optimal value function tractable. Even if the states were completely independent, each Qk would still be piecewise-linear and convex, with the number of pieces scaling exponentially with the horizon. We test two approximate value functions, a linear approximation: K L qk,a' J-Lk = [Q]a t T . J-L (6) + [blat (7) k=l and a quadratic approximation: K L ?k,at J-Lk + qk,a t J-Lk + bat k=l [~]at T . (J-L 0 J-L) + [Q]at T . J-L Where ~, Q and b are parameters of the approximations. The notation [?]i denotes the ith column of a matrix, [.]T denotes matrix transpose and 0 denotes element-wise vector multiplication. We update each term of the factored approximation with a modified Q-Iearning rule [12], which corresponds to a delta-rule where the target for input J-L is rt + 'Y maxa Q F (J-LH I , a): ?k,a t qk,a t bat ttt- ?k,a t + a J-Lk EB qk ,at + a J-Lk EB bat + a EB (8) Here a is a learning rate, 'Y is the temporal discount factor, and EB is the Bellman residual: (9) 3 Experimental results The "New York Driving" task [13] involves navigating through slower and faster one-way traffic on a multi-lane highway. The speed of the agent is fixed, and it must change lanes to avoid slower cars and move out of the way of faster cars. If the agent remains in front of a faster car, the driver of the fast car will honk its horn, resulting in a reward of -1.0. Instead of colliding with a slower car, the agent can squeeze past in the same lane, resulting in a reward of -10.0. A time step with no horns or lane-squeezes constitutes clear progress, and is rewarded with +0 .1. See [13] for a detailed description of this task. B. Sal/ans 1054 Table 1: Sensory input for the New York driving task Dimension Hear horn Gaze object Gaze speed Gaze distance Gaze refined distance Gaze colour I Size I Values 2 3 2 3 2 6 yes,no truck, shoulder, road looming, receding far, near, nose far-half, near-half red, blue, yellow, white, gray, tan A modified version of the New York Driving task was used to test our algorithm. The task was essentially the same as described in [13], except that the "gaze side" and "gaze direction" inputs were removed. See table 1 for a list of the modified sensory inputs. The performance of a number of algorithms and approximations were measured on the task: a random policy; Q-Iearning on the sensory inputs; a model with a localist representation (i.e. 
the hidden state consisted of a single multinomial random variable) with linear and quadratic approximate value functions; the DSBN with mean-field inference and linear and quadratic approximations; and a human driver. The localist representation used the linear Q-Iearning approximation of [14], and the corresponding quadratic approximation. The quadratic approximations were trained both from random initialization, and from initialization with the corresponding learned linear models (and random quadratic portion). The non-human algorithms were each trained for 100000 iterations, and in each case a constant learning rate of 0.01 and temporal decay rate of 0.9 were used. The human driver (the author) was trained for 1000 iterations using a simple character-based graphical display, with each iteration lasting 0.5 seconds. Stochastic policies were used for all RL algorithms, with actions being chosen from a Boltzmann distribution with temperature decreasing over time: (10) The DSBN had 4 hidden units per time slice, and the localist model used a multinomial with 16 states. The Q-Iearner had a table representation with 2160 entries. After training, each non-human algorithm was tested for 20 trials of 5000 time steps each. The human was tested for 2000 time steps, and the results were renormalized for comparison with the other methods. The results are shown in figure 2. All results were negative, so lower numbers indicate better performance in the graph. The error bars show one standard deviation across the 20 trials. There was little performance difference between the localist representation and the DSBN but, as expected, the DSBN was exponentially more efficient in its hidden-state representation. The linear and quadratic approximations performed comparably, but well below human performance. However, the DSBN with quadratic approximation was more sensitive to initialization. When initialized with random parameter settings, it failed to find a good policy. However, it did converge to a reasonable policy when the linear portion of the quadratic model was initialized with a previously learned linear model. The hidden units in the DSBN encode useful features of the input, such as whether a car was at the "near" or "nose" position. They also encode some history, such as current gaze direction. This has advantages over a simple stochastic policy learned via Q-Iearning: If the Q-Iearner knows that there is an oncoming car, it can randomly select to look left or right. The DSBN systematically looks to the left, and then to the right, wasting fewer actions. Learning Factored Representations for POMDPs 1055 4000 3500 3000 "1:l 2500 ~ ~2000 ~ I 1500 1000 4 Figure 2: Results on the New York Driving task for nine algorithms: R=random; Q=Q-Ieaming; LC=linear multinomial; QCR=quadratic multinomial, random init.; QCL=quadratic multinomial, linear init; LD=linear DSBN; QDR=quadratic DSBN, random init.; QDL=quadratic DSBN, linear init.; H=human Discussion The DSBN performed better than a standard Q-learner, and comparably to a model with a localist representation, despite using approximate inference and exponentially fewer parameters. This is encouraging, since an efficient encoding of the state is a prerequisite for tackling larger decision problems. 
Less encouraging was the value-function approximation: When compared to human performance, it is clear that all methods are far from optimal, although again the factored approximation of the DSBN did not hurt performance relative to the localist multinomial representation. The sensitivity to initialization of the quadratic approximation is worrisome, but the success of initializing from a simpler model suggests that staged learning may be appropriate, where simple models are learned and used to initialize more complex models. These findings echo those of [14] in the context of learning a non-factored approximate value function. There are a number of related works, both in the fields of reinforcement learning and Bayesian networks. We use the sigmoid belief network mean-field approximation given in [10], and discussed in the context of time-series models (the "fully factored" approximation) in [15]. Approximate inference in dynamic Bayesian networks has been discussed in [15] and [8]. The additive factored value function was used in the context of factored MDPs (with no hidden state) in [16], and the linear Q-learning approximation was given in [14]. Approximate inference was combined with more sophisticated value function approximation in [17]. To our knowledge, this is the first attempt to explore the practicality of combining all of these techniques in order to solve a single problem. There are several possible extensions. As described above, the representation learned by the DSBN is not tuned to the task at hand. The reinforcement information could be used to guide the learning of the DSBN parameters[18, 13]. Also, if this were done, then the reinforcement signals would provide additional evidence as to what state the POMDP is in, and could be used to aid inference. More sophisticated function approximation could be used [17]. Finally, although this method appears to work in practice, there is no guarantee that the reinforcement learning will converge. We view this work as an encouraging first step, with much further study required. 5 Conclusions We have shown that a dynamic Bayesian network can be used to construct a compact representation useful for solving a decision problem with hidden state. The parameters of the DBN can be learned from experience. Learning occurs despite the use of simple value- 1056 B. Sallans function approximations and mean-field inference. Approximations of the value function result in good performance, but are clearly far from optimal. The fully-factored assumptions made for the belief state and the value function do not appear to impact performance, as compared to the non-factored model. The algorithm as presented runs entirely on-line by performing "forward" inference only. There is much room for future work, including improving the utility of the factored representation learned, and the quality of approximate inference and the value function approximation. Acknowledgments We thank Geoffrey Hinton, Zoubin Ghahramani and Andy Brown for helpful discussions, the anonymous referees for valuable comments and criticism, and particularly Peter Dayan for helpful discussions and comments on an early draft of this paper. This research was funded by NSERC Canada and the Gatsby Charitable Foundation. References [1] S.D. Whitehead and D.H. Ballard. Learning to perceive and act by trial and error. Machine Learning, 7, 1991. [2] EJ. Sondik. The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs. 
Operations Research, 26:282-304, 1973. [3] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988. [4] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computationallntelligence, 5, 1989. [5] Gregory F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. Anijiciallntelligence, 42:393-405, 1990. [6] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993. [7] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. An introduction to variational methods for graphical models. Machine Learning, 1999. in press. [8] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. UA1'98, 1998. [9] R. M. Neal. Connectionist learning of belief networks. Artijiciallntelligence, 56:71-113, 1992. [10] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artijiciallntelligence Research, 4:61-76, 1996. [11] Lawrence R. Rabiner and Biing-Hwang Juang. An introduction to hidden Markov models. IEEE ASSAP Magazine , 3:4-16, January 1986. [12] CJ.C.H. Watkins and P. Dayan. Q-Iearning. Machine Learning, 8:279-292, 1992. [13] A.K. McCallum. Reinforcement learning with selective perception and hidden state. Dept. of Computer Science, Universiy of Rochester, Rochester NY, 1995. Ph.D. thesis. [14] M.L. Littman, A.R. Cassandra, and L.P. Kaelbling. Learning policies for partially observable environments: Scaling up. In Proc. International Conference on Machine Learning, 1995. [15] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning, 1997. [16] D. Koller and R. Parr. Computing factored value functions for policies in structured MDPs. In Proc. lJCA/'99, 1999. [17] A. Rodriguez, R. Parr, and D. Koller. Reinforcement learning using approximate belief states. In S. A. Solla, T. K. Leen, and K.-R. Mtiller, editors, Advances in Neural Information Processing Systems, volume 12. The MIT Press, Cambridge, 2000. [18] L. Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In Tenth National Conference on AI, 1992.
Optimal Kernel Shapes for Local Linear Regression

Dirk Ormoneit and Trevor Hastie
Department of Statistics, Stanford University, Stanford, CA 94305-4065
ormoneit@stat.stanford.edu

Abstract

Local linear regression performs very well in many low-dimensional forecasting problems. In high-dimensional spaces, its performance typically decays due to the well-known "curse-of-dimensionality". A possible way to approach this problem is by varying the "shape" of the weighting kernel. In this work we suggest a new, data-driven method for estimating the optimal kernel shape. Experiments using an artificially generated data set and data from the UC Irvine repository show the benefits of kernel shaping.

1 Introduction

Local linear regression has attracted considerable attention in both the statistical and the machine learning literature as a flexible tool for nonparametric regression analysis [Cle79, FG96, AMS97]. Like most statistical smoothing approaches, local modeling suffers from the so-called "curse-of-dimensionality", the well-known fact that the proportion of the training data that lie in a fixed-radius neighborhood of a point decreases to zero at an exponential rate with increasing dimension of the input space. Due to this problem, the bandwidth of a weighting kernel must be chosen very big so as to contain a reasonable sample fraction. As a result, the estimates produced are typically highly biased.

One possible way to reduce the bias of local linear estimates is to vary the "shape" of the weighting kernel. In this work, we suggest a method for estimating the optimal kernel shape using the training data. For this purpose, we parameterize the kernel in terms of a suitable "shape matrix" L and minimize the mean squared forecasting error with respect to L. For such an approach to be meaningful, the "size" of the weighting kernel must be constrained during the minimization to avoid overfitting. We propose a new, entropy-based measure of the kernel size as a constraint. By analogy to the nearest neighbor approach to bandwidth selection [FG96], the suggested measure is adaptive with regard to the local data density. In addition, it leads to an efficient gradient descent algorithm for the computation of the optimal kernel shape. Experiments using an artificially generated data set and data from the UC Irvine repository show that kernel shaping can improve the performance of local linear estimates substantially.

The remainder of this work is organized as follows. In Section 2 we briefly review local linear models and introduce our notation. In Section 3 we formulate an objective function for kernel shaping, and in Section 4 we discuss entropic neighborhoods. Section 5 describes our experimental results and Section 6 presents conclusions.

2 Local Linear Models

Consider a nonlinear regression problem where a continuous response y ∈ R is to be predicted based on a d-dimensional predictor x ∈ R^d. Let D ≡ {(x_t, y_t), t = 1, ..., T} denote a set of training data. To estimate the conditional expectation f(x_0) ≡ E[y | x_0], we consider the local linear expansion f(x) ≈ α_0 + (x - x_0)'β_0 in the neighborhood of x_0. In detail, we minimize the weighted least squares criterion

    C(α, β; x_0) ≡ Σ_{t=1}^T (y_t - α - (x_t - x_0)'β)² k(x_t, x_0)    (1)

to determine estimates of the parameters α_0 and β_0. Here k(x_t, x_0) is a non-negative weighting kernel that assigns more weight to residuals in the neighborhood of x_0 than to residuals distant from x_0.
In multivariate problems, a standard way of defining k(x_t, x_0) is by applying a univariate, non-negative "mother kernel" φ(z) to the distance measure ||x_t - x_0||_Ω ≡ √((x_t - x_0)'Ω(x_t - x_0)):

    k(x_t, x_0) ≡ φ(||x_t - x_0||_Ω) / Σ_{s=1}^T φ(||x_s - x_0||_Ω).    (2)

Here Ω is a positive definite d x d matrix determining the relative importance assigned to different directions of the input space. For example, if φ(z) is a standard normal density, k(x_t, x_0) is a normalized multivariate Gaussian with mean x_0 and covariance matrix Ω^{-1}. Note that k(x_t, x_0) is normalized so as to satisfy Σ_{t=1}^T k(x_t, x_0) = 1. Even though this restriction is not relevant directly with regard to the estimation of α_0 and β_0, it will be needed in our discussion of entropic neighborhoods in Section 4. Using the shorthand notation γ̂(x_0, Ω) ≡ (α̂_0, β̂_0')', the solution of the minimization problem (1) may be written conveniently as

    γ̂(x_0, Ω) = (X'WX)^{-1} X'WY,    (3)

where X is the T x (d + 1) design matrix with rows (1, x_t' - x_0'), Y is the vector of response values, and W is a T x T diagonal matrix with entries W_{t,t} = k(x_t, x_0). The resulting local linear fit at x_0 using the inverse covariance matrix Ω is simply f̂(x_0; Ω) ≡ α̂_0. Obviously, f̂(x_0; Ω) depends on Ω through the definition of the weighting kernel (2). In the discussion below, our focus is on choices of Ω that lead to favorable estimates of the unknown function value f(x_0).

3 Kernel Shaping

The local linear estimates resulting from different choices of Ω vary considerably in practice. A common strategy is to choose Ω proportional to the inverse sample covariance matrix. The remaining problem of finding the optimal scaling factor is equivalent to the problem of bandwidth selection in univariate smoothing [FG96, BBB99]. For example, in practical applications the bandwidth is frequently chosen as a function of the distance between x_0 and its kth nearest neighbor [FG96]. In this paper, we take a different viewpoint and argue that optimizing the "shape" of the weighting kernel is at least as important as optimizing the bandwidth. More specifically, for a fixed "volume" of the weighting kernel, the bias of the estimate can be reduced drastically by shrinking the kernel in directions of large nonlinear variation of f(x), and stretching it in directions of small nonlinear variation. This idea is illustrated using the example shown in Figure 1. The plotted function is sigmoidal along an index vector κ, and constant in directions orthogonal to κ. Therefore, a "shaped" weighting kernel is shrunk in the direction κ and stretched orthogonally to κ, minimizing the exposure of the kernel to the nonlinear variation.

Figure 1: Left: Example of a single index model of the form y = g(x'κ) with κ = (1, 1) and g(z) = tanh(3z). Right: The contours of g(z) are straight lines orthogonal to κ.

To distinguish formally the metric and the bandwidth of the weighting kernel, we rewrite Ω as follows:

    Ω ≡ λ · (LL' + I).    (4)

Here λ corresponds to the inverse bandwidth, and L may be interpreted as a metric- or shape-matrix. Below we suggest an algorithm which is designed to minimize the bias with respect to the kernel metric. Clearly, for such an approach to be meaningful, we need to restrict the "volume" of the weighting kernel; otherwise, the bias of the estimate could be minimized trivially by choosing a zero bandwidth. For example, we might define λ contingent on L so as to satisfy |Ω| = c for some constant c.
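For reference, the following minimal Python sketch (our own code and naming, not from the paper) computes the local linear fit of equations (1)-(3) for a given Ω, assuming a Gaussian mother kernel φ(z) = exp(-z²/2):

import numpy as np

def local_linear_fit(X_train, y_train, x0, Omega):
    """Local linear estimate f_hat(x0; Omega), eqs. (1)-(3).

    A sketch assuming a Gaussian mother kernel phi(z) = exp(-z^2 / 2).
    """
    diff = X_train - x0                                  # rows: x_t - x_0
    z2 = np.einsum('ti,ij,tj->t', diff, Omega, diff)     # ||x_t - x_0||_Omega^2
    w = np.exp(-0.5 * z2)
    w = w / w.sum()                                      # normalized weights, eq. (2)
    X = np.hstack([np.ones((len(diff), 1)), diff])       # design matrix (1, x_t' - x_0')
    W = np.diag(w)
    gamma = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)   # eq. (3)
    return gamma[0]                                      # f_hat(x0) = alpha_0_hat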
A serious disadvantage of the determinant constraint |Ω| = c is that, by contrast to the nearest neighbor approach, |Ω| is independent of the design. As a more appropriate alternative, we define λ in terms of a measure of the number of neighboring observations. In detail, we fix the volume of k(x_t, x_0) in terms of the "entropy" of the weighting kernel. Then, we choose λ so as to satisfy the resulting entropy constraint. Given this definition of the bandwidth, we determine the metric of k(x_t, x_0) by minimizing the mean squared prediction error

    C(L; D) ≡ Σ_{t=1}^T (y_t - f̂(x_t; Ω))²    (5)

with respect to L. In this way, we obtain an approximation of the optimal kernel shape because the expectation of C(L; D) differs from the bias only by a variance term which is independent of L. Details of the entropic neighborhood criterion and of the numerical minimization procedure are described next.

4 Entropic Neighborhoods

We mentioned previously that, for a given shape matrix L, we choose the bandwidth parameter λ in (4) so as to fulfill a volume constraint on the weighting kernel. For this purpose, we interpret the kernel weights k(x_t, x_0) as probabilities. In particular, as k(x_t, x_0) > 0 and Σ_t k(x_t, x_0) = 1 by definition (2), we can formulate the local entropy of k(x_t, x_0):

    H(Ω) ≡ - Σ_{t=1}^T k(x_t, x_0) log k(x_t, x_0).    (6)

The entropy of a probability distribution is typically thought of as a measure of uncertainty. In the context of the weighting kernel k(x_t, x_0), H(Ω) can be used as a smooth measure of the "size" of the neighborhood that is used for averaging. To see this, note that in the extreme case where equal weights are placed on all observations in D, the entropy is maximized. At the other extreme, if the single nearest neighbor of x_0 is assigned the entire weight of one, the entropy attains its minimum value zero. Thus, fixing the entropy at a constant value c is similar to fixing the number k in the nearest neighbor approach. Besides justifying (6), the correspondence between k and c can also be used to derive a more intuitive volume parameter than the entropy level c. We specify c in terms of a hypothetical weighting kernel that places equal weight on the k nearest neighbors of x_0 and zero weight on the remaining observations. Note that the entropy of this hypothetical kernel is log k. Thus, it is natural to characterize the size of an entropic neighborhood in terms of k, and then to determine λ by numerically solving the nonlinear equation (for details, see [OH99])

    H(Ω) = log k.    (7)

More precisely, we report the number of neighbors in terms of the equivalent sample fraction p ≡ k/T to further intuition. This idea is illustrated in Figure 2 using a one- and a two-dimensional example. The equivalent sample fractions are p = 30% and p = 50%, respectively. Note that in both cases the weighting kernel is wider in regions with few observations, and narrower in regions with many observations. As a consequence, the number of observations within contours of equal weighting remains approximately constant across the input space.

Figure 2: Left: Univariate weighting kernel k(., x_0) evaluated at x_0 = 0.3 and x_0 = 0.7 based on a sample data set of 100 observations (indicated by the bars at the bottom). Right: Multivariate weighting kernel k(., x_0) based on a sample data set of 200 observations. The two ellipsoids correspond to 95% contours of a weighting kernel evaluated at (0.3, 0.3)' and (0.6, 0.6)'.

To summarize, we define the value of λ by fixing the equivalent sample fraction parameter p, and subsequently minimize the prediction error on the training set with respect to the shape matrix L. Note that we allow for the possibility that L may be of reduced rank l ≤ d as a means of controlling the number of free parameters. As a minimization procedure, we use a variant of gradient descent that accounts for the entropy constraint. In particular, our algorithm relies on the fact that (7) is differentiable with respect to L. Due to space limitations, the interested reader is referred to [OH99] for a formal derivation of the involved gradients and for a detailed description of the optimization procedure.
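To make the entropy constraint (7) concrete, here is a minimal sketch (our own naming, not the authors' code) that selects λ by bisection so that the kernel entropy at x_0 matches log k; it assumes the Gaussian mother kernel and exploits the fact that H(Ω) decreases monotonically in λ:

import numpy as np

def entropic_bandwidth(X_train, x0, L, k, lo=1e-6, hi=1e6, tol=1e-8):
    """Find lambda with H(Omega) = log k for Omega = lambda (LL' + I), eq. (7)."""
    M = L @ L.T + np.eye(X_train.shape[1])
    diff = X_train - x0
    q = np.einsum('ti,ij,tj->t', diff, M, diff)   # (x_t-x_0)'(LL'+I)(x_t-x_0)

    def entropy(lam):
        w = np.exp(-0.5 * lam * q)                # Gaussian mother kernel assumed
        w = w / w.sum()
        return -np.sum(w * np.log(w + 1e-300))

    target = np.log(k)
    while hi - lo > tol * (1 + hi):
        mid = np.sqrt(lo * hi)                    # bisect on a log scale
        if entropy(mid) > target:
            lo = mid                              # kernel too broad: increase lambda
        else:
            hi = mid
    return np.sqrt(lo * hi)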
5 Experiments

In this section we compare kernel shaping to standard local linear regression using a fixed spherical kernel in two examples. First, we evaluate the performance using a simple toy problem which allows us to estimate confidence intervals for the prediction accuracy using Monte Carlo simulation. Second, we investigate a data set from the machine learning data base at UC Irvine [BKM98].

5.1 Mexican Hat Function

In our first example, we employ Monte Carlo simulation to evaluate the performance of kernel shaping in a five-dimensional regression problem. For this purpose, 20 sets of 500 data points each are generated independently according to the model

    y = cos(√(x_1² + x_2²)) · exp(-(x_1² + x_2²)).    (8)

Here the predictor variables x_1, ..., x_5 are drawn according to a five-dimensional standard normal distribution. Note that, even though the regression is carried out in a five-dimensional predictor space, y is really only a function of the variables x_1 and x_2. In particular, as dimensions three through five do not contribute any information with regard to the value of y, kernel shaping should effectively discard these variables. Note also that there is no noise in this example.
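A short sketch of how one such training set can be generated under our reading of model (8); the function name and seed are ours:

import numpy as np

rng = np.random.default_rng(0)

def mexican_hat_data(n=500, d=5):
    """Draw one noise-free training set from model (8) as reconstructed above:
    y = cos(sqrt(x1^2 + x2^2)) * exp(-(x1^2 + x2^2)), x ~ N(0, I_5)."""
    X = rng.standard_normal((n, d))
    r2 = X[:, 0] ** 2 + X[:, 1] ** 2
    y = np.cos(np.sqrt(r2)) * np.exp(-r2)
    return X, y

train_sets = [mexican_hat_data() for _ in range(20)]   # 20 independent sets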
Figure 3: Left: "True" Mexican hat function. Middle: Local linear estimate using a spherical kernel (p = 2%). Right: Local linear estimate using kernel shaping (p = 2%). Both estimates are based on a training set consisting of 500 data points.

Figure 3 shows a plot of the true function, the spherical estimate, and the estimate using kernel shaping as functions of x_1 and x_2. The true function has the familiar "Mexican hat" shape, which is recovered by the estimates to different degrees. We evaluate the local linear estimates for values of the equivalent neighborhood fraction parameter p in the range from 1% to 15%. Note that, to warrant a fair comparison, we used the entropic neighborhood also to determine the bandwidth of the spherical estimate. For each value of p, 20 models are estimated using the 20 artificially generated training sets, and subsequently their performance is evaluated on the training set and on the test set of 31 x 31 grid points shown in Figure 3. The shape matrix L has maximal rank l = 5 in this experiment. Our results for local linear regression using the spherical kernel and kernel shaping are summarized in Table 1. Performance is measured in terms of the mean R²-value of the 20 models, and standard deviations are reported in parentheses.

Algorithm           p       Training R²      Test R²
spherical kernel    1%      0.961 (0.005)    0.215 (0.126)
spherical kernel    2%      0.871 (0.014)    0.293 (0.082)
spherical kernel    5%      0.680 (0.029)    0.265 (0.043)
spherical kernel    10%     0.507 (0.038)    0.213 (0.030)
spherical kernel    20%     0.341 (0.039)    0.164 (0.021)
kernel shaping      1%      0.995 (0.001)    0.882 (0.024)
kernel shaping      2%      0.984 (0.002)    0.909 (0.017)
kernel shaping      5%      0.923 (0.009)    0.836 (0.023)
kernel shaping      15%     0.628 (0.035)    0.517 (0.035)

Table 1: Performances in the toy problem. The results for kernel shaping were obtained using 200 gradient descent steps with step size α = 0.2.

The results in Table 1 indicate that the optimal performance on the test set is obtained using the parameter value p = 2% both for kernel shaping (R² = 0.909) and for the spherical kernel (R² = 0.293). Given the large difference between the R² values, we conclude that kernel shaping clearly outperforms the spherical kernel on this data set.

Figure 4: The eigenvectors of the estimate of Ω obtained on the first of 20 training sets. The graphs are ordered from left to right by increasing eigenvalues (decreasing extension of the kernel in that direction): 0.76, 0.76, 0.76, 33.24, 34.88.

Finally, Figure 4 shows the eigenvectors of the optimized Ω on the first of the 20 training sets. The eigenvectors are arranged according to the size of the corresponding eigenvalues. Note that the two rightmost eigenvectors, which correspond to the directions of minimum kernel extension, span exactly the x_1-x_2-space where the true function lives. The kernel is stretched in the remaining directions, effectively discarding nonlinear contributions from x_3, x_4, and x_5.

5.2 Abalone Database

The task in our second example is to predict the age of abalone based on several measurements. More specifically, the response variable is obtained by counting the number of rings in the shell in a time-consuming procedure. Preferably, the age of the abalone could be predicted from alternative measurements that may be obtained more easily. In the data set, eight candidate measurements, including sex, dimensions, and various weights, are reported as predictor variables along with the number of rings of the abalone. We normalize these variables to zero mean and unit variance prior to estimation. Overall, the data set consists of 4177 observations. To prevent possible artifacts resulting from the order of the data records, we randomly draw 2784 observations as a training set and use the remaining 1393 observations as a test set. Our results are summarized in Table 2 using various settings for the rank l, the equivalent fraction parameter p, and the gradient descent step size α. The optimal choice for p is 20% both for kernel shaping (R² = 0.582) and for the spherical kernel (R² = 0.572). Note that the performance improvement due to kernel shaping is negligible in this experiment.
Kernel              Settings                        Training R²    Test R²
spherical kernel    p = 0.05                        0.752          0.543
spherical kernel    p = 0.10                        0.686          0.564
spherical kernel    p = 0.20                        0.639          0.572
spherical kernel    p = 0.50                        0.595          0.565
spherical kernel    p = 0.70                        0.581          0.552
spherical kernel    p = 0.90                        0.568          0.533
kernel shaping      l = 5, p = 0.20, α = 0.5        0.705          0.575
kernel shaping      l = 5, p = 0.20, α = 0.2        0.698          0.577
kernel shaping      l = 2, p = 0.10, α = 0.2        0.729          0.574
kernel shaping      l = 2, p = 0.20, α = 0.2        0.663          0.582
kernel shaping      l = 2, p = 0.50, α = 0.2        0.603          0.571
kernel shaping      l = 2, p = 0.20, α = 0.5        0.669          0.582

Table 2: Results using the Abalone database after 200 gradient descent steps.

6 Conclusions

We introduced a data-driven method to improve the performance of local linear estimates in high dimensions by optimizing the shape of the weighting kernel. In our experiments we found that kernel shaping clearly outperformed local linear regression using a spherical kernel in a five-dimensional toy example, and led to a small performance improvement in a second, real-world example. To explain the results of the second experiment, we note that kernel shaping aims at exploiting global structure in the data. Thus, the absence of a larger performance improvement may suggest simply that no corresponding structure prevails in that data set. That is, even though optimal kernel shapes exist locally, they may vary across the predictor space so that they cannot be approximated by any particular global shape. Preliminary experiments using a localized variant of kernel shaping did not lead to significant performance improvements.

Acknowledgments

The work of Dirk Ormoneit was supported by a grant of the Deutsche Forschungsgemeinschaft (DFG) as part of its post-doctoral program. Trevor Hastie was partially supported by NSF grant DMS-9803645 and NIH grant R01-CA-72028-01. Carrie Grimes pointed us to misleading formulations in earlier drafts of this work.

References

[AMS97] C. G. Atkeson, A. W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, 11:11-73, 1997.
[BBB99] M. Birattari, G. Bontempi, and H. Bersini. Lazy learning meets the recursive least squares algorithm. In M. J. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. The MIT Press, 1999.
[BKM98] C. Blake, E. Keogh, and C. J. Merz. UCI Repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html.
[Cle79] W. S. Cleveland. Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74:829-836, 1979.
[FG96] J. Fan and I. Gijbels. Local Polynomial Modelling and Its Applications. Chapman & Hall, 1996.
[OH99] D. Ormoneit and T. Hastie. Optimal kernel shapes for local linear regression. Technical report 1999-11, Department of Statistics, Stanford University, 1999.
Manifold Stochastic Dynamics for Bayesian Learning

Mark Zlochin and Yoram Baram
Department of Computer Science, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel
zmark@cs.technion.ac.il, baram@cs.technion.ac.il

Abstract

We propose a new Markov Chain Monte Carlo algorithm which is a generalization of the stochastic dynamics method. The algorithm performs exploration of the state space using its intrinsic geometric structure, facilitating efficient sampling of complex distributions. Applied to Bayesian learning in neural networks, our algorithm was found to perform at least as well as the best state-of-the-art method while consuming considerably less time.

1 Introduction

In the Bayesian framework predictions are made by integrating the function of interest over the posterior parameter distribution, the latter being the normalized product of the prior distribution and the likelihood. Since in most problems the integrals are too complex to be calculated analytically, approximations are needed. Early works in Bayesian learning for nonlinear models [Buntine and Weigend 1991, MacKay 1992] used Gaussian approximations to the posterior parameter distribution. However, the Gaussian approximation may be poor, especially for complex models, because of the multi-modal character of the posterior distribution. Hybrid Monte Carlo (HMC) [Duane et al. 1987], introduced to the neural network community by [Neal 1996], deals more successfully with multi-modal distributions but is very time consuming. One of the main causes of HMC inefficiency is the anisotropic character of the posterior distribution - the density changes rapidly in some directions while remaining almost constant in others. We present a novel algorithm which overcomes the above problem by using the intrinsic geometrical structure of the model space.

2 Hybrid Monte Carlo

Markov Chain Monte Carlo (MCMC) [Gilks et al. 1996] approximates the value

    E[a] = ∫ a(θ) Q(θ) dθ

by the mean

    ā = (1/N) Σ_{t=1}^N a(θ^(t))

where θ^(1), ..., θ^(N) are successive states of an ergodic Markov chain with invariant distribution Q(θ). In addition to ergodicity and invariance of Q(θ), another quality we would like the Markov chain to have is rapid exploration of the state space. While the first two qualities are rather easily attained, achieving rapid exploration of the state space is often nontrivial.

A state-of-the-art MCMC method, capable of sampling from complex distributions, is Hybrid Monte Carlo [Duane et al. 1987]. The algorithm is expressed in terms of sampling from the canonical distribution for the state, q, of a "physical" system, defined in terms of the energy function E(q):

    P(q) ∝ exp(-E(q))    (1)

(Note that any probability density that is nowhere zero can be put in this form, by simply defining E(q) = -log P(q) - log Z, for any convenient Z.) To allow the use of dynamical methods, a "momentum" variable, p, is introduced, with the same dimensionality as q. The canonical distribution over the "phase space" is defined to be:

    P(q, p) ∝ exp(-H(q, p))    (2)

where H(q, p) = E(q) + K(p) is the "Hamiltonian", which represents the total energy. K(p) is the "kinetic energy" due to momentum, defined as

    K(p) = Σ_{i=1}^n p_i² / (2 m_i)    (3)

where p_i, i = 1, ..., n are the momentum components and m_i is the "mass" associated with the i-th component, so that different components can be given different weight.
Sampling from the canonical distribution can be done using the stochastic dynamics method [Andersen 1980], in which the task is split into two sub-tasks - sampling uniformly from values of q and p with a fixed total energy, H(q, p), and sampling states with different values of H. The first task is done by simulating the Hamiltonian dynamics of the system:

    dq_i/dτ = ∂H/∂p_i = p_i/m_i,        dp_i/dτ = -∂H/∂q_i

Different energy levels are obtained by occasional stochastic Gibbs sampling [Geman and Geman 1984] of the momentum. Since q and p are independent, p may be updated without reference to q by drawing a value with probability density proportional to exp(-K(p)), which, in the case of (3), can be easily done, since the p_i's have independent Gaussian distributions.

In practice, Hamiltonian dynamics cannot be simulated exactly, but can be approximated by some discretization using finite time steps. One common approximation is leapfrog discretization [Neal 1996]. In the hybrid Monte Carlo method, stochastic dynamics transitions are used to generate candidate states for the Metropolis algorithm [Metropolis et al. 1953]. This eliminates certain drawbacks of the stochastic dynamics, such as systematic errors due to the leapfrog discretization, since the Metropolis algorithm ensures that every transition keeps the canonical distribution invariant. However, an empirical comparison between the uncorrected stochastic dynamics and HMC in application to Bayesian learning in neural networks [Neal 1996] showed that with an appropriate discretization stepsize there is no notable difference between the two methods. A modification proposed in [Horowitz 1991], instead of Gibbs sampling of the momentum, is to replace p each time by p cos(θ) + ζ sin(θ), where θ is a small angle and ζ is distributed according to N(0, 1). While keeping the canonical distribution invariant, this scheme, called momentum persistence, improves the rate of exploration.
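A minimal sketch (our own code, not the authors') of one HMC transition with leapfrog discretization, unit masses, and the persistence update described above; E and grad_E are assumed user-supplied energy and gradient functions:

import numpy as np

rng = np.random.default_rng(0)

def hmc_step(q, p, E, grad_E, eps=0.01, n_leapfrog=30, cos_theta=0.95):
    """One hybrid Monte Carlo transition with momentum persistence.

    A sketch assuming unit masses, so K(p) = p.p / 2.
    """
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_E(q_new)            # half step in momentum
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                      # full step in position
        p_new -= eps * grad_E(q_new)              # full step in momentum
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_E(q_new)            # final half step

    dH = (E(q_new) + 0.5 * p_new @ p_new) - (E(q) + 0.5 * p @ p)
    if np.log(rng.random()) < -dH:                # Metropolis accept/reject
        q, p = q_new, p_new
    # momentum persistence [Horowitz 1991]: p <- p cos(theta) + zeta sin(theta)
    sin_theta = np.sqrt(1.0 - cos_theta ** 2)
    p = cos_theta * p + sin_theta * rng.standard_normal(p.shape)
    return q, p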
3 Riemannian Geometry

A Riemannian manifold [Amari 1997] is a set Θ ⊆ R^n equipped with a metric tensor G, a positive semidefinite matrix defining the inner product between infinitesimal increments as

    <dθ¹, dθ²> = (dθ¹)' G dθ².

Let us denote the entries of G by G_{i,j} and the entries of G^{-1} by G^{i,j}. This inner product naturally gives us the norm ||dθ||_G² = <dθ, dθ> = dθ' G dθ. The Jeffrey prior over Θ is defined by the density function

    π(θ) ∝ √|G(θ)|

where |.| denotes the determinant.

3.1 Hamiltonian dynamics over a manifold

For a Riemannian manifold the dynamics take a more general form than the one described in Section 2. If the metric tensor is G and all masses are set to one, then the Hamiltonian is given by

    H(q, p) = E(q) + (1/2) p' G^{-1} p    (4)

The dynamics are governed by the following set of differential equations [Chavel 1993]:

    d²q_i/dτ² = - Σ_{j,k} Γ^i_{j,k} (dq_j/dτ)(dq_k/dτ) - Σ_j G^{i,j} ∂E/∂q_j

where Γ^i_{j,k} are the Christoffel symbols given by

    Γ^i_{j,k} = (1/2) Σ_m G^{i,m} (∂G_{m,k}/∂q_j + ∂G_{m,j}/∂q_k - ∂G_{j,k}/∂q_m)

and the velocity q̇ = dq/dτ is related to p by q̇ = G^{-1} p.

3.2 Riemannian geometry of functions

In regression the log-likelihood is proportional to the empirical error, which is simply the Euclidean distance between the target point, t, and the candidate function evaluated over the sample. Therefore, the most natural distance measure between the models is the Euclidean seminorm:

    d(θ¹, θ²)² = ||f_{θ¹} - f_{θ²}||² = Σ_{i=1}^l (f(x_i, θ¹) - f(x_i, θ²))²    (5)

The resulting metric tensor is

    G = Σ_{i=1}^l ∇_θ f(x_i, θ) ∇_θ f(x_i, θ)' = J'J    (6)

where ∇_θ denotes the gradient and J = [∂f(x_i, θ)/∂θ_j] is the Jacobian matrix.

3.3 Bayesian geometry

A Bayesian approach would suggest the inclusion of prior assumptions about the parameters in the manifold geometry. If, for example, a priori θ ~ N(0, 1/α), then the log-posterior can be written as

    log p(θ | x) = const - β Σ_{i=1}^l (f(x_i, θ) - t_i)² - α Σ_{k=1}^n θ_k²

where β is the inverse noise variance. Therefore, the natural metric in the model space is

    d(θ¹, θ²)² = β Σ_{i=1}^l (f(x_i, θ¹) - f(x_i, θ²))² + α Σ_{k=1}^n (θ¹_k - θ²_k)²

with the metric tensor

    G_B = β G + α I = J̃'J̃    (7)

where J̃ is the "extended Jacobian":

    J̃_{i,j} = √β ∂f(x_i, θ)/∂θ_j for i = 1, ..., l;    J̃_{l+k,j} = √α δ_{k,j} for k = 1, ..., n    (8)

and δ_{k,j} is the Kronecker delta. Note that as α → 0, G_B → βG; hence as the prior becomes vaguer we approach a non-Bayesian paradigm. If, on the other hand, α → ∞ or βG → 0, the Bayesian geometry approaches the Euclidean geometry of the parameter space. These are the qualities that we would like the Bayesian geometry to have - if the prior is "strong" in comparison to the likelihood, the exact form of G should be of little importance. The definitions above can be applied to any log-concave prior distribution, with the inverse Hessian of the log-prior, (∇∇ log p(θ))^{-1}, replacing αI in (7).

The framework is not restricted to regression. For a general distribution class it is natural to use the Fisher information matrix, I, as a metric tensor [Amari 1997]. The Bayesian metric tensor then becomes

    G_B = I + (∇∇ log p(θ))^{-1}    (9)

4 Manifold Stochastic Dynamics

As mentioned before, the energy landscape in many regression problems is anisotropic. This degrades the performance of HMC in two aspects:

- The dynamics may not be optimal for efficient exploration of the posterior distribution, as suggested by the studies of Gaussian diffusions [Hwang et al. 1993].
- The resulting differential equations are stiff [Gear 1971], leading to large discretization errors, which in turn necessitate small time steps, implying that the computational burden is high.

Both of these problems disappear if, instead of the Euclidean Hamiltonian dynamics used in HMC, we simulate dynamics over the manifold equipped with the metric tensor G_B proposed in the previous section. In the context of regression, from the definition G_B = J̃'J̃ we obtain an alternative equation for d²q/dτ², in matrix form:

    d²q/dτ² = - G_B^{-1} (∇E + J̃' (dJ̃/dτ) (dq/dτ))    (10)

In the canonical distribution P(q, p) ∝ exp(-H(q, p)), the conditional distribution of p given q is a zero-mean Gaussian with covariance matrix G_B(q), and the marginal distribution over q is proportional to exp(-E(q)) π(q). This is equivalent to multiplying the prior by the Jeffrey prior. (In fact, since the actual prior over the weights is unknown, a truly Bayesian approach would be to use a non-informative prior such as π(q). In this paper we kept the modified prior, which is the product of π(q) and a zero-mean Gaussian.) The sampling from the canonical distribution is two-fold:

- Simulate the Hamiltonian dynamics (3.1) for one time-step using leapfrog discretization.
- Replace p using momentum persistence. Unlike the HMC case, the momentum perturbation ζ is distributed according to N(0, G_B).

The actual weights multiplying the matrices I and G in (7) may be chosen to be different from the specified α and β, so as to improve numerical stability.
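A small sketch (our own naming, not from the paper) of how the Bayesian metric tensor of eqs. (6)-(7) can be assembled from a model Jacobian, and of the momentum draw p ~ N(0, G_B) required by MSD:

import numpy as np

def bayesian_metric(jacobian, alpha, beta):
    """Metric tensor G_B = beta * J'J + alpha * I, eq. (7).

    jacobian: (l, n) array with entries d f(x_i, theta) / d theta_j,
    assumed supplied by the user (e.g. via backpropagation).
    """
    n = jacobian.shape[1]
    G = jacobian.T @ jacobian            # eq. (6): G = J'J
    return beta * G + alpha * np.eye(n)

def sample_momentum(G_B, rng):
    """Draw p ~ N(0, G_B) via a Cholesky factor."""
    L = np.linalg.cholesky(G_B)
    return L @ rng.standard_normal(G_B.shape[0])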
5 Empirical comparison

5.1 Robot arm problem

We compared the performance of the Manifold Stochastic Dynamics (MSD) algorithm with standard HMC. The comparison was carried out using MacKay's robot arm problem, which is a common benchmark for Bayesian methods in neural networks [MacKay 1992, Neal 1996]. The robot arm problem is concerned with the mapping

    y_1 = 2.0 cos(x_1) + 1.3 cos(x_1 + x_2) + e_1,    y_2 = 2.0 sin(x_1) + 1.3 sin(x_1 + x_2) + e_2

where e_1, e_2 are independent Gaussian noise variables of standard deviation 0.05. The dataset used by Neal and MacKay contained 200 examples in the training set and 400 in the test set.

Figure 1: Average (over the 10 runs) autocorrelation of input-to-hidden (left) and hidden-to-output (right) weights for HMC with 100 and 30 leapfrog steps per iteration and MSD with a single leapfrog step per iteration. The horizontal axis gives the lags, measured in number of iterations.

We used a neural network with two input units, one hidden layer containing 8 tanh units, and two linear output units. The hyperparameter β was set to its correct value of 400 and α was chosen to be 1.

5.2 Algorithms

We compared MSD with two versions of HMC - with 30 and with 100 leapfrog steps per iteration, henceforth referred to as HMC30 and HMC100. MSD was run with a single leapfrog step per iteration. In all three algorithms momentum was resampled using persistence with cos(θ) = 0.95. A single iteration of HMC100 required about 4.8 x 10^6 floating point operations (flops), HMC30 required 1.4 x 10^6 flops, and MSD required 0.5 x 10^6 flops. Hence the computational load of MSD was about one third of that of HMC30 and roughly 10 times lower than that of HMC100. The discretization stepsize for HMC was chosen so as to keep the rejection rate below 5%. An equivalent criterion of an average error in the Hamiltonian around 0.05 was used for MSD. All three sampling algorithms were run 10 times, each time for 3000 iterations, with the first 1000 samples discarded in order to allow the algorithms to reach the regions of high probability.
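A sketch of how the robot-arm data of Section 5.1 can be regenerated; note that the input sampling ranges below are our own assumption for illustration, since the text above specifies only the mapping and the noise level:

import numpy as np

rng = np.random.default_rng(0)

def robot_arm_data(n):
    """Draw n examples from the robot arm mapping of Section 5.1."""
    x1 = rng.uniform(-1.932, -0.453, n)   # assumed input ranges
    x2 = rng.uniform(0.534, 3.142, n)
    e = 0.05 * rng.standard_normal((n, 2))
    y1 = 2.0 * np.cos(x1) + 1.3 * np.cos(x1 + x2) + e[:, 0]
    y2 = 2.0 * np.sin(x1) + 1.3 * np.sin(x1 + x2) + e[:, 1]
    return np.column_stack([x1, x2]), np.column_stack([y1, y2])

X_train, Y_train = robot_arm_data(200)
X_test, Y_test = robot_arm_data(400)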
6 Conclusion We have described a new algorithm for efficient sampling from complex distributions such as those appearing in Bayesian learning with non-linear models. The empirical comparison shows that our algorithm achieves results superior to the best achieved by existing algorithms in considerably smaller computation time. References [Amari 1997] Amari S., "Natural Gradient Works Efficiently in Learning", Neural Computation, vol. 10, pp.251-276. [Andersen 1980] Andersen H.e., "Molecular dynamics simulations at constant pressure and/or temperature", Journal of Chemical Physics, vol. 3,pp. 589-603. [Buntine and Weigend 1991] "Bayesian back-propagation", Complex systems, vol. 5, pp. 603-643. [Chavel 1993] Chavel I., Riemannian Geometry: A Modem Introduction, University Press, Cambridge. [Duane et al. 1987] "Hybrid Monte Carlo", Physics Letters B,vol. 195,pp. 216-222. [Gear 1971] Gear e.W., Numerical initial value problems in ordinary differential equations, Prentice Hall. [Geman and Geman 1984] Geman S.,Geman D., "Stochastic relaxation,Gibbs distributions and the Bayesian restoration of images", IEEE Trans.,PAMI6,721-741. [Gilks et al. 1996] Gilks W.R., Richardson S. and Spiegelhalter DJ., Markov Chain Monte Carlo in Practice, Chapman&Hall. [Hwang et al. 1993] Hwang, C.,-R, Hwang-Ma S.,- Y. and Shen. S.,-J., "Accelerating Gaussian diffusions", Ann. Appl. Prob. , vol. 3, 897-913. [Horowitz 1991] Horowitz A.M., "A generalized guided Monte Carlo algorithm", Physics Letters B" vol. 268, pp. 247-252. [MacKay 1992] MacKay D.le., Bayesian Methods for Adaptive Models, Ph.D. thesis, California Institute of Technology. [Metropolis et al. 1953] Metropolis N., Rosenbluth A.W., Rosenbluth M.N., Teller A.H. and Teller E., "Equation of State Calculations by Fast Computing Machines", Journal of Chemical Physics,vol.21,pp. 1087-1092. [Neal 1996] Neal, R.M., Bayesian Learn ing for Neural Networks, Springer 1996. PART V IMPLEMENTATION
Channel Noise in Excitable Neuronal Membranes

Amit Manwani*, Peter N. Steinmetz and Christof Koch
Computation and Neural Systems Program, M-S 139-74
California Institute of Technology, Pasadena, CA 91125
{quixote, peter, koch}@klab.caltech.edu
(* http://www.klab.caltech.edu/~quixote)

Abstract

Stochastic fluctuations of voltage-gated ion channels generate current and voltage noise in neuronal membranes. This noise may be a critical determinant of the efficacy of information processing within neural systems. Using Monte-Carlo simulations, we carry out a systematic investigation of the relationship between channel kinetics and the resulting membrane voltage noise using a stochastic Markov version of the Mainen-Sejnowski model of dendritic excitability in cortical neurons. Our simulations show that kinetic parameters which lead to an increase in membrane excitability (increasing channel densities, decreasing temperature) also lead to an increase in the magnitude of the sub-threshold voltage noise. Noise also increases as the membrane is depolarized from rest towards threshold. This suggests that channel fluctuations may interfere with a neuron's ability to function as an integrator of its synaptic inputs and may limit the reliability and precision of neural information processing.

1 Introduction

Voltage-gated ion channels undergo random transitions between different conformational states due to thermal agitation. Generally, these states differ in their ionic permeabilities, and the stochastic transitions between them give rise to conductance fluctuations which are a source of membrane noise [1]. In excitable cells, voltage-gated channel noise can contribute to the generation of spontaneous action potentials [2, 3] and the variability of spike timing [4]. Channel fluctuations can also give rise to bursting and chaotic spiking dynamics in neurons [5, 6].

Our interest in studying membrane noise is based on the thesis that noise ultimately limits the ability of neurons to transmit and process information. To study this problem, we combine methods from information theory, membrane biophysics and compartmental neuronal modeling to evaluate the ability of different biophysical components of a neuron, such as the synapse, the dendritic tree, the soma and so on, to transmit information [7, 8, 9]. These neuronal components differ in the type, density, and kinetic properties of their constituent ion channels. Thus, measuring the impact of these differences on membrane noise represents a fundamental step in our overall program of evaluating information transmission within and between neurons.

Although information in the nervous system is mostly communicated in the form of action potentials, we first direct our attention to the study of sub-threshold voltage fluctuations for three reasons. Firstly, voltage fluctuations near threshold can cause variability in spike timing and thus directly influence the reliability and precision of neuronal activity. Secondly, many computations putatively performed in the dendritic tree (coincidence detection, multiplication, synaptic integration and so on) occur in the sub-threshold regime and thus are likely to be influenced by sub-threshold voltage noise. Lastly, several sensory neurons in vertebrates and invertebrates are non-spiking, and an analysis of voltage fluctuations can be used to study information processing in these systems as well.
Extensive investigations of channel noise were carried out prior to the advent of the patch-clamp technique in order to provide indirect evidence for the existence of single ion channels (see [1] for an excellent review). More recently, theoretical studies have focused on the effect of random channel fluctuations on the spike timing and reliability of individual neurons [4], as well as their effect on the dynamics of interconnected networks of neurons [5, 6]. In this paper, we determine the effect of varying the kinetic parameters, such as channel density and the rate of channel transitions, on the magnitude of sub-threshold voltage noise in iso-potential membrane patches containing stochastic voltage-gated ion channels, using Monte-Carlo simulations. The simulations are based on the Mainen-Sejnowski (MS) kinetic model of active channels in the dendrites of cortical pyramidal neurons [10]. By varying two model parameters (channel densities and temperature), we investigate the relationship between excitability and noise in neuronal membranes. By linearizing the channel kinetics, we derive analytical expressions which provide closed-form estimates of noise magnitudes; we contrast the results of the simulations with the linearized expressions to determine the parameter range over which they can be used.

2 Monte-Carlo Simulations

Consider an iso-potential membrane patch containing voltage-gated K+ and Na+ channels and leak channels:

    -C dV_m/dt = g_K (V_m - E_K) + g_Na (V_m - E_Na) + g_L (V_m - E_L) + I_inj    (1)

where C is the membrane capacitance, and g_K (g_Na, g_L) and E_K (E_Na, E_L) denote the K+ (Na+, leak) conductance and the K+ (Na+, leak) reversal potential respectively. Current injected into the patch is denoted by I_inj, with the convention that inward current is negative. The channels which give rise to the potassium and sodium conductances switch randomly between different conformational states with voltage-dependent transition rates. Thus, g_K and g_Na are voltage-dependent random processes and eq. 1 is a non-linear stochastic differential equation. Generally, ion channel transitions are assumed to be Markovian [11], and the stochastic dynamics of eq. 1 can be studied using Monte-Carlo simulations of finite-state Markov models of channel kinetics.

Earlier studies have carried out simulations of stochastic versions of the classical Hodgkin-Huxley kinetic model [12] to study the effect of conductance fluctuations on neuronal spiking [13, 2, 4]. Since we are interested in sub-threshold voltage noise, we consider a stochastic Markov version of a less excitable kinetic model used to describe the dendrites of cortical neurons [10]. We shall refer to it as the Mainen-Sejnowski (MS) kinetic scheme. The K+ conductance is modeled by a single activation sub-unit (denoted by n), whereas the Na+ conductance is comprised of three identical activation sub-units (denoted by m) and one inactivation sub-unit (denoted by h). Thus, the stochastic discrete-state Markov models of the K+ and Na+ channels have 2 and 8 states respectively (shown in Fig. 1). The single channel conductances and the densities of the ion channels (K+, Na+) are denoted by (γ_K, γ_Na) and (η_K, η_Na) respectively. Thus, g_K and g_Na are given by the products of the respective single channel conductances and the corresponding numbers of channels in the conducting states.
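For concreteness, a sketch (our own naming) of the deterministic part of eq. 1, as one would supply it to a numerical integrator; g_K and g_Na are the fluctuating conductances produced by the channel simulation described below:

def dVm_dt(Vm, gK, gNa, gL, I_inj, C, EK=-90.0, ENa=60.0, EL=-70.0):
    """Right-hand side of eq. 1 (voltages in mV; the units of the
    conductances, capacitance and current must be mutually consistent)."""
    I_ion = gK * (Vm - EK) + gNa * (Vm - ENa) + gL * (Vm - EL)
    return -(I_ion + I_inj) / C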
Figure 1: Kinetic scheme for the voltage-gated Mainen-Sejnowski K+ (A) and Na+ (B) channels. n0 and n1 represent the closed and open states of the K+ channel. m0h1-m2h1 represent the 3 closed states, m0h0-m3h0 the four inactivated states, and m3h1 the open state of the Na+ channel.

We performed Monte-Carlo simulations of the MS kinetic scheme using a fixed time step of Δt = 10 μsec. During each step, the number of sub-units undergoing transitions between states i and j was determined by drawing a pseudorandom binomial deviate (the bnldev subroutine [14] driven by the ran2 subroutine of the 2nd edition), with N equal to the number of sub-units in state i and p given by the conditional probability of the transition between i and j. After updating the number of channels in each state, eq. 1 was integrated using fourth order Runge-Kutta integration with adaptive step size control [14]. During each step, the channel conductances were held at the fixed values corresponding to the new numbers of open channels. (See [4] for details of this procedure.)

Due to random channel transitions, the membrane voltage fluctuates around the steady-state resting membrane voltage V_rest. By varying the magnitude of the constant injected current I_inj, the steady-state voltage can be varied over a broad range, which depends on the channel densities. The current required to maintain the membrane at a holding voltage V_hold can be determined from the steady-state I-V curve of the system, as shown in Fig. 2. Voltages for which the slope of the I-V curve is negative cannot be maintained as steady-states. By injecting an external current to offset the total membrane current, a fixed point in the negative slope region can be obtained, but since the fixed point is unstable, any perturbation, such as a stochastic ion channel opening or closing, causes the system to be driven to the closest stable fixed point. We measured sub-threshold voltage noise only for stable steady-state holding voltages.

Figure 2: Steady-state I-V curves for different multiples of the nominal MS Na+ channel density. Circles indicate locations of fixed-points in the absence of current injection.

A typical voltage trace from our simulations is shown in Fig. 3. To estimate the standard deviation of the voltage noise accurately, simulations were performed for a total of 492 seconds, divided into 60 blocks of 8.2 seconds each, for each steady-state value.

Figure 3: Monte-Carlo simulations of a 1000 μm² membrane patch with stochastic Na+ and deterministic K+ channels with MS kinetics. The bottom record shows the number of open Na+ channels as a function of time; the top trace shows the corresponding fluctuations of the membrane voltage. Summary of nominal MS parameters: C_m = 0.75 μF/cm², η_K = 1.5 channels/μm², η_Na = 2 channels/μm², E_K = -90 mV, E_Na = 60 mV, E_L = -70 mV, g_L = 0.25 pS/μm², γ_K = γ_Na = 20 pS.
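A minimal sketch of the binomial update for the two-state K+ channel population (our own code; the full simulation applies the same rule to every transition of the 8-state Na+ scheme):

import numpy as np

rng = np.random.default_rng(0)

def update_K_channels(n_closed, n_open, alpha_n, beta_n, dt=1e-5):
    """One fixed time step of the stochastic two-state K+ scheme.

    alpha_n, beta_n: opening/closing rates (1/s) at the present voltage.
    Each sub-unit flips with a first-order conditional probability over dt,
    drawn as a binomial deviate as described in the text.
    """
    p_open = 1.0 - np.exp(-alpha_n * dt)      # P(closed -> open) over dt
    p_close = 1.0 - np.exp(-beta_n * dt)      # P(open -> closed) over dt
    opening = rng.binomial(n_closed, p_open)
    closing = rng.binomial(n_open, p_close)
    n_closed += closing - opening
    n_open += opening - closing
    return n_closed, n_open   # patch conductance: g_K = gamma_K * n_open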
3 Linearized Analysis

The non-linear stochastic differential equation (eq. 1) cannot be solved analytically. However, one can linearize it by expressing the ionic conductances and the membrane voltage as small perturbations (δ) around their steady-state values:

    -C d(δV_m)/dt = (g_K° + g_Na° + g_L) δV_m + (V_m° - E_K) δg_K + (V_m° - E_Na) δg_Na    (2)

where g_K° and g_Na° denote the values of the ionic conductances at the steady-state voltage V_m°. G = g_K° + g_Na° + g_L is the total steady-state patch conductance. Since the leak channel conductance is constant, δg_L = 0. On the other hand, δg_K and δg_Na depend on δV and t. It is known that, to first order, the voltage- and time-dependence of active ion channels can be modeled as phenomenological impedances [15, 16]. Fig. 4 shows the linearized equivalent circuit of a membrane patch, given by the parallel combination of the capacitance C, the conductance G, and three series RL branches representing phenomenological models of K+ activation, Na+ activation and Na+ inactivation.

    I_n = g̃_K (E_K - V_m°) + g̃_Na (E_Na - V_m°)    (3)

represents the current noise due to fluctuations in the channel conductances (denoted by g̃_K and g̃_Na) at the membrane voltage V_m° (also referred to as the holding voltage V_hold). The details of the linearization are provided in [16].

Figure 4: Linearized circuit of the membrane patch containing stochastic voltage-gated ion channels. C denotes the membrane capacitance; G is the sum of the steady-state conductances of the channels and the leak. The r_i's and l_i's are the phenomenological resistances and inductances due to the voltage- and time-dependent ionic conductances.

The complex admittance (inverse of the impedance) of Fig. 4 is given by

    Y(f) = G + j2πfC + 1/(r_n + j2πf l_n) + 1/(r_m + j2πf l_m) + 1/(r_h + j2πf l_h)    (4)

The variance of the voltage fluctuations σ_V² can be computed as

    σ_V² = ∫_{-∞}^{∞} S_In(f) / |Y(f)|² df    (5)

where the power spectral density of I_n is given by the sum of the individual channel current noise spectra, S_In(f) = S_IK(f) + S_INa(f). For the MS scheme, the autocovariance of the K+ current noise for a patch of area A, clamped at a voltage V_m°, can be derived using [1, 11]:

    C_IK(t) = A η_K γ_K² (V_m° - E_K)² n_∞ (1 - n_∞) e^{-|t|/τ_n}    (6)

where n_∞ and τ_n respectively denote the steady-state open probability and the time constant of the K+ activation sub-unit at V_m°. The power spectral density of the K+ current noise, S_IK(f), can be obtained from the Fourier transform of C_IK(t):

    S_IK(f) = 2 A η_K γ_K² (V_m° - E_K)² n_∞ (1 - n_∞) τ_n / (1 + (2πf τ_n)²)    (7)

Thus, S_IK(f) is a single Lorentzian spectrum with cut-off frequency determined by τ_n. Similarly, the auto-covariance of the MS Na+ current noise can be written as [1]

    C_INa(t) = A η_Na γ_Na² (V_m° - E_Na)² m_∞³ h_∞ [m³(t) h(t) - m_∞³ h_∞]    (8)

where

    m(t) = m_∞ + (1 - m_∞) e^{-t/τ_m},    h(t) = h_∞ + (1 - h_∞) e^{-t/τ_h}    (9)

As before, m_∞ (h_∞) and τ_m (τ_h) are the open probability and the time constant of the Na+ activation (inactivation) sub-unit. The Na+ current noise spectrum S_INa(f) can be expressed as a sum of Lorentzian spectra with cut-off frequencies corresponding to the seven time constants τ_m, τ_h, 2τ_m, 3τ_m, τ_m + τ_h, 2τ_m + τ_h and 3τ_m + τ_h. The details of the derivations can be found in [8].

Figure 5: Standard deviation of the voltage noise σ_V in a 1000 μm² patch as a function of the holding voltage V_hold. Circles denote results of the Monte-Carlo simulations for the nominal MS parameter values (see Fig. 3). The solid curve corresponds to the theoretical expression obtained by linearizing the channel kinetics. (A) Effect of increasing the sodium channel density by a factor (compared to the nominal value) of 2 (pluses), 3 (asterisks) and 4 (squares) on the magnitude of voltage noise. (B) Effect of increasing both the sodium and potassium channel densities by a factor of two (pluses).
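To illustrate eqs. (5)-(7), a sketch (our own code) that evaluates the voltage noise standard deviation from the K+ Lorentzian alone; as a simplifying assumption, the admittance is reduced to its passive part G + j2πfC, and the RL branches of eq. (4) and the Na+ spectrum would be added in the same way:

import numpy as np

def sigma_V_K(A, eta_K, gamma_K, V0, EK, n_inf, tau_n, G, C,
              f_max=5e3, n_f=200001):
    """Voltage noise s.d. from the K+ current noise, eqs. (5)-(7).

    Sketch: only Y(f) = G + j 2 pi f C is used here; the phenomenological
    RL branches of eq. (4) are omitted for brevity.
    """
    f = np.linspace(-f_max, f_max, n_f)
    S_IK = (2 * A * eta_K * gamma_K**2 * (V0 - EK)**2
            * n_inf * (1 - n_inf) * tau_n / (1 + (2 * np.pi * f * tau_n)**2))
    Y = G + 1j * 2 * np.pi * f * C
    var_V = np.trapz(S_IK / np.abs(Y)**2, f)   # eq. (5), numerically
    return np.sqrt(var_V)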
4 Effect of Varying Channel Densities

Fig. 5 shows the voltage noise for a 1000 μm² patch as a function of the holding voltage for different values of the channel densities. Noise increases as the membrane is depolarized from rest towards -50 mV, and the rate of increase is higher for higher Na+ densities. The range of V_hold for sub-threshold behavior extends up to -20 mV for nominal densities, but does not exceed -60 mV for higher Na+ densities. For moderate levels of depolarization, an increase in the magnitude of the ionic current noise with voltage is the dominant factor which leads to an increase in voltage noise; for higher voltages the phenomenological impedances are large and shunt away the current noise. Increasing the Na+ density increases voltage noise, whereas increasing the K+ density causes a decrease in noise magnitude (compare Fig. 5A and 5B). The linearized closed-form expressions provide accurate estimates of the noise magnitudes when the noise is small (of the order of 3 mV).

5 Effect of Varying Temperature

Fig. 6 shows that voltage noise decreases with temperature. To model the effect of temperature, transition rates were scaled by a factor Q10^(ΔT/10) (Q10 = 2.3 for K+, Q10 = 3 for Na+). Temperature increases the rates of channel transitions and thus the bandwidth of the ionic current noise fluctuations. The magnitude of the current noise, on the other hand, is independent of temperature. Since the membrane acts as a low-pass RC filter (at moderately depolarized voltages, the phenomenological inductances are small), increasing the bandwidth of the noise results in lower voltage noise, as the high-frequency components are filtered out.

Figure 6: σ_V as a function of temperature for a 1000 μm² patch with MS kinetics (V_hold = -60 mV). Circles denote Monte-Carlo simulations; the solid curve denotes the linearized approximation.
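This temperature scaling amounts to multiplying every transition rate (equivalently, dividing every time constant) by Q10^(ΔT/10). A small helper makes the convention explicit; the reference temperature is an assumption here, since the text does not state it.

def q10_scale(rate, temp, q10, temp_ref=23.0):
    """Scale a transition rate from temp_ref to temp by Q10**(dT/10);
    time constants scale by the inverse factor. The reference temperature
    of 23 C is an assumption, not a value stated in the text."""
    return rate * q10 ** ((temp - temp_ref) / 10.0)

alpha_k_35 = q10_scale(0.058, 35.0, 2.3)   # K+ rate, Q10 = 2.3 (from the text)
alpha_na_35 = q10_scale(0.20, 35.0, 3.0)   # Na+ rate, Q10 = 3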
6 Conclusions

We studied sub-threshold voltage noise due to stochastic ion channel fluctuations in an isopotential membrane patch with Mainen-Sejnowski kinetics. For the MS kinetic scheme, noise increases as the membrane is depolarized from rest, up to the point where the phenomenological impedances due to the voltage- and time-dependence of the ion channels become large and shunt away the noise. Increasing the Na+ channel density increases both the magnitude of the noise and its rate of increase with membrane voltage. On the other hand, increasing the rates of channel transitions by increasing temperature leads to a decrease in noise. It has previously been shown that neural excitability increases with Na+ channel density [17] and decreases with temperature [15]. Thus, our findings suggest that an increase in membrane excitability is inevitably accompanied by an increase in the magnitude of sub-threshold voltage noise fluctuations. The magnitude and the rapid increase of voltage noise with depolarization suggest that channel fluctuations can contribute significantly to the variability in spike timing [4], and that the stochastic nature of ion channels may have a significant impact on information processing within individual neurons. It also potentially argues against the conventional role of a neuron as an integrator of synaptic inputs [18], as the slow depolarization associated with integration of small synaptic inputs would be accompanied by noise, making the membrane voltage a very unreliable indicator of the integrated inputs. We are actively investigating this issue more carefully.

When the magnitudes of the noise and the phenomenological impedances are small, the non-linear kinetic schemes are well-modeled by their linearized approximations. We have found this to be valid for other kinetic schemes as well [19]. These analytical approximations can be used to study noise in more sophisticated neuronal models incorporating realistic dendritic geometries, where Monte-Carlo simulations may be too computationally intensive to use.

Acknowledgments

This work was funded by NSF, NIMH and the Sloan Center for Theoretical Neuroscience. We thank our collaborators Michael London, Idan Segev and Yosef Yarom for their invaluable suggestions.

References

[1] DeFelice L.J. (1981). Introduction to Membrane Noise. Plenum Press: New York, New York.
[2] Strassberg A.F. & DeFelice L.J. (1993). Limitations of the Hodgkin-Huxley formalism: effect of single channel kinetics on transmembrane voltage dynamics. Neural Computation, 5:843-855.
[3] Chow C. & White J. (1996). Spontaneous action potentials due to channel fluctuations. Biophys. J., 71:3013-3021.
[4] Schneidman E., Freedman B. & Segev I. (1998). Ion-channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Computation, 10:1679-1703.
[5] DeFelice L.J. & Isaac A. (1992). Chaotic states in a random world. J. Stat. Phys., 70:339-352.
[6] White J.A., Budde T. & Kay A.R. (1995). A bifurcation analysis of neuronal subthreshold oscillations. Biophys. J., 69:1203-1217.
[7] Manwani A. & Koch C. (1998). Synaptic transmission: An information-theoretic perspective. In: Jordan M., Kearns M.S. & Solla S.A., eds., Advances in Neural Information Processing Systems 10, pp. 201-207. MIT Press: Cambridge, Massachusetts.
[8] Manwani A. & Koch C. (1999). Detecting and estimating signals in noisy cable structures: I. Neuronal noise sources. Neural Computation. In press.
[9] Manwani A. & Koch C. (1999). Detecting and estimating signals in noisy cable structures: II. Information-theoretic analysis. Neural Computation. In press.
[10] Mainen Z.F. & Sejnowski T.J. (1995). Reliability of spike timing in neocortical neurons. Science, 268:1503-1506.
[11] Johnston D. & Wu S.M. (1995). Foundations of Cellular Neurophysiology. MIT Press: Cambridge, Massachusetts.
[12] Hodgkin A.L. & Huxley A.F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. (London), 117:500-544.
[13] Skaugen E. & Walløe L. (1979). Firing behavior in a stochastic nerve membrane model based upon the Hodgkin-Huxley equations. Acta Physiol. Scand., 107:343-363.
[14] Press W.H., Teukolsky S.A., Vetterling W.T. & Flannery B.P. (1992). Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, second edn.
[15] Mauro A., Conti F., Dodge F. & Schor R. (1970). Subthreshold behavior and phenomenological impedance of the squid giant axon. J. Gen. Physiol., 55:497-523.
[16] Koch C. (1984). Cable theory in neurons with active, linearized membranes. Biol. Cybern., 50:15-33.
[17] Sabah N.H. & Leibovic K.N. (1972). The effect of membrane parameters on the properties of the nerve impulse. Biophys. J., 12:1132-1144.
[18] Shadlen M.N. & Newsome W.T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci., 18:3870-3896.
[19] Steinmetz P.N., Manwani A., London M. & Koch C. (1999).
Sub-threshold voltage noise due to channel fluctuations in active neuronal membranes. In preparation.
Effects of Spatial and Temporal Contiguity on the Acquisition of Spatial Information

Thea B. Ghiselli-Crippa and Paul W. Munro
Department of Information Science and Telecommunications
University of Pittsburgh
Pittsburgh, PA 15260
tbgst@sis.pitt.edu, munro@sis.pitt.edu

Abstract

Spatial information comes in two forms: direct spatial information (for example, retinal position) and indirect temporal contiguity information, since objects encountered sequentially are in general spatially close. The acquisition of spatial information by a neural network is investigated here. Given a spatial layout of several objects, networks are trained on a prediction task. Networks using temporal sequences with no direct spatial information are found to develop internal representations that show distances correlated with distances in the external layout. The influence of spatial information is analyzed by providing direct spatial information to the system during training that is either consistent with the layout or inconsistent with it. This approach allows examination of the relative contributions of spatial and temporal contiguity.

1 Introduction

Spatial information is acquired by a process of exploration that is fundamentally temporal, whether it be on a small scale, such as scanning a picture, or on a larger one, such as physically navigating through a building, a neighborhood, or a city. Continuous scanning of an environment causes locations that are spatially close to have a tendency to occur in temporal proximity to one another. Thus, a temporal associative mechanism (such as a Hebb rule) can be used in conjunction with continuous exploration to capture the spatial structure of the environment [1]. However, the actual process of building a cognitive map need not rely solely on temporal associations, since some spatial information is encoded in the sensory array (position on the retina and proprioceptive feedback). Laboratory studies show different types of interaction between the relative contributions of temporal and spatial contiguities to the formation of an internal representation of space. While Clayton and Habibi's [2] series of recognition priming experiments indicates that priming is controlled only by temporal associations, in the work of McNamara et al. [3] priming in recognition is observed only when space and time are both contiguous. In addition, Curiel and Radvansky's [4] work shows that the effects of spatial and temporal contiguity depend on whether location or identity information is emphasized during learning. Moreover, other experiments ([3]) also show how the effects clearly depend on the task and can be quite different if an explicitly spatial task is used (e.g., additive effects in location judgments).

Figure 1: Network architectures: temporal-only network (left); spatio-temporal network with spatial units part of the input representation (center); spatio-temporal network with spatial units part of the output representation (right).

2 Network architectures

The goal of the work presented in this paper is to study the structure of the internal representations that emerge from the integration of temporal and spatial associations. An encoder-like network architecture is used (see Figure 1), with a set of N input units and a set of N output units representing N nodes on a 2-dimensional graph. A set of H units is used for the hidden layer.
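For concreteness, here is a minimal numpy sketch of the forward pass for the variant with spatial units on the output side (Figure 1, right). The layer sizes are taken from the text, while the weight initialization and the logistic/linear output split are implementation assumptions.

import numpy as np

N, H = 28, 20                         # nodes and hidden units (values from the text)
rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 0.1, (H, N))     # input labels -> hidden
W2 = rng.normal(0.0, 0.1, (N + 2, H)) # hidden -> N label units + 2 coordinate units

def forward(x):
    """Forward pass for a one-hot input label vector x of length N."""
    h = np.tanh(W1 @ x)                    # tanh hidden units, as in the text
    z = W2 @ h
    labels = 1.0 / (1.0 + np.exp(-z[:N]))  # logistic label outputs (assumed)
    coords = z[N:]                         # linear (x, y) outputs (assumed)
    return h, labels, coords

h, labels, coords = forward(np.eye(N)[0])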
To include space in the learning process, additional spatial units are included in the network architecture. These units provide a representation of the spatial information directly available during the learning/scanning process. In the simulations described in this paper, two units are used and are chosen to represent the (x, y) coordinates of the nodes in the graph. The spatial units can be included as part of the input representation or as part of the output representation (see Figure 1, center and right panels): both choices are used in the experiments, to investigate whether the spatial information could better benefit training as an input or as an output [5]. In the second case, the relative contribution of the spatial information can be directly manipulated by introducing weighting factors in the cost function being minimized. A two-term cost function of the form

E = A E_labels + B E_coords   (1)

is used, with a cross-entropy term E_labels for the N label units and a squared-error term E_coords for the 2 coordinate units; in both terms, r_i indicates the actual output of unit i and t_i its desired output. The relative influence of the spatial information is controlled by the coefficients A and B.

3 Learning tasks

The left panel of Figure 2 shows an example of the type of layout used; the effective layout used in the study consists of N = 28 nodes. For each node, a set of neighboring nodes is defined, chosen on the basis of how an observer might scan the layout to learn the node labels and their (spatial) relationships; in Figure 2, the neighborhood relationships are represented by lines connecting neighboring nodes. From any node in the layout, the only allowed transitions are those to a neighbor, thus defining the set of node pairs used to train the network (66 pairs out of C(28, 2) = 378 possible pairs). In addition, the probability of occurrence of a particular transition is computed as a function of the distance to the corresponding neighbor. It is then possible to generate a sequence of visits to the network nodes, aimed at replicating the scanning process of a human observer studying the layout.

Figure 2: Example of a layout (left) and its permuted version (right). Links represent allowed transitions. A larger layout of 28 units was used in the simulations.

The basic learning task is similar to the grammar learning task of Servan-Schreiber et al. [6] and to the neighborhood mapping task described in [1] and is used to associate each of the N nodes on the graph and its (x, y) coordinates with the probability distribution of the transitions to its neighboring nodes. The mapping can be learned directly, by associating each node with the probability distribution of the transitions to all its neighbors: in this case, batch learning is used as the method of choice for learning the mapping. On the other hand, the mapping can be learned indirectly, by associating each node with itself and one of its neighbors, with online learning being the method of choice in this case; the neighbor chosen at each iteration is defined by the sequence of visits generated on the basis of the transition probabilities. Batch learning was chosen because it generally converges more smoothly and more quickly than online learning and gives qualitatively similar results.
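The weighted cost translates directly into code. The sketch below is one plausible instantiation, not the authors' implementation: a per-unit binary cross-entropy is assumed for the label term, and A and B default to the values used for the S-T networks later in the text.

import numpy as np

def two_term_cost(r_lab, t_lab, r_xy, t_xy, A=0.625, B=6.25):
    """Weighted two-term cost of eq. (1): cross-entropy over the N label
    units plus squared error over the 2 coordinate units. The per-unit
    binary cross-entropy form is an assumption."""
    eps = 1e-12  # numerical guard (implementation detail, not from the paper)
    ce = -np.sum(t_lab * np.log(r_lab + eps) +
                 (1.0 - t_lab) * np.log(1.0 - r_lab + eps))
    se = np.sum((t_xy - r_xy) ** 2)
    return A * ce + B * se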
While the task and network architecture described in [1] allowed only for temporal association learning, in this study both temporal and spatial associations are learned simultaneously, thanks to the presence of the spatial units. However, the temporal-only (T-only) case, which has no spatial units, is included in the simulations performed for this study, to provide a benchmark for the evaluation of the results obtained with the spatio-temporal (S-T) networks.

The task described above allows the network to learn neighborhood relationships for which spatial and temporal associations provide consistent information, that is, nodes experienced contiguously in time (as defined by the sequence) are also contiguous in space (being spatial neighbors). To tease apart the relative contributions of space and time, the task is kept the same, but the data employed for training the network is modified: the same layout is used to generate the temporal sequence, but the x, y coordinates of the nodes are randomly permuted (see right panel of Figure 2). If the permuted layout is then scanned following the same sequence of node visits used in the original version, the net effect is that the temporal associations remain the same, but the spatial associations change so that temporally neighboring nodes can now be spatially close or distant: the spatial associations are no longer consistent with the temporal associations. As Figure 4 illustrates, the training pairs (filled circles) all correspond to short distances in the original layout, but can have a distance anywhere in the allowable range in the permuted layout. Since the temporal and spatial distances were consistent in the original layout, the original spatial distance can be used as an indicator of temporal distance, and Figure 4 can be interpreted as a plot of temporal distance vs. spatial distance for the permuted layout.

The simulations described in the following include three experimental conditions: temporal only (no direct spatial information available); space and time consistent (the spatial coordinates and the temporal sequence are from the same layout); space and time inconsistent (the spatial coordinates and the temporal sequence are from different layouts).

Hidden unit representations are compared using Euclidean distance (cosine and inner product measures give consistent results); the internal representation distances are also used to compute their correlation with Euclidean distances between nodes in the layout (original and permuted). The correlations increase with the number of hidden units for values of H between 5 and 10 and then gradually taper off for values greater than 10. The results presented in the remainder of the paper all pertain to networks trained with H = 20 and with hidden units using a tanh transfer function; all the results pertaining to S-T networks refer to networks with 2 spatial output units and cost function coefficients A = 0.625 and B = 6.25.
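The evaluation metric used throughout, the correlation between pairwise distances in hidden-unit space and pairwise distances in the layout, can be computed as follows (a sketch; the activation and coordinate arrays are stand-ins):

import numpy as np

def pairwise_dists(points):
    """Euclidean distances between all rows of points, upper triangle only."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(points), k=1)]

def layout_correlation(hidden, layout):
    """Pearson correlation between pairwise distances in hidden-unit space
    (hidden: N x H activations) and in the layout (layout: N x 2 coords)."""
    return np.corrcoef(pairwise_dists(hidden), pairwise_dists(layout))[0, 1]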
4 Results

Figure 3 provides a combined view of the results from all three experiments. The left panel illustrates the evolution of the correlation between internal representation distances and layout (original and permuted) distances. The right panel shows the distributions of the correlations at the end of training (1000 epochs).

The first general result is that, when spatial information is available and consistent with the temporal information (original layout), the correlation between hidden unit distances and layout distances is consistently better than the correlation obtained in the case of temporal associations alone. The second general result is that, when spatial information is available but not consistent with the temporal information (permuted layout), the correlation between hidden unit distances and original layout distances (which represent temporal distances) is similar to that obtained in the case of temporal associations alone, except for the initial transient. When the correlation is computed with respect to the permuted layout distances, its value peaks early during training and then decreases rapidly, to reach an asymptotic value well below the other three cases. This behavior is illustrated in the box plots in the right panel of Figure 3, which report the distribution of correlation values at the end of training.

Figure 3: Evolution of correlation during training (0-1000 epochs) (left). Distributions of correlations at the end of training (1000 epochs) (right).

4.1 Temporal-only vs. spatio-temporal

As a first step in this study, the effects of adding spatial information to the basic temporal associations used to train the network can be examined. Since the learning task is the same for both the T-only and the S-T networks except for the absence or presence of spatial information during training, the differences observed can be attributed to the additional spatial information available to the S-T networks. The higher correlation between internal representation distances and original layout distances obtained when spatial information is available (see Figure 3) is apparent also when the evolution of the internal representations is examined.

Figure 4: Distances in the original layout (x) vs. distances in the permuted layout (y). The 66 training pairs are identified by filled circles.

Figure 5: Similarities (Euclidean distances) between internal representations developed by a S-T network (after 300 epochs). Figure 4 projects the data points onto the x, y plane.

As Figure 6 illustrates, the presence of spatial information results in better generalization for the pattern pairs outside the training set. While the distances between training pairs are mapped to similar distances in hidden unit space for both the T-only and the S-T networks, the T-only network tends to cluster the non-training pairs into a narrow band of distances in hidden unit space. In the case of the S-T network instead, the hidden unit distances between non-training pairs are spread out over a wider range and tend to reflect the original layout distances.

4.2 Permuted layout

As described above, with the permuted layout it is possible to decouple the spatial and temporal contributions and therefore study the effects of each.
A comprehensive view of the results at a particular point during training (300 epochs) is presented in Figure 5, where the x, y plane represents temporal distance vs. spatial distance (see also Figure 4) and the z axis represents the similarity between hidden unit representations. The figure also includes a quadratic regression surface fitted to the data points. The coefficients in the equation of the surface provide a quantitative measure of the relative contributions of spatial (d_S) and temporal (d_T) distances to the similarity between hidden unit representations (d_HU):

d_HU = 0.6 + 3.4 d_T + 0.3 d_S - 2.1 (d_T)² + 0.4 (d_S)² - 0.4 d_T d_S.   (2)

In general, after the transient observed in early training (see Figure 3), the largest and most significant coefficients are found for d_T and (d_T)², indicating a stronger dependence of d_HU on temporal distance than on spatial distance.

The results illustrated in Figure 5 represent the situation at a particular point during training (300 epochs). Similar plots can be generated for different points during training, to study the evolution of the internal representations. A different view of the evolution process is provided by Figure 7, in which the data points are projected onto the x, z plane (top panel) and the y, z plane (bottom panel) at four different times during training.

Figure 6: Internal representation distances vs. original layout distances: S-T network (top) vs. T-only network (bottom). The training pairs are identified by filled circles. The presence of spatial information results in better generalization for the pairs outside the training set.

In the top panel, the internal representation distances are plotted as a function of temporal distance (i.e., the spatial distance from the original layout), while in the bottom panel they are plotted as a function of spatial distance (from the permuted layout). The higher asymptotic correlation between internal representation distances and temporal distances, as opposed to spatial distances (see Figure 3), is apparent also from the examination of the evolutionary plots, which show an asymptotic behavior with respect to temporal distances (see Figure 7, top panel) very similar to the T-only case (see Figure 6, bottom panel).
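A surface of the form of eq. (2) is an ordinary least-squares fit in the five regressors d_T, d_S, d_T², d_S², and d_T d_S. A sketch of the fit:

import numpy as np

def fit_quadratic_surface(d_t, d_s, d_hu):
    """Ordinary least-squares fit of the eq. (2) surface
    d_hu ~ b0 + b1*d_t + b2*d_s + b3*d_t**2 + b4*d_s**2 + b5*d_t*d_s."""
    X = np.column_stack([np.ones_like(d_t), d_t, d_s,
                         d_t ** 2, d_s ** 2, d_t * d_s])
    coef, *_ = np.linalg.lstsq(X, d_hu, rcond=None)
    return coef  # (b0, ..., b5)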
5 Discussion

The first general conclusion that can be drawn from the examination of the results described in the previous section is that, when the spatial information is available and consistent with the temporal information (original layout), the similarity structure of the hidden unit representations is closer to the structure of the original layout than that obtained by using temporal associations alone. The second general conclusion is that, when the spatial information is available but not consistent with the temporal information (permuted layout), the similarity structure of the hidden unit representations seems to correspond to temporal more than spatial proximity. Figures 5 and 7 both indicate that temporal associations take precedence over spatial associations. This result is in agreement with the results described in [1], showing how temporal associations (plus some high-level constraints) significantly contribute to the internal representation of global spatial information. However, spatial information certainly is very beneficial to the (temporal) acquisition of a layout, as proven by the results obtained with the S-T network vs. the T-only network.

In terms of the model presented in this paper, the results illustrated in Figures 5 and 7 can be compared with the experimental data reported for recognition priming ([2], [3], [4]), with distance between internal representations corresponding to reaction time. The results of our model indicate that distances in both the spatially far and spatially close condition appear to be consistently shorter for the training pairs (temporally close) than for the non-training pairs (temporally distant), highlighting a strong temporal effect consistent with the data reported in [2] and [4] (for spatially far pairs) and in [3] (only for the spatially close case).

Figure 7: Internal representation distances vs. temporal distances (top) and vs. spatial distances (bottom) for a S-T network (permuted layout). The training pairs are identified by filled circles. The asymptotic behavior with respect to temporal distances (top panel) is similar to the T-only condition. The bottom panel indicates a weak dependence on spatial distances.

For the training pairs (temporally close), slightly shorter distances are obtained for spatially close pairs vs. spatially far pairs; this result does not provide support for the experimental data reported in either [3] (strong spatial effect) or [2] (no spatial effect). For the non-training pairs (temporally distant), long distances are found throughout, with no strong dependence on spatial distance; this effect is consistent with all the reported experimental data. Further simulations and statistical analyses are necessary for a more conclusive comparison with the experimental data.

References

[1] Ghiselli-Crippa, T.B. & Munro, P.W. (1994). Emergence of global structure from local associations. In J.D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in Neural Information Processing Systems 6, pp. 1101-1108. San Francisco, CA: Morgan Kaufmann.
[2] Clayton, K.N. & Habibi, A. (1991). The contribution of temporal contiguity to the spatial priming effect. Journal of Experimental Psychology: Learning, Memory, and Cognition 17:263-271.
[3] McNamara, T.P., Halpin, J.A. & Hardy, J.K. (1992). Spatial and temporal contributions to the structure of spatial memory. Journal of Experimental Psychology: Learning, Memory, and Cognition 18:555-564.
[4] Curiel, J.M. & Radvansky, G.A. (1998). Mental organization of maps. Journal of Experimental Psychology: Learning, Memory, and Cognition 24:202-214.
[5] Caruana, R. & de Sa, V.R. (1997). Promoting poor features to supervisors: Some inputs work better as outputs. In M.C. Mozer, M.I. Jordan, & T. Petsche (Eds.), Advances in Neural Information Processing Systems 9, pp. 389-395.
Cambridge, MA: MIT Press.
[6] Servan-Schreiber, D., Cleeremans, A. & McClelland, J.L. (1989). Learning sequential structure in simple recurrent networks. In D.S. Touretzky (Ed.), Advances in Neural Information Processing Systems 1, pp. 643-652. San Mateo, CA: Morgan Kaufmann.
Cambridge, MA: MIT Press.
[6] Servan-Schreiber, D., Cleeremans, A. & McClelland, J.L. (1989). Learning sequential structure in simple recurrent networks. In D.S. Touretzky (Ed.), Advances in Neural Information Processing Systems 1, pp. 643-652. San Mateo, CA: Morgan Kaufmann.

Neural Representation of Multi-Dimensional Stimuli

Christian W. Eurich, Stefan D. Wilke and Helmut Schwegler
Institut für Theoretische Physik, Universität Bremen, Germany
(eurich,swilke,schwegler)@physik.uni-bremen.de

Abstract

The encoding accuracy of a population of stochastically spiking neurons is studied for different distributions of their tuning widths. The situation of identical radially symmetric receptive fields for all neurons, which is usually considered in the literature, turns out to be disadvantageous from an information-theoretic point of view. Both a variability of tuning widths and a fragmentation of the neural population into specialized subpopulations improve the encoding accuracy.

1 Introduction

The topic of neuronal tuning properties and their functional significance has attracted much attention in the last decades. However, neither empirical findings nor theoretical considerations have yielded a unified picture of optimal neural encoding strategies given a sensory or motor task. More specifically, the question as to whether narrow tuning or broad tuning is advantageous for the representation of a set of stimulus features is still being discussed. Empirically, both situations are encountered: small receptive fields whose diameter is less than one degree can, for example, be found in the human retina [7], and large receptive fields up to 180° in diameter occur in the visual system of tongue-projecting salamanders [10]. On the theoretical side, arguments have been put forward for small [8] as well as for large [5, 1, 9, 3, 13] receptive fields. In the last years, several approaches have been made to calculate the encoding accuracy of a neural population as a function of receptive field size [5, 1, 9, 3, 13]. It has turned out that for a firing rate coding, large receptive fields are advantageous provided that D ≥ 3 stimulus features are encoded [9, 13]. For binary neurons, large receptive fields are advantageous also for D = 2 [5, 3]. However, so far only radially symmetric tuning curves have been considered. For neural populations which lack this symmetry, the situation may be very different. Here we study the encoding accuracy of a population of stochastically spiking neurons. A Fisher information analysis performed on different distributions of tuning widths will indeed reveal a much more detailed picture of neural encoding strategies.

2 Model

Consider a D-dimensional stimulus space, X. A stimulus is characterized by a position x = (x_1, …, x_D) ∈ X, where the value of feature i, x_i (i = 1, …, D), is measured relative to the total range of values in the i-th dimension such that it is dimensionless. Information about the stimulus is encoded by a population of N stochastically spiking neurons. They are assumed to have independent spike generation mechanisms, such that the joint probability distribution for observing n = (n^(1), …, n^(k), …, n^(N)) spikes within a time interval T, P_S(n; x), can be written in the form

P_S(n; x) = ∏_{k=1}^{N} P_S^(k)(n^(k); x),   (1)

where P_S^(k)(n^(k); x) is the single-neuron probability distribution of the number of observed spikes given the stimulus at position x.
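As a concrete reading of equation (1), the following sketch evaluates the joint probability of a population spike-count vector as a product of single-neuron terms. Poisson spike generation and Gaussian tuning are assumptions made here for illustration (the single-neuron distribution S and the shape function φ are left generic in the text); all names are hypothetical.

```python
import math
import numpy as np

def tuning(x, c, sigma, F=50.0):
    """Gaussian tuning: f(x) = F * phi(xi^2) with phi(z) = exp(-z)."""
    xi2 = float(np.sum(((x - c) / sigma) ** 2))
    return F * math.exp(-xi2)

def joint_prob(n, x, centers, sigmas, T=0.1):
    """Equation (1): product of independent single-neuron count distributions.
    Poisson spike generation is an assumption made for this sketch."""
    p = 1.0
    for n_k, c_k, s_k in zip(n, centers, sigmas):
        mu = tuning(x, c_k, s_k) * T          # expected count in window T
        p *= math.exp(-mu) * mu ** n_k / math.factorial(n_k)
    return p

# Tiny example: three neurons encoding a D = 2 stimulus.
x = np.array([0.5, 0.5])
centers = [np.array([0.4, 0.5]), np.array([0.6, 0.4]), np.array([0.5, 0.7])]
sigmas = [np.array([0.2, 0.2])] * 3
print(joint_prob([4, 3, 2], x, centers, sigmas))
```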
Note that (1) does not exclude a correlation of the neural firing rates, i.e., the neurons may have common input or even share the same tuning function. The firing rates depend on the stimulus via the local values of the tuning functions, such that P_S^(k)(n^(k); x) can be written in the form P_S^(k)(n^(k); x) = S(n^(k), f^(k)(x), T), where the tuning function of neuron k, f^(k)(x), gives its mean firing rate in response to the stimulus at position x. We assume here a form of the tuning function that is not necessarily radially symmetric,

f^(k)(x) = F φ( Σ_{i=1}^{D} ((x_i − c_i^(k)) / σ_i^(k))² ) = F φ(ξ^(k)²),   (2)

where c^(k) = (c_1^(k), …, c_D^(k)) is the center of the tuning curve of neuron k, σ_i^(k) is its tuning width in the i-th dimension, ξ_i^(k)² := (x_i − c_i^(k))²/σ_i^(k)² for i = 1, …, D, and ξ^(k)² := ξ_1^(k)² + … + ξ_D^(k)². F > 0 denotes the maximal firing rate of the neurons, which requires that max_{z≥0} φ(z) = 1.

We assume that the tuning widths σ_1^(k), …, σ_D^(k) of each neuron k are drawn from a distribution P_σ(σ_1, …, σ_D). For a population of tuning functions with centers c^(1), …, c^(N), a density η(x) is introduced according to η(x) := Σ_{k=1}^{N} δ(x − c^(k)).

The encoding accuracy can be quantified by the Fisher information matrix, J, which is defined as

J_ij(x) = E[ (∂ ln P(n; x)/∂x_i) (∂ ln P(n; x)/∂x_j) ],   (3)

where E[…] denotes the expectation value over the probability distribution P(n; x) [2]. The Fisher information yields a lower bound on the expected error of an unbiased estimator that retrieves the stimulus x from the noisy neural activity (Cramer-Rao inequality) [2]. The minimal estimation error for the i-th feature x_i, ε_{i,min}, is given by ε²_{i,min} = (J⁻¹)_ii, which reduces to ε²_{i,min} = 1/J_ii(x) if J is diagonal. We shall now derive a general expression for the population Fisher information. In the next chapter, several cases and their consequences for neural encoding strategies will be discussed. For model neuron (k), the Fisher information (3) reduces to

J_ij^(k)(x; σ_1^(k), …, σ_D^(k)) = (1/(σ_i^(k) σ_j^(k))) A_φ(ξ^(k)², F, T) ξ_i^(k) ξ_j^(k),   (4)

where the dependence on the tuning widths is indicated by the list of arguments. The function A_φ depends on the shape of the tuning function and is given in [13]. The independence assumption (1) implies that the population Fisher information is the sum of the contributions of the individual neurons, Σ_{k=1}^{N} J_ij^(k)(x; σ_1^(k), …, σ_D^(k)). We now define a population Fisher information which is averaged over the distribution of tuning widths P_σ(σ_1, …, σ_D):

⟨J_ij(x)⟩_σ = Σ_{k=1}^{N} ∫ dσ_1 ⋯ dσ_D P_σ(σ_1, …, σ_D) J_ij^(k)(x; σ_1, …, σ_D).   (5)

Introducing the density of tuning curves, η(x), into (5) and assuming a constant distribution, η(x) ≡ η ≡ const., one obtains the result that the population Fisher information becomes independent of x and that the off-diagonal elements of J vanish [13]. The average population Fisher information then becomes

⟨J_ij⟩_σ = η D K_φ(F, T, D) ⟨(∏_{l=1}^{D} σ_l) / σ_i²⟩_σ δ_ij,   (6)

where K_φ depends on the geometry of the tuning curves and is defined in [13].
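The quantities in equations (4)-(5) can also be checked numerically. For the special case of Gaussian tuning with Poisson spiking, the single-neuron Fisher matrix takes the form (4) with A_φ(ξ², F, T) = 4TF e^{−ξ²}; this closed form follows from J_ij = T (∂_i f)(∂_j f)/f for a Poisson neuron and is used below as an illustrative assumption, not as the paper's general A_φ.

```python
import numpy as np

def fisher_single(x, c, sigma, F=50.0, T=0.1):
    """Equation (4) for Gaussian tuning and Poisson spiking:
    J_ij = (1/(sigma_i*sigma_j)) * A_phi(xi^2) * xi_i * xi_j,
    with A_phi(xi^2) = 4*T*F*exp(-xi^2) in this special case."""
    xi = (x - c) / sigma
    A = 4.0 * T * F * np.exp(-np.dot(xi, xi))
    return A * np.outer(xi / sigma, xi / sigma)

def fisher_population(x, centers, sigmas, **kw):
    """Sum of single-neuron contributions (independence assumption (1))."""
    return sum(fisher_single(x, c, s, **kw) for c, s in zip(centers, sigmas))

# Neurons on a grid covering [0,1]^2 with identical widths.
grid = np.linspace(0.0, 1.0, 15)
centers = [np.array([a, b]) for a in grid for b in grid]
sigmas = [np.array([0.15, 0.15])] * len(centers)
J = fisher_population(np.array([0.5, 0.5]), centers, sigmas)
print("J =", J)            # off-diagonal terms nearly vanish for dense coverage
print("eps^2 >=", np.diag(np.linalg.inv(J)))   # Cramer-Rao bound per feature
```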
3 Results

In this section, we consider different distributions of tuning widths in (6) and discuss advantageous and disadvantageous strategies for obtaining a high representational accuracy in the neural population.

Radially symmetric tuning curves. For radially symmetric tuning curves of width σ̄, the tuning-width distribution reads

P_σ(σ_1, …, σ_D) = ∏_{i=1}^{D} δ(σ_i − σ̄);

see Fig. 1a for a schematic visualization of the arrangement of the tuning widths for the case D = 2. The average population Fisher information (6) for i = j becomes

⟨J_ii⟩_σ = η D K_φ(F, T, D) σ̄^(D−2),   (7)

a result already obtained by Zhang and Sejnowski [13]. Equation (7) basically shows that the minimal estimation error increases with σ̄ for D = 1, that it does not depend on σ̄ for D = 2, and that it decreases as σ̄ increases for D ≥ 3. We shall discuss the relevance of this case below.

Identical tuning curves without radial symmetry. Next we discuss tuning curves which are identical but not radially symmetric; the tuning-width distribution for this case is

P_σ(σ_1, …, σ_D) = ∏_{i=1}^{D} δ(σ_i − σ̄_i),

where σ̄_i denotes the fixed width in dimension i. For i = j, the average population Fisher information (6) reduces to [11, 4]

⟨J_ii⟩_σ = η D K_φ(F, T, D) (∏_{l=1}^{D} σ̄_l) / σ̄_i².   (8)

Figure 1: Visualization of different distributions of tuning widths for D = 2. (a) Radially symmetric tuning curves. The dot indicates a fixed σ̄, while the diagonal line symbolizes a variation in σ̄ discussed in [13]. (b) Identical tuning curves which are not radially symmetric. (c) Tuning widths uniformly distributed within a small rectangle. (d) Two subpopulations, each of which is narrowly tuned in one dimension and broadly tuned in the other direction.

Equation (8) contains (7) as a special case. From (8) it becomes immediately clear that the expected minimal square encoding error for the i-th stimulus feature, ε²_{i,min} = 1/⟨J_ii(x)⟩_σ, depends on i, i.e., the population specializes in certain features. The error obtained in dimension i thereby depends on the tuning widths in all dimensions. Which encoding strategy is optimal for a population whose task it is to encode a single feature, say feature i, with high accuracy while not caring about the other dimensions? In order to answer this question, we re-write (8) in terms of receptive field overlap. For the tuning functions f^(k)(x) encountered empirically, large values of the single-neuron Fisher information (4) are typically restricted to a region around the center of the tuning function, c^(k). The fraction p(β) of the Fisher information that falls into a region ξ^(k) ≤ β around c^(k) is given by

p(β) := (∫_{ξ^(k)≤β} d^D x Σ_{i=1}^{D} J_ii^(k)(x)) / (∫ d^D x Σ_{i=1}^{D} J_ii^(k)(x)) = (∫_0^β dξ ξ^(D+1) A_φ(ξ², F, T)) / (∫_0^∞ dξ ξ^(D+1) A_φ(ξ², F, T)),   (9)

where the index (k) was dropped because the tuning curves are assumed to have identical shapes. Equation (9) allows the definition of an effective receptive field, RF^(k)_{p_0}, inside of which neuron k conveys a major fraction p_0 of the Fisher information: RF^(k)_{p_0} := {x | ξ^(k) ≤ β_0}, where β_0 is chosen such that p(β_0) = p_0. The Fisher information a neuron k carries is small unless x ∈ RF^(k)_{p_0}. This has the consequence that a fixed stimulus x is actually encoded only by a subpopulation of neurons. The point x in stimulus space is covered by

N_code := η (2π^(D/2) β_0^D) / (D Γ(D/2)) ∏_{j=1}^{D} σ_j   (10)

receptive fields. With the help of (10), the average population Fisher information (8) can be re-written as

⟨J_ii⟩_σ = (D² Γ(D/2) K_φ(F, T, D)) / (2π^(D/2) β_0^D) · N_code / σ_i².   (11)

Equation (11) can be interpreted as follows: We assume that the population of neurons encodes stimulus dimension i accurately, while all other dimensions are of secondary importance. The average population Fisher information for dimension i, ⟨J_ii⟩_σ, is determined by the tuning width in dimension i, σ_i, and by the size of the active subpopulation, N_code. There is a tradeoff between these quantities. On the one hand, the encoding error can be decreased by decreasing σ_i, which enhances the Fisher information carried by each single neuron. Decreasing σ_i, on the other hand, will also shrink the active subpopulation via (10). This impairs the encoding accuracy, because the stimulus position is evaluated from the activity of fewer neurons. If (11) is valid due to a sufficient receptive field overlap, N_code can be increased by increasing the tuning widths σ_j in all other dimensions j ≠ i. This effect is illustrated in Fig. 2 for D = 2.
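The tradeoff expressed by (8) and (11) is easy to probe directly: narrowing σ_i raises ⟨J_ii⟩ through the 1/σ_i² factor, and broadening the remaining widths raises it further through the product term (equivalently, through N_code). A minimal sketch, with η and K_φ set to 1 since only ratios matter here:

```python
import numpy as np

def J_ii(sigma, i, D, eta=1.0, K_phi=1.0):
    """Equation (8): <J_ii> = eta * D * K_phi * prod(sigma) / sigma_i^2."""
    return eta * D * K_phi * np.prod(sigma) / sigma[i] ** 2

D, i = 3, 0
cases = [
    ("uniform widths",                np.array([0.2, 0.2, 0.2])),
    ("narrow sigma_i",                np.array([0.1, 0.2, 0.2])),
    ("narrow sigma_i, broad sigma_j", np.array([0.1, 0.4, 0.4])),
]
for name, s in cases:
    print(f"{name:32s} J_ii = {J_ii(s, i, D):.3f}")
```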
Figure 2: Encoding strategy for a stimulus characterized by parameters x_{1,s} and x_{2,s}. Feature x_1 is to be encoded accurately. Effective receptive field shapes are indicated for both populations. If neurons are narrowly tuned in x_2 (left), the active population (solid) is small (here: N_code = 3). Broadly tuned receptive fields for x_2 (right) yield a much larger population (here: N_code = 27), thus increasing the encoding accuracy.

It shall be noted that although a narrow tuning width σ_i is advantageous, the limit σ_i → 0 yields a bad representation. For narrowly tuned cells, gaps appear between the receptive fields: the condition η(x) ≡ const. breaks down, and (6) is no longer valid. A more detailed calculation shows that the encoding error diverges as σ_i → 0 [4]. The fact that the encoding error decreases for both narrow tuning and broad tuning, due to (11), proves the existence of an optimal tuning width. An example is given in Fig. 3a.

Figure 3: (a) Example for the encoding behavior with narrow tuning curves arranged on a regular lattice of dimension D = 1 (grid spacing Δ). Tuning curves are Gaussian, and neural firing is modeled as a Poisson process. Dots indicate the minimal square encoding error averaged over a uniform distribution of stimuli, ⟨ε²_min⟩, as a function of σ. The minimum is clearly visible. The dotted line shows the corresponding approximation according to (8). The inset shows Gaussian tuning curves of optimal width, σ_opt ≈ 0.4Δ. (b) g_D(λ) as a function of λ for different values of D.

Narrow distribution of tuning curves. In order to study the effects of encoding the stimulus with distributed tuning widths instead of identical tuning widths as in the previous cases, we now consider the distribution

P_σ(σ_1, …, σ_D) = ∏_{i=1}^{D} (1/b_i) Θ[σ_i − (σ̄_i − b_i/2)] Θ[(σ̄_i + b_i/2) − σ_i],   (12)

where Θ denotes the Heaviside step function. Equation (12) describes a uniform distribution in a D-dimensional cuboid of size b_1, …, b_D around (σ̄_1, …, σ̄_D); cf. Fig. 1c. A straightforward calculation shows that in this case, the average population Fisher information (6) for i = j becomes

⟨J_ii⟩_σ = η D K_φ(F, T, D) (∏_{l=1}^{D} σ̄_l / σ̄_i²) { 1 + (1/12)(b_i/σ̄_i)² + O[(b_i/σ̄_i)⁴] }.   (13)
A comparison with (8) yields the astonishing result that an increase in b_i results in an increase in the i-th diagonal element of the average population Fisher information matrix, and thus in an improvement in the encoding of the i-th stimulus feature, while the encoding in dimensions j ≠ i is not affected. Correspondingly, the total encoding error can be decreased by increasing an arbitrary number of edge lengths of the cuboid. The encoding by a population with a variability in the tuning curve geometries as described is more precise than that by a uniform population. This is true for arbitrary D. Zhang and Sejnowski [13] consider the more artificial situation of a correlated variability of the tuning widths: tuning curves are always assumed to be radially symmetric. This is indicated by the diagonal line in Fig. 1a. A distribution of tuning widths restricted to this subset yields an average population Fisher information ∝ ⟨σ^(D−2)⟩ and does not improve the encoding for D = 2 or D = 3.

Fragmentation into D subpopulations. Finally, we study a family of distributions of tuning widths which also yields a lower minimal encoding error than the uniform population. Let the density of tuning curves be given by

P_σ(σ_1, …, σ_D) = (1/D) Σ_{i=1}^{D} δ(σ_i − λσ̄) ∏_{j≠i} δ(σ_j − σ̄),   (14)

where λ > 0. For λ = 1, the population is uniform as in (7). For λ ≠ 1, the population is split up into D subpopulations; in subpopulation i, σ_i is modified while σ_j ≡ σ̄ for j ≠ i. See Fig. 1d for an example. The diagonal elements of the average population Fisher information are

⟨J_ii⟩_σ = η D K_φ(F, T, D) σ̄^(D−2) g_D(λ),  g_D(λ) := (1 + (D − 1)λ²) / (Dλ).   (15)

⟨J_ii⟩_σ does not depend on i in this case because of the symmetry in the subpopulations. Equation (15) and the uniform case (7) differ by g_D(λ), which will now be discussed. Figure 3b shows g_D(λ) for different values of D. For λ = 1, g_D(λ) = 1 and (7) is recovered as expected. g_D(λ) = 1 also holds for λ = 1/(D − 1) < 1: narrowing one tuning width in each subpopulation will at first decrease the resolution provided D ≥ 3; this is due to the fact that N_code is decreased. For λ < 1/(D − 1), however, g_D(λ) > 1, and the resolution exceeds ⟨J_ii⟩_σ in (7) because each neuron in the i-th subpopulation carries a high Fisher information in the i-th dimension. D = 2 is a special case where no impairment of encoding occurs because the effect of a decrease of N_code is less pronounced. Interestingly, an increase in λ also yields an improvement in the encoding accuracy. This is a combined effect resulting from an increase in N_code on the one hand and the existence of D subpopulations, D − 1 of which maintain their tuning widths in each dimension, on the other hand.

The discussion of g_D(λ) leads to the following encoding strategy. For small λ, ⟨J_ii⟩_σ increases rapidly, which suggests a fragmentation of the population into D subpopulations, each of which encodes one feature with high accuracy, i.e., one tuning width in each subpopulation is small whereas the remaining tuning widths are broad. Like in the case discussed above, the theoretical limit of this method is a breakdown of the approximation η ≡ const. and of the validity of (6) due to insufficient receptive field overlap.
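The factor g_D(λ) from equation (15) can be evaluated directly; the sketch below reproduces the qualitative behavior plotted in Figure 3b (the sample points are arbitrary):

```python
import numpy as np

def g(D, lam):
    """Equation (15): gain of the fragmented population over the uniform one."""
    return (1.0 + (D - 1) * lam ** 2) / (D * lam)

lams = np.linspace(0.05, 2.0, 10)
for D in (2, 3, 5):
    vals = ", ".join(f"{g(D, l):.2f}" for l in lams)
    print(f"D={D}: g_D = [{vals}]")
    # g_D(1) = 1 and g_D(1/(D-1)) = 1; outside that interval g_D exceeds 1.
```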
4 Discussion and Outlook

We have discussed the effects of a variation of the tuning widths on the encoding accuracy obtained by a population of stochastically spiking neurons. The question of an optimal tuning strategy has turned out to be more complicated than previously assumed. More specifically, the case which has attracted most attention in the literature, radially symmetric receptive fields [5, 1, 9, 3, 13], yields a worse encoding accuracy than most other cases we have studied: uniform populations with tuning curves which are not radially symmetric; distributions of tuning curves around some symmetric or non-symmetric tuning curve; and the fragmentation of the population into D subpopulations, each of which is specialized in one stimulus feature.

In a next step, the theoretical results will be compared to empirical data on encoding properties of neural populations. One aspect is the existence of sensory maps which consist of neural subpopulations with characteristic tuning properties for the features which are represented. For example, receptive fields of auditory neurons in the midbrain of the barn owl have elongated shapes [6]. A second aspect concerns the short-term dynamics of receptive fields. Using single-unit recordings in anaesthetized cats, Wörgötter et al. [12] observed changes in receptive field size taking place in 50-100 ms. Our findings suggest that these dynamics alter the resolution obtained for the corresponding stimulus features. The observed effect may therefore realize a mechanism of adaptable selective signal processing.

References
[1] Baldi, P. & Heiligenberg, W. (1988) Biol. Cybern. 59:313-318.
[2] Deco, G. & Obradovic, D. (1997) An Information-Theoretic Approach to Neural Computing. New York: Springer.
[3] Eurich, C. W. & Schwegler, H. (1997) Biol. Cybern. 76:357-363.
[4] Eurich, C. W. & Wilke, S. D. (2000) Neural Comp. (in press).
[5] Hinton, G. E., McClelland, J. L. & Rumelhart, D. E. (1986) In Rumelhart, D. E. & McClelland, J. L. (eds.), Parallel Distributed Processing, Vol. 1, pp. 77-109. Cambridge, MA: MIT Press.
[6] Knudsen, E. I. & Konishi, M. (1978) Science 200:795-797.
[7] Kuffler, S. W. (1953) J. Neurophysiol. 16:37-68.
[8] Lettvin, J. Y., Maturana, H. R., McCulloch, W. S. & Pitts, W. H. (1959) Proc. Inst. Radio Eng. NY 47:1940-1951.
[9] Snippe, H. P. & Koenderink, J. J. (1992) Biol. Cybern. 66:543-551.
[10] Wiggers, W., Roth, G., Eurich, C. W. & Straub, A. (1995) J. Comp. Physiol. A 176:365-377.
[11] Wilke, S. D. & Eurich, C. W. (1999) In Verleysen, M. (ed.), ESANN 99, European Symposium on Artificial Neural Networks, pp. 435-440. Brussels: D-Facto.
[12] Wörgötter, F., Suder, K., Zhao, Y., Kerscher, N., Eysel, U. T. & Funke, K. (1998) Nature 396:165-168.
[13] Zhang, K. & Sejnowski, T. J. (1999) Neural Comp. 11:75-84.
Learning from user feedback in image retrieval systems

Nuno Vasconcelos    Andrew Lippman
MIT Media Laboratory, 20 Ames St, E15-354, Cambridge, MA 02139
{nuno,lip}@media.mit.edu, http://www.media.mit.edu/~nuno

Abstract

We formulate the problem of retrieving images from visual databases as a problem of Bayesian inference. This leads to natural and effective solutions for two of the most challenging issues in the design of a retrieval system: providing support for region-based queries without requiring prior image segmentation, and accounting for user feedback during a retrieval session. We present a new learning algorithm that relies on belief propagation to account for both positive and negative examples of the user's interests.

1 Introduction

Due to the large amounts of imagery that can now be accessed and managed via computers, the problem of content-based image retrieval (CBIR) has recently attracted significant interest among the vision community [1, 2, 5]. Unlike most traditional vision applications, very few assumptions about the content of the images to be analyzed are allowable in the context of CBIR. This implies that the space of valid image representations is restricted to those of a generic nature (and typically of low level) and consequently the image understanding problem becomes even more complex. On the other hand, CBIR systems have access to feedback from their users that can be exploited to simplify the task of finding the desired images. There are, therefore, two fundamental problems to be addressed. First, the design of the image representation itself and, second, the design of learning mechanisms to facilitate the interaction. The two problems cannot, however, be solved in isolation, as the careless selection of the representation will make learning more difficult and vice-versa.

The impact of a poor image representation on the difficulty of the learning problem is visible in CBIR systems that rely on holistic metrics of image similarity, forcing user feedback to be relative to entire images. In response to a query, the CBIR system suggests a few images and the user rates those images according to how well they satisfy the goals of the search. Because each image usually contains several different objects or visual concepts, this rating is both difficult and inefficient. How can the user rate an image that contains the concept of interest, but in which this concept only occupies 30% of the field of view, the remaining 70% being filled with completely unrelated stuff? And how many example images will the CBIR system have to see, in order to figure out what the concept of interest is? A much better interaction paradigm is to let the user explicitly select the regions of the image that are relevant to the search, i.e. user feedback at the region level. However, region-based feedback requires sophisticated image representations. The problem is that the most obvious choice, object-based representations, is difficult to implement because it is still too hard to segment arbitrary images in a meaningful way. We have argued that a better formulation is to view the problem as one of Bayesian inference and rely on probabilistic image representations. In this paper we show that this formulation naturally leads to 1) representations with support for region-based interaction without segmentation and 2) intuitive mechanisms to account for both positive and negative user feedback.
2 Retrieval as Bayesian inference

The standard interaction paradigm for CBIR is the so-called "query by example", where the user provides the system with a few examples, and the system retrieves from the database images that are visually similar to these examples. The problem is naturally formulated as one of statistical classification: given a representation (or feature) space F, the goal is to find a map g : F → M = {1, …, K} from F to the set M of image classes in the database. K, the cardinality of M, can be as large as the number of items in the database (in which case each item is a class by itself), or smaller. If the goal of the retrieval system is to minimize the probability of error, it is well known that the optimal map is the Bayes classifier [3]

g*(x) = argmax_i P(S_i = 1|x) = argmax_i P(x|S_i = 1) P(S_i = 1),   (1)

where x are the example features provided by the user and S_i is a binary variable indicating the selection of class i. In the absence of any prior information about which class is most suited for the query, an uninformative prior can be used and the optimal decision is the maximum likelihood criterion

g*(x) = argmax_i P(x|S_i = 1).   (2)

Besides theoretical soundness, Bayesian retrieval has two distinguishing properties of practical relevance. First, because the features x in equation (1) can be any subset of a given query image, the retrieval criterion is valid for both region-based and image-based queries. Second, due to its probabilistic nature, the criterion also provides a basis for designing retrieval systems that can account for user feedback through belief propagation.

3 Bayesian relevance feedback

Suppose that instead of a single query x we have a sequence of t queries {x_1, …, x_t}, where t is a time stamp. By simple application of Bayes rule,

P(S_i = 1|x_1, …, x_t) = γ_t P(x_t|S_i = 1) P(S_i = 1|x_1, …, x_{t−1}),   (3)

where γ_t is a normalizing constant and we have assumed that, given the knowledge of the correct image class, the current query x_t is independent of the previous ones. This basically means that the user provides the retrieval system with new information at each iteration of the interaction. Equation (3) is a simple but intuitive mechanism to integrate information over time. It states that the system's beliefs about the user's interests at time t − 1 simply become the prior beliefs for iteration t. New data provided by the user at time t is then used to update these beliefs, which in turn become the priors for iteration t + 1. From a computational standpoint the procedure is very efficient since the only quantity that has to be computed at each time step is the likelihood of the data in the corresponding query. Notice that this is exactly equation (2) and would have to be computed even in the absence of any learning. By taking logarithms and solving for the recursion, equation (3) can also be written as

log P(S_i = 1|x_1, …, x_t) = Σ_{k=0}^{t−1} log γ_{t−k} + Σ_{k=0}^{t−1} log P(x_{t−k}|S_i = 1) + log P(S_i = 1),   (4)

exposing the main limitation of the belief propagation mechanism: for large t the contribution, to the right-hand side of the equation, of the new data provided by the user is very small, and the posterior probabilities tend to remain constant.
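A minimal sketch of the update (3): the beliefs are multiplied by the likelihood of each new query and renormalized. The class likelihoods are stubbed out with arbitrary numbers; in the actual system they come from the per-class feature densities.

```python
import numpy as np

def update_beliefs(prior, likelihoods):
    """Equation (3): posterior is proportional to P(x_t|S_i=1) * prior."""
    post = likelihoods * prior
    return post / post.sum()     # gamma_t is the normalizing constant

K = 4                            # number of image classes
beliefs = np.full(K, 1.0 / K)    # uninformative prior
# Hypothetical likelihoods P(x_t | S_i = 1) for three successive queries.
queries = [np.array([0.2, 0.5, 0.1, 0.2]),
           np.array([0.1, 0.6, 0.1, 0.2]),
           np.array([0.3, 0.4, 0.1, 0.2])]
for lik in queries:
    beliefs = update_beliefs(beliefs, lik)
    print(beliefs, "-> best class:", beliefs.argmax())
```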
The saturation just described can be avoided by penalizing older terms with a decay factor α_{t−k}, replacing the right-hand side of (4) with

Σ_{k=0}^{t−1} α_{t−k} log γ_{t−k} + Σ_{k=0}^{t−1} α_{t−k} log P(x_{t−k}|S_i = 1) + α_0 log P(S_i = 1),

where α_t is a monotonically decreasing sequence. In particular, if α_{t−k} = α(1 − α)^k, α ∈ (0, 1], we have

log P(S_i = 1|x_1, …, x_t) = log γ*_t + α log P(x_t|S_i = 1) + (1 − α) log P(S_i = 1|x_1, …, x_{t−1}).

Because γ*_t does not depend on i, the optimal class is

S*_i = argmax_i { α log P(x_t|S_i = 1) + (1 − α) log P(S_i = 1|x_1, …, x_{t−1}) }.   (5)
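Equation (5) turns the running product into an exponentially weighted update in the log domain, so old queries are gradually forgotten. A sketch, with α as the single tuning parameter (all values are illustrative):

```python
import numpy as np

def decayed_update(log_belief, log_lik, alpha=0.6):
    """Equation (5): score = alpha*log P(x_t|S_i) + (1-alpha)*previous score."""
    score = alpha * log_lik + (1.0 - alpha) * log_belief
    # Renormalize in the log domain; the gamma* term drops out of the argmax.
    return score - np.logaddexp.reduce(score)

K = 4
log_belief = np.full(K, -np.log(K))
log_lik = np.log(np.array([0.1, 0.6, 0.1, 0.2]))  # hypothetical likelihoods
log_belief = decayed_update(log_belief, log_lik)
print(np.exp(log_belief))
```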
,YI) t where x are the positive and Y the negative examples, and we have assumed that, given the positive (negative) examples, the posterior probability of a given class being (not being) the target is independent of the negative (positive) examples. Once again, the procedure of the previous section can be used to obtain a recursive version of this equation and include a decay factor which penalizes ancient terms Si* = arg m~x { a 1og P(xtI Si=l) + ( 1 - a )1 og P(Si=I IXl, ... ,Xt-l)} . t P(YtISi = 0) = 0IYl, ... , Yt-d P(Si Using equations (4) and (6) P(Si = 0IYI , ... ,yt) <X IT P(YkISi = 0) = IT P(YkISi = 1) k <X k P(Si=IIYl, ... ,Yt), we obtain Si* = arg m~x { a 1og P(XtISi = 1) t P(YtISi=l) + (1 - a )1 og P(Si = l lxl, ... ,Xt-l)} _. P(Si=IIYl, ... ,Yt-d (7) While maximizing the ratio of posterior probabilities is a natural way to favor image classes that explain well the positive examples and poorly the negative ones, it tends to over-emphasize the importance of negative examples. In particular, any class with zero probability of generating the negative examples will lead to a ratio of 00, even if it explains very poorly the positive examples. To avoid this problem we proceed in two steps: ? start by solving equation (5), i.e. sort the classes according to how well they explain the positive examples. ? select the subset of the best N classes and solve equation (7) considering only the classes in this subset. 5 Experimental evaluation We performed experiments to evaluate 1) the accuracy of Bayesian retrieval on regionbased queries and 2) the improvement in retrieval performance achievable with relevance 981 Leamingfrom User Feedback in Image Retrieval Systems feedback. Because in a normal browsing scenario it is difficult to know the ground truth for the retrieval operation (at least without going through the tedious process of hand-labeling all images in the database), we relied instead on a controlled experimental set up for which ground truth is available. All experiments reported on this section are based on the widely used Brodatz texture database which contains images of 112 textures, each of them being represented by 9 different patches, in a total of 1008 images. These were split into two groups, a small one with 112 images (one example of each texture), and a larger one with the remaining 896. We call the first group the test database and the second the Brodatz database. A synthetic database with 2000 images was then created from the larger set by randomly selecting 4 images at a time and making a 2 x 2 tile out of them. Figure 1 b) and c) are two examples of these tiles. We call this set the tile database. 5.1 Region-based queries We performed two sets of experiments to evaluate the performance of region-based queries. In both cases the test database was used as a test set and the image features were the coefficients of the discrete cosine transform (DCT) of an 8 x 8 block-wise image decomposition over a grid containing every other image pixel. The first set of experiments was performed on the Brodatz database while the tile database was used in the second. A mixture of 16 Gaussians was estimated, using EM, for each of the images in the two databases. In both sets of experiments, each query consisted of selecting a few image blocks from an image in the test set, evaluating equation (2) for each of the classes and returning those that best explained the query. 
Performance was measured in terms of precision (percent of the retrieved images that are relevant to the query) and recall (percent of the relevant images that are retrieved) averaged over the entire test set. The query images contained a total of 256 non-overlapping blocks. The number of these that were used in each query varied between 1 (0.3 % of the image size) and 256 (100 %). Figure 2 depicts precision-recall plots as a function of this number. - ., ==u~ ~\ .... -~,,? - ,M_ ...,.121_ -- - ,-...121_ 'M_ - ~~~~~u~~ .. ~.~.~ ..~~~ Figure 2: Precision-recall curves as a function of the number of feature vectors included in the query. Left: Brodatz database. Right: Tile database. The graph on the left is relative to the Brodatz database. Notice that precision is generally high even for large values of recall and the performance increases quickly with the percentage of feature vectors included in the query. In particular, 25% of the texture patch (64 blocks) is enough to achieve results very close to those obtained with all pixels. This shows that the retrieval criteria is robust to missing data. The graph on the left presents similar results for the tile database. While there is some loss in performance, this loss is not dramatic - a decrease between 10 and 15 % in precision for any given recall. In fact, the results are still good: when a reasonable number of feature vectors is included in the query, about 8.5 out of the 10 top retrieved images are, on average, relevant. Once again, performance improves rapidly with the number of feature vectors in the query and 25% of N. Vasconcelos and A. Lippman 982 the image is enough for results comparable to the best. This confirms the argument that Bayesian retrieval leads to effective region-based queries even for imagery composed by mUltiple visual stimulae. 5.2 Learning The performance of the learning algorithm was evaluated on the tile database. The goal was to determine if it is possible to reach a desired target image by starting from a weakly related one and providing positive and negative feedback to the retrieval system. This simulates the interaction between a real user and the CBIR system and is an iterative process, where each iteration consists of selecting a few examples, using them as queries for retrieval and examining the top M retrieved images to find examples for the next iteration. M should be small since most users are not willing to go through lots of false positives to find the next 10 corresponding to one screenful of images. query. In all experiments we set M = The most complex problem in testing is to determine a good strategy for selecting the examples to be given to the system. The closer this strategy is to what a real user would do, the higher the practical significance of the results. However, even when there is clear ground truth for the retrieval (as is the case of the tile database) it is not completely clear how to make the selection. While it is obvious that regions of texture classes that appear in the target should be used as positive feedback, it is much harder to determine automatically what are good negative examples. As Figure 1 d) illustrates, there are cases in which textures from two different classes are visually similar. Selecting images from one of these classes as a negative example for the other will be a disservice to the learner. While real users tend not to do this, it is hard to avoid such mistakes in an automatic setting, unless one does some sort of pre-classification of the database. 
Because we wanted to avoid such pre-classification we decided to stick with a simple selection procedure and live with these mistakes. At each step of the iteration, examples were selected in the following way: among the 10 top images returned by the retrieval system, the one with most patches from texture classes also present in the target image was selected to be the next query. One block from each patch in the query was then used as a positive (negative) example if the class of that patch was also (was not) represented in the target image. This strategy is a worst-case scenario. First, the learner might be confused by conflicting negative examples. Second, as seen above, better retrieval performance can be achieved if more than one block from each region is included in the queries. However, using only one block reduced the computational complexity of each iteration, allowing us to average results over several runs of the learning process . We performed 100 runs with randomly selected target images. In all cases, the initial query image was the first in the database containing one class in common with the target. The performance of the learning algorithm can be evaluated in various ways. We considered two metrics: the percentage of the runs which converged to the right target, and the number of iterations required for convergence. Because, to prevent the learner from entering loops, any given image could only be used once as a query, the algorithm can diverge in two ways. Strong divergence occurs when, at a given time step, the images (among the top 10) that can be used as queries do not contain any texture class in common with the target. In such situation, a real user will tend to feel that the retrieval system is incoherent and abort the search. Weak divergence occurs when all the top 10 images have previously been used. This is a less troublesome situation because the user could simply look up more images (e.g. the next 10) to get new examples. We start by analyzing the results obtained with positive feedback only. Figure 3 a) and b) present plots of the convergence rate and median number of iterations as a function of the decay factor Q. While when there is no learning (Q = I) only 43% of the runs converge, 983 Learningfrom User Feedback in Image Retrieval Systems the convergence rate is always higher when learning takes place and for a significant range of 0 (0 E [0.5,0.8]) it is above 60%. This not only confirms that learning can lead to significant improvements of retrieval performance but also shows that a precise selection of o is not crucial. Furthermore, when convergence occurs it is usually very fast, taking from 4 to 6 iterations. On the other hand, a significant percentage of runs do not converge and the majority of these are cases of strong divergence. As illustrated by Figure 3 c) and d), this percentage decreases significantly when both positive and negative examples are allowed. The rate of convergence is in this case usually between 80 and 90 % and strong divergence never occurs. And while the number of iterations for convergence increases, convergence is still fast (usually below 10 iterations). This is indeed the great advantage of negative examples: they encourage some exploration of the database which avoids local minima and leads to convergence. Notice that, when there is no learning, the convergence rate is high and learning can actually increase the rate of divergence. 
We believe that this is due to the inconsistencies associated with the negative example selection strategy. However, when convergence occurs, it is always faster if learning is employed. ," - ~ - ~ - - _. - - - - _ ..0 -1,-,-.. ... ==-1 ......IT a) 1ft I? ? ? It .A ? ., .ft .......... 1M I b) I?.? ==-1 .... '0 - " .ft c) '.1 ... I.e ... I U UI d) Figure 3: Learning performance as a function of 0_ Left: Percent of runs which converged. Right: Median number of iterations. Top: positive examples. Bottom: positive and negative examples. References [1] S. Belongie, C. Carson, H. Greenspan, and J. Malik. Color-and texture-based image segmentation using EM and its application to content-based image retrieval. In International Conference on Computer Vision, pages 675-682, Bombay, India, 1998. [2] I. Cox, M. Miller, S. Omohundro, and P. Yianilos. Pic Hunter: Bayesian Relevance Feedback for Image Retrieval. In Int. Con! on Pattern Recognition, Vienna, Austria, 1996. [3] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. SpringerVerlag, 1996. [4] A. Gelman, J. Carlin, H. Stem, and D. Rubin. Bayesian Data Analysis. Chapman Hall, 1995. [5] A. Pentland, R. Picard, and S. Sclaroff. Photobook: Content-based Manipulation of Image Databases. International Journal of Computer Vision, Vol. 18(3):233-254, June 1996. PART IX CONTROL, NAVIGATION AND PLANNING
Building Predictive Models from Fractal Representations of Symbolic Sequences

Peter Tino    Georg Dorffner
Austrian Research Institute for Artificial Intelligence
Schottengasse 3, A-1010 Vienna, Austria
{petert,georg}@ai.univie.ac.at

Abstract

We propose a novel approach for building finite memory predictive models similar in spirit to variable memory length Markov models (VLMMs). The models are constructed by first transforming the n-block structure of the training sequence into a spatial structure of points in a unit hypercube, such that the longer is the common suffix shared by any two n-blocks, the closer lie their point representations. Such a transformation embodies a Markov assumption: n-blocks with long common suffixes are likely to produce similar continuations. Finding a set of prediction contexts is formulated as a resource allocation problem solved by vector quantizing the spatial n-block representation. We compare our model with both the classical and variable memory length Markov models on three data sets with different memory and stochastic components. Our models have a superior performance, yet their construction is fully automatic, which is shown to be problematic in the case of VLMMs.

1 Introduction

Statistical modeling of complex sequences is a prominent theme in machine learning due to its wide variety of applications (see e.g. [5]). Classical Markov models (MMs) of finite order are simple, yet widely used models for sequences generated by stationary sources. However, MMs can become hard to estimate due to the familiar explosive increase in the number of free parameters when increasing the model order. Consequently, only low order MMs can be considered in practical applications. Some time ago, Ron, Singer and Tishby [4] introduced at this conference a Markovian model that could (at least partially) overcome the curse of dimensionality in classical MMs. The basic idea behind their model was simple: instead of fixed-order MMs, consider variable memory length Markov models (VLMMs) with a "deep" memory just where it is really needed (see also e.g. [5][7]). The size of VLMMs is usually controlled by one or two construction parameters. Unfortunately, constructing a series of increasingly complex VLMMs (for example, to enter a model selection phase on a validation set) by varying the construction parameters can be a troublesome task [1]. Construction often does not work "smoothly" with varying the parameters. There are large intervals of parameter values yielding unchanged VLMMs, interleaved with tiny parameter regions corresponding to a large spectrum of VLMM sizes. In such cases it is difficult to fully automate the VLMM construction. To overcome this drawback, we suggest an alternative predictive model similar in spirit to VLMMs. Searching for the relevant prediction contexts is reformulated as a resource allocation problem in Euclidean space solved by vector quantization. A potentially prohibitively large set of all length-L blocks is assigned to a much smaller set of prediction contexts on a suffix basis. To that end, we first transform the set of L-blocks appearing in the training sequence into a set of points in Euclidean space, such that points corresponding to blocks sharing a long common suffix are mapped close to each other. Vector quantization on such a set partitions the set of L-blocks into several classes dominated by common suffixes. Quantization centers play the role of predictive contexts.

A great advantage of our model is that vector quantization can be performed on a completely self-organized basis. We compare our model with both classical MMs and VLMMs on three data sets representing a wide range of grammatical and statistical structure. First, we train the models on the Feigenbaum binary sequence with a very strict topological and metric organization of allowed subsequences. Highly specialized, deep prediction contexts are needed to model this sequence. Classical Markov models cannot succeed, and the full power of admitting a limited number of variable length contexts can be exploited. The second data set consists of quantized daily volatility changes of the Dow Jones Industrial Average (DJIA). Predictive models are used to predict the direction of the volatility move for the next day. Financial time series are known to be highly stochastic with a relatively shallow memory structure. In this case, it is difficult to beat low-order classical MMs. One can perform better than MMs only by developing a few deeper specialized contexts, but that, on the other hand, can lead to overfitting. Finally, we test our model on the experiments of Ron, Singer and Tishby with language data from the Bible [5]. They trained classical MMs and a VLMM on the books of the Bible except for the book of Genesis. Then the models were evaluated on the basis of negative log-likelihood on an unseen text from Genesis. We compare likelihood results of our model with those of MMs and VLMMs.

2 Predictive models

We consider sequences S = s_1 s_2 … over a finite alphabet A = {1, 2, …, A} generated by stationary sources. The set of all sequences over A with exactly n symbols is denoted by A^n.
A great advantage of our model is that vector quantization can be performed on a completely self-organized basis. We compare our model with both classical MMs and VLMMs on three data sets representing a wide range of grammatical and statistical structure. First, we train the models on the Feigenbaum binary sequence with a very strict topological and metric organization of allowed subsequences. Highly specialized, deep prediction contexts are needed to model this sequence. Classical Markov models cannot succeed, and the full power of admitting a limited number of variable length contexts can be exploited. The second data set consists of quantized daily volatility changes of the Dow Jones Industrial Average (DJIA). Predictive models are used to predict the direction of the volatility move for the next day. Financial time series are known to be highly stochastic with a relatively shallow memory structure. In this case, it is difficult to beat low-order classical MMs. One can perform better than MMs only by developing a few deeper specialized contexts, but that, on the other hand, can lead to overfitting. Finally, we test our model on the experiments of Ron, Singer and Tishby with language data from the Bible [5]. They trained classical MMs and a VLMM on the books of the Bible except for the book of Genesis. Then the models were evaluated on the basis of negative log-likelihood on an unseen text from Genesis. We compare likelihood results of our model with those of MMs and VLMMs.

2 Predictive models

We consider sequences S = s_1 s_2 ... over a finite alphabet A = {1, 2, ..., A} generated by stationary sources. The set of all sequences over A with exactly n symbols is denoted by A^n.

An information source over A = {1, 2, ..., A} is defined by a family of consistent probability measures P_n on A^n, n = 0, 1, 2, ...:

Σ_{s∈A} P_{n+1}(ws) = P_n(w), for all w ∈ A^n

(A^0 = {λ} and P_0(λ) = 1, where λ denotes the empty string).

In applications it is useful to consider probability functions P_n that are easy to handle. This can be achieved, for example, by assuming a finite source memory of length at most L, and formulating the conditional measures P(s|w) = P_{L+1}(ws)/P_L(w), w ∈ A^L, using a function c : A^L → C, from L-blocks over A to a (presumably small) finite set C of prediction contexts:

P(s|w) = P(s|c(w)).    (1)

In Markov models (MMs) of order n ≤ L, for all L-blocks w ∈ A^L, c(w) is the length-n suffix of w, i.e. c(uv) = v, v ∈ A^n, u ∈ A^{L-n}. In variable memory length Markov models (VLMMs), the suffixes c(w) of L-blocks w ∈ A^L can have different lengths, depending on the particular L-block w. For strategies of selecting and representing the prediction contexts through prediction suffix trees and/or probabilistic suffix automata see, for example, [4][5]. VLMM construction is controlled by one or several parameters regulating the selection of candidate contexts and growing/pruning decisions. The prediction context function c : A^L → C in Markov models of order n ≤ L can be interpreted as a natural homomorphism c : A^L → A^L/≡ corresponding to an equivalence relation ≡ ⊆ A^L × A^L on L-blocks over A: two L-blocks u, v are in the same class, i.e. (u, v) ∈ ≡, if they share the same suffix of length n. The factor set A^L/≡ = C = A^n consists of all n-blocks over A. Classical MMs define the equivalence ≡ on a suffix basis, but regardless of the suffix structure present in the training data.
Our idea is to keep the Markov-motivated suffix strategy for constructing ≡, but at the same time take into account the suffix structure of the data. Vector quantization on a set of B points in a Euclidean space positions N << B codebook vectors (CVs), each CV representing a subset of points that are closer to it than to any other CV, so that the overall error of substituting CVs for the points they represent is minimal. In other words, CVs tend to represent points lying close to each other (in a Euclidean metric). In order to use vector quantization for determining relevant prediction contexts we need to do two things:

1. Define a suitable metric in the sequence space that would correspond to Markov assumptions: (a) two sequences are "close" if they share a common suffix; (b) the longer the common suffix, the closer the sequences.

2. Define a uniformly continuous map from the sequence metric space to the Euclidean space, i.e. sequences that are close in the sequence space (i.e. share a long common suffix) are mapped close to each other in the Euclidean space.

In [6] we rigorously study a class of such spatial representations of symbolic structures. Specifically, a family of distances between two L-blocks u = u_1 u_2 ... u_{L-1} u_L and v = v_1 v_2 ... v_{L-1} v_L over A = {1, 2, ..., A}, expressed as

d_k(u, v) = Σ_{i=1}^{L} k^{L-i+1} δ(u_i, v_i),   k ≤ 1/2,    (2)

with δ(i, j) = 0 if i = j and δ(i, j) = 1 otherwise, corresponds to the Markov assumption. The parameter k influences the rate of "forgetting the past". We construct a map from the sequence metric space to the Euclidean space as follows. Associate with each symbol i ∈ A a map

t_i(x) = k·x + (1 − k)·t_i,    (3)

operating on a unit D-dimensional hypercube [0,1]^D. The dimension of the hypercube should be large enough so that each symbol i is associated with a unique vertex t_i ∈ {0,1}^D, i.e. D = ⌈log₂ A⌉ and t_i ≠ t_j whenever i ≠ j. The map σ : A^L → [0,1]^D, from L-blocks v_1 v_2 ... v_L over A to the unit hypercube,

σ(v_1 v_2 ... v_L) = t_{v_L}(t_{v_{L-1}}(...(t_{v_2}(t_{v_1}(x*)))...)) = (t_{v_L} ∘ t_{v_{L-1}} ∘ ... ∘ t_{v_2} ∘ t_{v_1})(x*),    (4)

where x* = {1/2}^D is the center of the hypercube, is uniformly continuous. Indeed, whenever two sequences u, v share a common suffix of length Q, the Euclidean distance between their point representations σ(u) and σ(v) is less than k^Q·√D. Strictly speaking, for a mathematically correct treatment of uniform continuity, we would need to consider infinite sequences. Finite blocks of symbols would then correspond to cylinder sets (see [6]). For the sake of simplicity we only deal with finite sequences.

As with classical Markov models, we define the prediction context function c : A^L → C via an equivalence ≡ on L-blocks over A: two L-blocks u, v are in the same class if their images under the map σ are represented by the same codebook vector. In this case, the set of prediction contexts C can be identified with the set of codebook vectors {b_1, b_2, ..., b_N}, b_i ∈ R^D, i = 1, 2, ..., N. We refer to predictive models with such a context function as prediction fractal machines (PFMs). The prediction probabilities (1) are determined by

P(s|b_i) = N(i, s) / Σ_{a∈A} N(i, a),   s ∈ A,    (5)

where N(i, a) is the number of (L+1)-blocks ua, u ∈ A^L, a ∈ A, in the training sequence such that the point σ(u) is allocated to the codebook vector b_i.

3 Experiments

In all experiments we constructed PFMs using a contraction coefficient k = 1/2 (see eq. (3)) and K-means as a vector quantization tool.
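The construction above is compact enough to sketch in code. The following is a minimal, self-contained illustration of the embedding of eq. (4) and the count-based probabilities of eq. (5); K-means is the quantizer named in the text, here taken from scikit-learn, and the window length and codebook size in fit_pfm are illustrative defaults rather than values prescribed by the paper.

```python
import numpy as np
from sklearn.cluster import KMeans   # the quantizer named in the text

def embed_blocks(seq, L, k=0.5):
    """Map each L-block of seq to a point of [0,1]^D via the IFS of eq. (4)."""
    symbols = sorted(set(seq))
    D = max(1, int(np.ceil(np.log2(len(symbols)))))
    verts = {s: np.array([(i >> d) & 1 for d in range(D)], dtype=float)
             for i, s in enumerate(symbols)}      # one hypercube vertex per symbol
    points = []
    for i in range(len(seq) - L + 1):
        x = np.full(D, 0.5)                       # x*: the center of the hypercube
        for s in seq[i:i + L]:
            x = k * x + (1 - k) * verts[s]        # t_s(x) = k x + (1-k) t_s, eq. (3)
        points.append(x)
    return np.array(points)

def fit_pfm(seq, L=10, N=30):
    """Quantize embedded L-blocks and count continuations, as in eq. (5)."""
    pts = embed_blocks(seq[:-1], L)               # embed every L-block u
    km = KMeans(n_clusters=N, n_init=5).fit(pts)
    counts = {}                                   # counts[i][a] = N(i, a)
    for j, ctx in enumerate(km.labels_):
        a = seq[j + L]                            # symbol following the block
        counts.setdefault(ctx, {})
        counts[ctx][a] = counts[ctx].get(a, 0) + 1
    return km, counts
```

Normalizing the counts per context then yields the conditional next-symbol probabilities P(s|b_i) of eq. (5).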
The first data set is the Feigenbaum sequence over the binary alphabet A = {1, 2}. This sequence is well-studied in symbolic dynamics and has a number of interesting properties. First, the topological structure of the sequence can only be described using a context sensitive tool - a restricted indexed context-free grammar. Second, for each block length n = 1, 2, ..., the distribution of n-blocks is either uniform, or has just two probability levels. Third, the n-block distributions are organized in a self-similar fashion (see [2]). The sequence can be specified by the subsequence composition rule (6).

We chose to work with the Feigenbaum sequence because increasingly accurate modeling of the sequence with finite memory models requires a selective mechanism for deep prediction contexts. We created a large portion of the Feigenbaum sequence and trained a series of classical MMs, variable memory length MMs (VLMMs), and prediction fractal machines (PFMs) on the first 260,000 symbols. The following 200,000 symbols formed a test set. Maximum memory length L for VLMMs and PFMs was set to 30.

As mentioned in the introduction, constructing a series of increasingly complex VLMMs by varying the construction parameters appeared to be a troublesome task. We spent a fair amount of time finding "critical" parameter values at which the model size changed. In contrast, a fully automatic construction of PFMs involved sliding a window of length L = 30 through the training set; for each window position, mapping the L-block u appearing in the window to the point σ(u) (eq. (4)), and vector-quantizing the resulting set of points (up to 30 codebook vectors). After the quantization step we computed predictive probabilities according to eq. (5).

Table 1: Normalized negative log-likelihoods (NNL) on the Feigenbaum test set.

model  # contexts    NNL     captured block distribution
PFM    2-4           0.6666  1-3
PFM    5-7           0.3333  1-6
PFM    8-22          0.1666  1-12
PFM    23-?          0.0833  1-24
VLMM   5             0.6666  1-3
VLMM   11            0.3333  1-6
VLMM   23            0.1666  1-12
VLMM   ?             0.0833  1-24
MM     2,4,8,16,32   0.6666  1-3

Negative log-likelihoods per symbol (the base of the logarithm is always taken to be the number of symbols in the alphabet) of the test set computed using the fitted models exhibited a step-like increasing tendency, shown in Table 1. We also investigated the ability of the models to reproduce the n-block distributions found in the training and test sets. This was done by letting the models generate sequences of length equal to the length of the training sequence and, for each block length n = 1, 2, ..., 30, computing the L1 distance between the n-block distributions of the training and model-generated sequences. The n-block distributions on the test and training sets were virtually the same for n = 1, 2, ..., 30. In Table 1 we show the block lengths for which the L1 distance does not exceed a small threshold Δ. We set Δ = 0.005, since in this experiment either the L1 distance was less than 0.005, or it exceeded 0.005 by a large amount.

An explanation of the step-like behavior in the log-likelihood and the n-block modeling behavior of VLMMs and PFMs is out of the scope of this paper. We briefly mention, however, that by combining the knowledge about the topological and metric structures of the Feigenbaum sequence (e.g. [2]) with a careful analysis of the models, one can show why and when an inclusion of a prediction context leads to an abrupt improvement in the modeling performance.
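Since the composition rule (6) did not survive extraction, the following sketch generates the Feigenbaum (period-doubling) sequence by the standard substitution 1 → 12, 2 → 11 - an assumption about the authors' construction - and computes the n-block L1 distances behind the "captured block distribution" column of Table 1.

```python
def feigenbaum(n_iter):
    """Hypothetical generator: iterate the substitution 1 -> 12, 2 -> 11."""
    sub = {'1': '12', '2': '11'}
    s = '1'
    for _ in range(n_iter):
        s = ''.join(sub[c] for c in s)
    return s

def block_dist(seq, n):
    """Empirical distribution of n-blocks in seq."""
    counts = {}
    for i in range(len(seq) - n + 1):
        b = seq[i:i + n]
        counts[b] = counts.get(b, 0) + 1
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def l1_distance(p, q):
    """L1 distance between two n-block distributions."""
    return sum(abs(p.get(b, 0.0) - q.get(b, 0.0)) for b in set(p) | set(q))
```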
In fact, we can show that VLMMs and PFMs constitute increasingly better approximations to the infinite self-similar Feigenbaum machine known in symbolic dynamics [2]. The classical MM totally fails in this experiment, since the context length 5 is far too small to enable the MM to mimic the complicated subsequence structure in the Feigenbaum sequence. PFMs and VLMMs quickly learn to explore a limited number of deep prediction contexts and perform comparatively well.

In the second experiment, a time series {x_t} of the daily values of the Dow Jones Industrial Average (DJIA) from Feb. 1, 1918 until April 1, 1997 was transformed into a time series of returns r_t = log x_{t+1} − log x_t, and divided into 12 partially overlapping epochs, each containing about 2300 values (spanning approximately 9 years). We consider the squared return r_t^2 a volatility estimate for day t. Volatility change forecasts (volatility is going to increase or decrease) based on historical returns can be interpreted as a buying or selling signal for a straddle (see e.g. [3]). If the volatility decreases we go short (the straddle is sold); if it increases we take a long position (the straddle is bought). In this respect, the quality of a volatility model can be measured by the percentage of correctly predicted directions of daily volatility differences.

The series {r_{t+1}^2 − r_t^2} of differences between successive squared returns is transformed into a sequence {D_t} over 4 symbols by quantizing the series:

D_t = 1 (extreme down)  if r_{t+1}^2 − r_t^2 < θ_1,
      2 (normal down)   if θ_1 ≤ r_{t+1}^2 − r_t^2 < 0,
      3 (normal up)     if 0 ≤ r_{t+1}^2 − r_t^2 < θ_2,
      4 (extreme up)    if θ_2 ≤ r_{t+1}^2 − r_t^2,    (7)

where the parameters θ_1 and θ_2 correspond to the Q percent and (100 − Q) percent sample quantiles, respectively. So, the upper (lower) Q% of all daily volatility increases (decreases) in the sample are considered extremal, and the lower (upper) (50 − Q)% of daily volatility increases (decreases) are viewed as normal.

Each epoch is partitioned into training, validation and test parts containing 1100, 600 and 600 symbols, respectively. Maximum memory length L for VLMMs and PFMs was set to 10 (two weeks). We trained classical MMs, VLMMs and PFMs with various numbers of prediction contexts (up to 256) and extremal event quantiles Q ∈ {5, 10, 15, ..., 45}. For each model class, the model size and the quantile Q to be used on the test set were selected according to the validation set performance. Performance of the models was quantified as the percentage of correct guesses of the volatility change direction for the next day. If the next symbol is 1 or 2 (3 or 4) and the sum of conditional next-symbol probabilities for 1 and 2 (3 and 4) given by a model is greater than 0.5, the model guess is considered correct. Results are shown in Table 2.

Table 2: Prediction performance on the DJIA volatility series (percent correct on the test set).

epoch   1      2      3      4      5      6      7      8      9      10     11     12
PFM     71.08  70.39  70.05  69.70  72.12  72.46  74.01  71.77  73.84  73.84  71.77  74.19
VLMM    68.67  68.18  69.25  68.79  69.41  68.29  69.83  67.00  67.96  70.76  69.80  70.25
MM      68.56  69.11  68.28  69.78  69.50  73.13  74.16  71.96  69.95  69.16  71.74  71.07

Paired t-test reveals that PFMs significantly (p < 0.005) outperform both VLMMs and classical MMs. Of course, fixed-order MMs are just special cases of VLMMs, so theoretically VLMMs cannot perform worse than MMs. We present separate results for MMs and VLMMs to illustrate practical problems in fitting VLMMs.
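As a recap of the preprocessing used in this experiment, here is a sketch of the symbol quantization of eq. (7); the use of numpy's percentile for the sample quantiles is an implementation choice, and Q = 10 is an illustrative value from the searched set {5, 10, ..., 45}, not the one selected by validation.

```python
import numpy as np

def quantize_volatility(prices, Q=10):
    """Map a price series to the 4-symbol alphabet of eq. (7)."""
    r = np.diff(np.log(prices))                    # daily returns r_t
    d = np.diff(r ** 2)                            # r_{t+1}^2 - r_t^2
    theta1, theta2 = np.percentile(d, [Q, 100 - Q])
    # 1: extreme down, 2: normal down, 3: normal up, 4: extreme up
    # (assumes theta1 < 0 <= theta2, which holds for Q < 50 on typical data)
    return np.select([d < theta1, d < 0, d < theta2], [1, 2, 3], default=4)
```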
Besides familiar problems with setting the construction parameter values, one-parameter schemes (like that presented in [4] and used here) operate only on small subsets of potential VLMMs. On data sets with a rather shallow memory structure, this can have a negative effect.

The third experiment extends the work of Ron, Singer and Tishby [5]. They tested classical MMs and VLMMs on the Bible. The alphabet is English letters and the blank character (27 symbols). The training set consisted of the Bible except for the book of Genesis. The test set was a portion of 236 characters from the book of Genesis. They set the maximal memory depth to L = 30 and constructed a VLMM with about 3000 contexts. Summarizing the results in [5], classical MMs of order 0, 1, 2 and 3 achieved negative log-likelihoods per character (NNL) of 0.853, 0.681, 0.560 and 0.555, respectively. The authors point out a huge difference between the number of states in MMs of order 2 and 3: 27^3 − 27^2 = 18954. The VLMM performed much better and achieved an NNL of 0.456. In our experiments, we set the maximal memory length to L = 30 (the same maximal memory length was used for VLMM construction in [5]). PFMs were constructed by vector quantizing a 5-dimensional (the alphabet has 27 symbols) spatial representation of 30-blocks appearing in the training set. On the test set, PFMs with 100, 500, 1000 and 3000 predictive contexts achieved an NNL of 0.622, 0.518, 0.510 and 0.435.

4 Conclusion

We presented a novel approach for building finite memory predictive models similar in spirit to variable memory length Markov models (VLMMs). Constructing a series of VLMMs is often a troublesome and highly time-consuming task requiring a lot of interactive steps. Our predictive models, prediction fractal machines (PFMs), can be constructed in a completely automatic and intuitive way - the number of codebook vectors in the vector quantization PFM construction step corresponds to the number of predictive contexts. We tested our model on three data sets with different memory and stochastic components. VLMMs excel over the classical MMs on the Feigenbaum sequence requiring deep prediction contexts. On this sequence, PFMs achieved the same performance as their rivals - VLMMs. On financial time series, PFMs significantly outperform the purely symbolic Markov models - MMs and VLMMs. On natural language Bible data, our PFM outperforms a VLMM of comparable size.

Acknowledgments

This work was supported by the Austrian Science Fund (FWF) within the research project "Adaptive Information Systems and Modeling in Economics and Management Science" (SFB 010) and the Slovak Academy of Sciences grant SAV 2/6018/99. The Austrian Research Institute for Artificial Intelligence is supported by the Austrian Federal Ministry of Science and Transport.

References

[1] P. Bühlmann. Model selection for variable length Markov chains and tuning the context algorithm. Annals of the Institute of Statistical Mathematics, (in press), 1999.
[2] J. Freund, W. Ebeling, and K. Rateitschak. Self-similar sequences and universal scaling of dynamical entropies. Physical Review E, 54(5), pp. 5561-5566, 1996.
[3] J. Noh, R. F. Engle, and A. Kane. Forecasting volatility and option prices of the S&P 500 index. Journal of Derivatives, pp. 17-30, 1994.
[4] D. Ron, Y. Singer, and N. Tishby. The power of amnesia. In Advances in Neural Information Processing Systems 6, pp. 176-183. Morgan Kaufmann, 1994.
[5] D. Ron, Y. Singer, and N. Tishby.
The power of amnesia. Machine Learning, 25, 1996.
[6] P. Tino. Spatial representation of symbolic sequences through iterative function systems. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 29(4), pp. 386-392, 1999.
[7] M. J. Weinberger, J. J. Rissanen, and M. Feder. A universal finite memory source. IEEE Transactions on Information Theory, 41(3), pp. 643-652, 1995.
Coastal Navigation with Mobile Robots

Nicholas Roy and Sebastian Thrun
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{nicholas.roy, sebastian.thrun}@cs.cmu.edu

Abstract

The problem that we address in this paper is how a mobile robot can plan in order to arrive at its goal with minimum uncertainty. Traditional motion planning algorithms often assume that a mobile robot can track its position reliably; however, in real world situations, reliable localization may not always be feasible. Partially Observable Markov Decision Processes (POMDPs) provide one way to maximize the certainty of reaching the goal state, but at the cost of computational intractability for large state spaces. The method we propose explicitly models the uncertainty of the robot's position as a state variable, and generates trajectories through the augmented pose-uncertainty space. By minimizing the positional uncertainty at the goal, the robot reduces the likelihood it becomes lost. We demonstrate experimentally that coastal navigation reduces the uncertainty at the goal, especially with degraded localization.

1 Introduction

For an operational mobile robot, it is essential to prevent becoming lost. Early motion planners assumed that a robot would never be lost - that a robot could always know its position via dead reckoning without error [7]. This assumption proved to be untenable due to the small and inevitable inconsistencies in actual robot motion; robots that rely solely on dead reckoning for their position estimates lose their position quickly. Mobile robots now perform position tracking using a combination of sensor data and odometry [2, 10, 5]. However, the robot's ability to track its position can vary considerably with the robot's position in the environment. Some parts of the environment may lack good features for localization [11]. Other parts of the environment can have a large number of dynamic features (for example, people) that can mislead the localization system. Motion planners rarely, if ever, take the robot's position tracking ability into consideration. As the robot's localization suffers, the likelihood that the robot becomes lost increases, and as a consequence, the robot is less likely to complete the given trajectory. Most localization systems therefore compensate by adding environment-specific knowledge to the localization system, or by adding additional sensing capabilities to the robot, to guarantee that the robot can complete every possible path. In general, however, such alterations to the position tracking abilities of the robot have limitations, and an alternative scheme must be used to ensure that the robot can navigate with maximum reliability.

The conventional planners represent one end of a spectrum of approaches (figure 1), in that a plan can be computed easily, but at the cost of not modelling localization performance. At the opposite end of the spectrum is the Partially Observable Markov Decision Process (POMDP).

Figure 1: The continuum of possible approaches to motion planning, from the robust but intractable POMDP, to the potentially failure-prone but real-time conventional planners. Coastal navigation lies in the middle of this spectrum.

POMDPs in a sense are the brass ring of planning with uncertainty; a POMDP policy will make exactly the right kind of compromise between conventional optimality considerations and certainty of achieving the goal state.
Many people have examined the use of POMDPs for mobile robot navigation [5, 6, 8]. However, computing a POMDP solution is computationally intractable (PSPACE-hard) for large state systems - a mobile robot operating in the real world often has millions of possible states. As a result, many of the mobile robot POMDP solutions have made simplifying assumptions about the world in order to reduce the state space size. Many of these assumptions do not scale to larger environments or robots. In contrast, our hypothesis is that only a small number of the dimensions of the uncertainty matter, and that we can augment the state with these dimensions to approximate a solution to the POMDP.

The coastal navigation model developed in this paper represents a tradeoff between robust trajectories and computational tractability, and is inspired by traditional navigation of ships. Ships often use the coasts of continents for navigation in the absence of better tools such as GPS, since being close to the land allows sailors to determine with high accuracy where they are. The success of this method results from coast lines containing enough information in their structure for accurate localization. By navigating sufficiently close to areas of the map that have high information content, the likelihood of getting lost can be minimized.

2 Modelling Uncertainty

The problem that we address in this paper is how a mobile robot can plan in order to arrive at its goal with minimum uncertainty. Throughout this discussion, we will be assuming a known map of the environment [9]. The position, x, of the robot is given as the location (x, y) and direction θ, defined over a space X = (X, Y, Θ). Our localization method is a grid-based implementation of Markov localization [3, 5]. This method represents the robot's belief in its current position using a 3-dimensional grid over X = (X, Y, Θ), which allows for a discrete approximation of arbitrary probability distributions. The probability that the robot has a particular pose x is given by the probability p(x).

State Augmentation. We can extend the state of the robot from the 3-dimensional pose space to an augmented pose-uncertainty space. We can represent the uncertainty of the robot's positional distribution as the entropy

H(P_X) = − ∫_X p(x) log p(x) dx.    (1)

We therefore represent the state space of the robot as the tuple

s = (x, y, θ, H(x, y, θ)) = (x, H(x)).

State Transitions. In order to construct a plan between two points in the environment, we need to be able to represent the effect of the robot's sensing and moving actions. The implementation of Markov localization provides the following equations for tracking the robot's pose from x to x':

p(x'|u) = ∫_X p(x'|x, u) p(x) dx    (2)
p(x'|z) = α p(z|x') p(x')    (3)

These equations are taken from [3, 12], where equation (2) gives the prediction phase of localization (after motion u), and equation (3) gives the update phase of localization (after receiving observation z); α is a normalizing constant. We extend these equations to the fourth dimension as follows:

p(s|u) = (p(x|u), H(p(x|u)))    (4)
p(s|z) = (p(x|z), H(p(x|z)))    (5)

3 Planning

Equations (4) and (5) provide a mechanism for tracking the robot's state, and in fact contain redundant information, since the extra state variable H(x) is also contained in the probability distribution p(x).
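A minimal sketch of this tracking machinery on a discretized grid, assuming the belief is stored as a flat probability vector and the motion model as a stochastic matrix; the discretization itself is a placeholder, not the paper's implementation.

```python
import numpy as np

def belief_entropy(belief):
    """Entropy of a discretized belief, eq. (1)."""
    p = belief[belief > 0]
    return -np.sum(p * np.log(p))

def predict(belief, motion_kernel):
    """Prediction phase, eq. (2); motion_kernel[i, j] = p(x_j | x_i, u)."""
    return motion_kernel.T @ belief

def update(belief, likelihood):
    """Measurement update, eq. (3); likelihood[i] = p(z | x_i)."""
    post = likelihood * belief
    return post / post.sum()
```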
However, in order to make the planning problem tractable, we cannot in fact maintain the probabilistic sensing model. To do so would put the planning problem firmly in the domain of POMDPs, with the associated computational intractability. Instead, we make a simplifying assumption: that the positional probability distribution of the robot can be represented at all times by a Gaussian centered at the mean x. This allows us to approximate the positional distribution with a single statistic, the entropy. In POMDP terms, we are using the assumption of Gaussian distributions to compress the belief space to a single dimension. We can now represent the positional probability distribution completely with the vector s, since the width of the Gaussian is represented by the entropy H(x).

More importantly, the simplifying assumption allows us to track the state of the robot deterministically. Although the state transitions are stochastic (as in equation (4)), the observations are not. At any point in time, the sensors identify the true state of the system, with some certainty given by H(p(x|z)). This allows us to compress the state transitions into a single rule:

p(s|u) = (p(x|u), H(p(x|u, z)))    (6)

The final position of the robot depends only on the motion command u and can be identified by sensing z. However, the uncertainty of the pose, H(p(x|u, z)), is a function not only of the motion command but also of the sensing. The simplifying assumption of Gaussian models is in general untenable for localization; however, we shall see that this assumption is sufficient for the purposes of motion planning.

One final modification must be made to the state transition rule. In a perfect world, it would be possible to predict exactly what observation would be made. However, it is exactly the stochastic and noisy nature of real sensors that generates planning difficulty, yet the update rule (6) assumes that it is possible to predict the measurement z at pose x. Deterministic prediction is not possible; however, it is possible to compute probabilities for sensor measurements, and thus generate an expected value for the entropy based on the probability distribution of observations Z, which leads to the final state transition rule:

p(s|u) = (p(x|u), E_z[H(p(x|u, z))])    (7)

where E_z[H(p(x|u, z))] represents the expected value of the entropy of the pose distribution over the space of possible sensor measurements.

With the transition rule in equation (7), we can now compute the transition probabilities for any particular state using a model of the robot's motion, a model of the robot's sensor and a map of the environment. The probability p(x|u) is given by a model of the robot's motion, and can be easily precomputed for each action u.
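The transition rule (7) can be illustrated on the same discretization as in the previous sketch; the sensor model p(z|x), given here as one row per measurement, is an assumed stand-in for the robot's actual laser model.

```python
import numpy as np

def augmented_step(belief, motion_kernel, sensor_model):
    """One transition of rule (7): returns (p(x|u), E_z[H(p(x|u,z))])."""
    prior = motion_kernel.T @ belief                # eq. (2): p(x|u)
    expected_H = 0.0
    for pz_given_x in sensor_model:                 # one row per measurement z
        pz = float(pz_given_x @ prior)              # p(z) under the prior
        if pz > 0.0:
            post = pz_given_x * prior / pz          # eq. (3): p(x|u,z)
            p = post[post > 0]
            expected_H -= pz * np.sum(p * np.log(p))  # accumulate E_z[H]
    return prior, expected_H
```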
If the robot should deviate from the expected trajectory for any reason (such as error in the motion, or due to low-level control constraints), interests of efficiency suggest precomputing actions for continuing to the goal, rather than continually replanning as these contingencies arise. Note that the motion planning problem as we have now phrased it can be viewed as the problem of computing the optimal policy for a given problem. The Markovian, stochastic nature of the transitions, coupled with the need to compute the optimal policy for all states, suggests a value iteration approach. Value iteration attempts to find the policy that maximizes the long-term reward [1,4]. The problem becomes one of finding the value function, J(s) which assigns a value to each state. The optimal action at each state can then be easily computed by determining the expected value of each action at each state, from the neighboring values. We use a modified form of Bellman's equations to give the value of state J (s) and policy as N J(Si) m:x[R(sd + C(s, u) + L p(Sj lSi, u) . J(Sj)] (8) j=1 N argmax[R(si) Il + C(s, u) + L p(Sj lSi, u) . J(Sj)] (9) j=1 By iterating equation (8), the value function iteratively settles to a converged value over all states. Iteration stops when no state value changes above some threshold value. In the above equations, R(sd is the immediate reward at state si, p(Sj lSi , u) is the transition probability from state si to state Sj, and C(s, u) is the cost of taking action u at state s. Note that the form of the equations is undiscounted in the traditional sense, however, the additive cost term plays a similar role in that the system is penalized for policies that take longer trajectories. The cost in general is simply the distance of one step in the given direction u, although the cost of travel close to obstacles is higher, in order to create a safety margin around obstacles. The cost of an action that would cause a collision is infinite, preventing such actions from being used. The immediate reward is localized only at the goal pose. However, the goal pose has a range of possible values for the uncertainty, creating a set of goal states, g. In order to reward policies that arrive at a goal state with a lower uncertainty, the reward is scaled linearly with goal state uncertainty. R( xd = {~ - H (s) S (; 9 otherwise (10) By implementing the value iteration given in the equations (8) and (9) in a dynamic program, we can compute the value function in O( nkcrid where n is the number of states in the environment (number of positions x number of entropy levels) and kcrit is the number of iterations to convergence. With the value function computed, we can generate the optimal action for any state in O(a) time, where a is the number of actions out of each state. Coastal Navigation with Mobile Robots 1047 4 Experimental Results Figure 2 shows the mobile robot, Minerva, used for this research. Minerva is a RWI B-18, and senses using a 360 0 field of view laser range finder at 10 increments. Figure 2: Minerva, the B-18 mobile robot used for this research, and an example environment map, the Smithsonian National Museum of American History. The black areas are the walls and obstacles. Note the large sparse areas in the center of the environment. Also shown in figure 2 is an example environment,the Smithsonian National Museum of American History. Minerva was used to generate this map, and operated as a tour-guide in the museum for two weeks in the summer of 1998. 
This museum has many of the features that make localization difficult -large open spaces, and many dynamic obstacles (people) that can mislead the sensors. Start POSitiOIl Startillg Positioll ~ ~~ ~ ~ . ..-A. (a) Conventional .,.~ I ~ (b) Coastal (c) Sensor Map Figure 3: Two examples in the museum environment. The left trajectory is given by a conventional, shortest-path planner. The middle trajectory is given by the coastal navigation planner. The black areas correspond to obstacles, the dark grey areas correspond to regions where sensor infonnation is available, the light grey areas to regions where no sensor infonnation is available. Figure 3 shows the effect of different planners in the sample environment. Panel (a) shows the trajectory of a conventional, shortest distance planner. Note that the robot moves di- 1048 N. Roy and S. Thrun rectly towards the goal. Panel (b) shows the trajectory given by the coastal planner. In both examples, the robot moves towards an obstacle, and relocalizes once it is in sensor range of the obstacle, before moving towards the goal. These periodic relocalizations are essential for the robot to arrive at the goal with minimum positional uncertainty, and maximum reliability. Panel (c) shows the sensor map of the environment. The black areas show obstacles and walls, and the light grey areas are where no information is available to the sensors, because all environmental features are outside the range of the sensors. The dark grey areas indicate areas where the information gain from the sensors is not zero; the darker grey the area, the better the information gain from the sensors. Positional Uncenainty at Goal 20 Conventional Navigation - coastal Navigation - 18 16 14 ~ 12 "e,., to t;I Jl 8 6 4 !......+ ..........+- ......... 2 0 ?2 0 0.5 I 1.5 2 2.5 .1 . . . . . . .I............... 3 3.5 4 4.5 J 5 5.5 Figure 4: The performance of the coastal navigation algorithm compared to the coastal motion planner. The graph depicts the entropy of the position probability distribution against the range of the laser sensor. Note that the coastal navigation dramatically improves the certainty of the goal position with shorter range laser sensing. Figure 4 is a comparison of the average positional certainty (computed as entropy of the positional probability) of the robot at its goal position, compared to the range of the laser range sensor. As the range of the laser range gets shorter, the robot can see fewer and fewer environmental features - this is essentially a way of reducing the ability of the robot to localize itself. The upper line is the performance of a conventional shortest-distance path planner, and the lower line is the coastal planner. The coastal planner has a lower uncertainty for all ranges of the laser sensor, and is substantially lower at shorter ranges, confirming that the coastal navigation has the most effect when the localization is worst. 5 Conclusion In this paper, we have described a particular problem of motion planning - how to guarantee that a mobile robot can reach its goal with maximum reliability. Conventional motion planners do not typically plan according to the ability of the localization unit in different areas of the environment, and thus make no claims about the robustness of the generated trajectory. In contrast, POMDPs provide the correct solution to the problem of robust trajectories, however, computing the solution to a POMDP is intractable for the size of the state space for typical mobile robot environments. 
We propose a motion planner with an augmented state space that represents positional uncertainty explicitly as an extra dimension. The motion planner then plans through pose-uncertainty space, to arrive at the goal pose with the lowest possible uncertainty. This can be seen as an approximation to a POMDP in which the multi-dimensional belief space is represented by a subset of statistics, in this case the entropy of the belief space. We have shown some experimental comparisons with a conventional motion planner. Not only did coastal navigation generate trajectories that provided a substantial improvement of the positional certainty at the goal compared to the conventional planner, but the improvement became more pronounced as the localization was degraded.

The model presented here, however, is not complete. The entire methodology hinges upon the assumption that the robot's probability distribution can be adequately represented by the entropy of the distribution. This assumption is valid if the distribution is restricted to a uni-modal Gaussian; however, most Markov localization methods that are based on this assumption fail, because multi-modal, non-Gaussian positional distributions are quite common for moving robots. Nonetheless, it may be that multiple uncertainty statistics along multiple dimensions (e.g., x and y) may do a better job of capturing the uncertainty sufficiently. It is a question for future work as to how many statistics can capture the uncertainty of a mobile robot, and under what environmental conditions.

Acknowledgments

The authors gratefully acknowledge the advice and collaboration of Tom Mitchell throughout the development of this work. Wolfram Burgard and Dieter Fox played an instrumental role in the development of earlier versions of this work, and their involvement and discussion of this new model is much appreciated. This work was partially funded by the Fonds pour la Formation de Chercheurs et l'Aide à la Recherche (FCAR).

References

[1] R. Bellman. Dynamic Programming. Princeton University Press, NJ, 1957.
[2] W. Burgard, D. Fox, D. Hennig, and T. Schmidt. Estimating the absolute position of a mobile robot using position probability grids. In AAAI, 1996.
[3] D. Fox, W. Burgard, and S. Thrun. Active Markov localization for mobile robots. Robotics and Autonomous Systems, 25(3-4), 1998.
[4] R. A. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.
[5] L. Kaelbling, A. R. Cassandra, and J. A. Kurien. Acting under uncertainty: Discrete Bayesian models for mobile-robot navigation. In IROS, 1996.
[6] S. Koenig and R. Simmons. The effect of representation and knowledge on goal-directed exploration with reinforcement learning algorithms. Machine Learning Journal, 22:227-250, 1996.
[7] J.-C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, 1991.
[8] S. Mahadevan and N. Khaleeli. Robust mobile robot navigation using partially-observable semi-Markov decision processes. 1999.
[9] H. P. Moravec and A. Elfes. High resolution maps from wide angle sonar. In ICRA, 1985.
[10] R. Sim and G. Dudek. Mobile robot localization from learned landmarks. In IROS, 1998.
[11] H. Takeda, C. Facchinetti, and J.-C. Latombe. Planning the motions of a mobile robot in a sensory uncertainty field. IEEE Trans. on Pattern Analysis and Machine Intelligence, 16(10), 1994.
[12] S. Thrun, D. Fox, and W. Burgard. A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning, 31, 1998.
Noisy Neural Networks and Generalizations

Hava T. Siegelmann
Industrial Eng. and Management, Mathematics
Technion - IIT, Haifa 32000, Israel
iehava@ie.technion.ac.il

Alexander Roitershtein
Mathematics
Technion - IIT, Haifa 32000, Israel
roiterst@math.technion.ac.il

Asa Ben-Hur
Industrial Eng. and Management
Technion - IIT, Haifa 32000, Israel
asa@tx.technion.ac.il

Abstract

In this paper we define a probabilistic computational model which generalizes many noisy neural network models, including the recent work of Maass and Sontag [5]. We identify weak ergodicity as the mechanism responsible for the restriction of the computational power of probabilistic models to definite languages, independent of the characteristics of the noise: whether it is discrete or analog, or whether it depends on the input or not, and independent of whether the variables are discrete or continuous. We give examples of weakly ergodic models including noisy computational systems with noise depending on the current state and inputs, aggregate models, and computational systems which update in continuous time.

1 Introduction

Noisy neural networks were recently examined, e.g., in [1, 4, 5]. It was shown in [5] that Gaussian-like noise reduces the power of analog recurrent neural networks to the class of definite languages, which are a strict subset of the regular languages. Let Σ be an arbitrary alphabet. L ⊆ Σ* is called a definite language if for some integer r any two words coinciding on the last r symbols are either both in L or neither in L. The ability of a computational system to recognize only definite languages can be interpreted as saying that the system forgets all its input signals, except for the most recent ones. This property is reminiscent of human short term memory.

"Definite probabilistic computational models" have their roots in Rabin's pioneering work on probabilistic automata [9]. He identified a condition on probabilistic automata with a finite state space which restricts them to definite languages. Paz [8] generalized Rabin's condition, applying it to automata with a countable state space, and calling it weak ergodicity [7, 8].
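Because a definite language depends only on the last r input symbols, a recognizer reduces to a suffix lookup. The sketch below makes this concrete; the particular language (words over {a, b} ending in "ab") is an invented example, not one from the paper.

```python
def make_definite_recognizer(r, accepted_suffixes):
    """Membership in a definite language depends only on the last r symbols."""
    def accepts(word):
        # words shorter than r are compared whole; the definition of
        # definiteness only constrains words of length >= r
        return word[-r:] in accepted_suffixes
    return accepts

recognize = make_definite_recognizer(2, {"ab"})
assert recognize("bbbab") and not recognize("abba")
```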
Interestingly, our model also includes: analog dynamical systems and neural models, which have no underlying deterministic rule but rather update probabilistic ally by using finite memory; neural networks with an unbounded number of components; networks of variable dimension (e.g., "recruiting networks"); hybrid systems that combine discrete and continuous variables; stochastic cellular automata; and stochastic coupled map lattices. We prove that all weakly ergodic Markov systems are stable, i.e. are robust with respect to architectural imprecisions and environmental noise. This property is desirable for both biological and artificial neural networks. This robustness was known up to now only for the classical discrete probabilistic automata [8, 9] . To enable practicality and ease in deciding weak ergodicity for given systems, we provide two conditions on the transition probability functions under which the associated computational system becomes weakly ergodic. One condition is based on a version of Doeblin's condition [5] while the second is motivated by the theory of scrambling matrices [7, 8]. In addition we construct various examples of weakly ergodic systems which include synchronous or asynchronous computational systems, and hybrid continuous and discrete time systems. 2 Markov Computational System (MCS) Instead of describing various types of noisy neural network models or stochastic dynamical systems we define a general abstract probabilistic model. When dealing with systems containing inherent elements of uncertainty (e.g., noise) we abandon the study of individual trajectories in favor of an examination of the flow of state distributions. The noise models we consider are homogeneous in time, in that they may depend on the input, but do not depend on time. The dynamics we consider is defined by operators acting in the space of measures, and are called Markov operators [6]. In the following we define the concepts which are required for such an approach. ? Let E be an arbitrary alphabet and be an abstract state space. We assume that a O'-algebra B (not necessarily Borel sets) of subsets of is given, thus (0, B) is a measurable space. Let us denote by P the set of probability measures on (0, B). This set is called a distribution space. ? Let E be a space of finite measures on (0, B) with the total variation norm defined 337 Noisy Neural Networks and Generalizations by Ilpllt = 11-'1(0) = AEB sup I-'(A) - (1) inf I-'(A). AEB Denote by C the set of all bounded linear operators acting from ? to itself. The 1I'lh- norm on ? induces a norm IIPlh sUPJjE'P IIPI-'III in C. An operator P E C is said to be a Markov operator if for any probability measure I-' E P, the image PI-' is again a probability measure. For a Markov operator, IIPIII = 1. = Definition 2.1 A Markov system is a set of Markov operators T = {Pu : u E E}. With any Markov system T, one can associate a probabilistic computational system. If the probability distribution on the initial states is given by the probability measure Po, then the distribution of states after n computational steps on inputs W Wo, WI, ... , W n , is defined as in [5, 8] Pwl-'o(A) = PWn ?? ? ?? Pw1Pwol-'0. (2) Let A and R be two subset of P with the property of having a p-gap = (3) inf III-' - viii = P > 0 JjEA,IIE'R The first set is called a set of accepting distributions and the second is called a set of rejecting distributions. 
A language $L \subseteq \Sigma^*$ is said to be recognized by the Markov computational system $M = (\Omega, \mathcal{A}, \mathcal{R}, \Sigma, \mu_0, T)$ if
$w \in L \iff P_w \mu_0 \in \mathcal{A}$ and $w \notin L \iff P_w \mu_0 \in \mathcal{R}$.
This model of language recognition, with a gap between the accepting and rejecting spaces, agrees with Rabin's model of probabilistic automata with isolated cut-point [9] and with the model of analog probabilistic computation [4, 5].

An example of a Markov system is a system of operators defined by TPFs on $(\Omega, \mathcal{B})$. Let $P_u(x, A)$ be the probability of moving from a state $x$ to the set of states $A$ upon receiving the input signal $u \in \Sigma$. The function $P_u(x, \cdot)$ is a probability measure for all $x \in \Omega$, and $P_u(\cdot, A)$ is a measurable function of $x$ for any $A \in \mathcal{B}$. In this case the operators are defined by
$P_u \mu(A) = \int_\Omega P_u(x, A)\, \mu(dx). \quad (4)$

3 Weakly Ergodic MCS

Let $P \in \mathcal{C}$ be a Markov operator. The real number $\delta'(P) = 1 - \frac{1}{2} \sup_{\mu, \nu \in \mathcal{P}} \|P\mu - P\nu\|_1$ is called the ergodicity coefficient of the Markov operator. We denote $\delta(P) = 1 - \delta'(P)$. It can be proven that for any two Markov operators $P_1, P_2$, $\delta(P_1 P_2) \le \delta(P_1)\,\delta(P_2)$. The ergodicity coefficient was introduced by Dobrushin [2] for the particular case of Markov operators induced by a TPF $P(x, A)$; in this special case $\delta'(P) = 1 - \sup_{x, y} \sup_{A} |P(x, A) - P(y, A)|$.

Weakly ergodic systems were introduced and studied by Paz in the particular case of a denumerable state space $\Omega$, where Markov operators are represented by infinite-dimensional matrices. The following definition makes no assumption on the associated measurable space.

Definition 3.1 A Markov system $\{P_u, u \in \Sigma\}$ is called weakly ergodic if for any $\alpha > 0$ there is an integer $r = r(\alpha)$ such that for any $w \in \Sigma^{\ge r}$ and any $\mu, \nu \in \mathcal{P}$,
$\tfrac{1}{2}\, \|P_w \mu - P_w \nu\|_1 \le \alpha, \quad \text{i.e.,}\ \delta(P_w) \le \alpha. \quad (5)$
An MCS $M$ is called weakly ergodic if its associated Markov system $\{P_u, u \in \Sigma\}$ is weakly ergodic. An MCS $M$ is weakly ergodic if and only if there is an integer $r$ and a real number $\alpha < 1$ such that $\|P_w \mu - P_w \nu\|_1 \le \alpha$ for any word $w$ of length $r$. Our most general characterization of weak ergodicity is as follows [11]:

Theorem 1 An abstract MCS $M$ is weakly ergodic if and only if there exists a multiplicative operator norm $\|\cdot\|_{**}$ on $\mathcal{C}$, equivalent to the norm $\|P\|_B = \sup_{\{\mu :\ \mu(\Omega) = 0\}} \frac{\|P\mu\|_1}{\|\mu\|_1}$, such that $\sup_{u \in \Sigma} \|P_u\|_{**} \le \epsilon$ for some number $\epsilon < 1$.

The next theorem connects the computational power of weakly ergodic MCS's with the class of definite languages, generalizing the results of Rabin [9], Paz [8, p. 175], and Maass and Sontag [5].

Theorem 2 Let $M$ be a weakly ergodic MCS. If a language $L$ can be recognized by $M$, then it is definite.

4 The Stability Theorem of Weakly Ergodic MCS

An important issue for any computational system is whether the machine is robust with respect to small perturbations of the system's parameters or under some external noise. The stability of language recognition by weakly ergodic MCS's under perturbations of their Markov operators was previously considered by Rabin [9] and Paz [7, 8]. We next state a general version of the stability theorem that is applicable to our wide notion of weakly ergodic systems. We first define two MCS's $M$ and $\tilde{M}$ to be similar if they share the same measurable space $(\Omega, \mathcal{B})$, alphabet $\Sigma$, and sets $\mathcal{A}$ and $\mathcal{R}$, and if they differ only in their associated Markov operators.

Theorem 3 Let $M$ and $\tilde{M}$ be two similar MCS's such that the first is weakly ergodic. Then there is $\alpha > 0$ such that if $\|P_u - \tilde{P}_u\|_1 \le \alpha$ for all $u \in \Sigma$, then the second is also weakly ergodic. Moreover, these two MCS's recognize exactly the same class of languages.
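In the finite-state case the Dobrushin ergodicity coefficient has a closed form, and the submultiplicativity $\delta(P_1 P_2) \le \delta(P_1)\,\delta(P_2)$ makes weak ergodicity easy to probe numerically. The sketch below (illustrative matrices and names of my choosing, not from the paper) computes $\delta$ for stochastic matrices and checks how it contracts along a word.

```python
import numpy as np

def dobrushin_delta(P: np.ndarray) -> float:
    """delta(P) = (1/2) * max_{x,y} || P[x,:] - P[y,:] ||_1 for row-stochastic P.
    delta(P) < 1 means P contracts the total-variation distance between any
    two distributions by at least that factor."""
    n = P.shape[0]
    return max(0.5 * np.abs(P[x] - P[y]).sum() for x in range(n) for y in range(n))

def word_operator(P_by_letter: dict, word: str) -> np.ndarray:
    """Compose the matrices for a word (row-vector action): P_w = P_{w_0} P_{w_1} ..."""
    n = next(iter(P_by_letter.values())).shape[0]
    M = np.eye(n)
    for u in word:
        M = M @ P_by_letter[u]
    return M

P = {"a": np.array([[0.9, 0.1], [0.2, 0.8]]),
     "b": np.array([[0.5, 0.5], [0.4, 0.6]])}

for w in ["a", "ab", "abab", "abababab"]:
    d = dobrushin_delta(word_operator(P, w))
    print(f"delta(P_{w}) = {d:.4f}")   # shrinks roughly geometrically in |w|
```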
Corollary 3.1 Let $M$ and $\tilde{M}$ be two similar MCS's. Suppose that the first is weakly ergodic. Then there exists $\beta > 0$ such that if $\sup_{A \in \mathcal{B}} |P_u(x, A) - \tilde{P}_u(x, A)| \le \beta$ for all $u \in \Sigma$, $x \in \Omega$, the second is also weakly ergodic. Moreover, these two MCS's recognize exactly the same class of languages.

A mathematically deeper result, which implies Theorem 3, was proven in [11]:

Theorem 4 Let $M$ and $\tilde{M}$ be two similar MCS's, such that the first is weakly ergodic and the second is arbitrary. Then for any $\alpha > 0$ there exists $\epsilon > 0$ such that $\|P_u - \tilde{P}_u\|_1 \le \epsilon$ for all $u \in \Sigma$ implies $\|P_w - \tilde{P}_w\|_1 \le \alpha$ for all words $w \in \Sigma^*$.

Theorem 3 follows from Theorem 4. To see this, one can choose any $\alpha < \rho$ in Theorem 4 and observe that $\|P_w - \tilde{P}_w\|_1 \le \alpha < \rho$ implies that the word $w$ is accepted or rejected by $\tilde{M}$ in accordance with whether it is accepted or rejected by $M$.

5 Conditions on the Transition Probabilities

This section discusses practical conditions for weakly ergodic MCS's in which the Markov operators $P_u$ are induced by transition probability functions as in (4). Clearly, a simple sufficient condition for an MCS to be weakly ergodic is $\sup_{u \in \Sigma} \delta(P_u) \le 1 - c$ for some $c > 0$.

Maass and Sontag used Doeblin's condition to prove the computational power of noisy neural networks [5]. Although the networks in [5] constitute a very particular case of weakly ergodic MCS's, Doeblin's condition is applicable also to our general model. The following version of Doeblin's condition was given by Doob [3]:

Definition 5.1 [3] Let $P(x, A)$ be a TPF on $(\Omega, \mathcal{B})$. We say that it satisfies Doeblin's condition $D_0$ if there exist a constant $c$ and a probability measure $\varphi$ on $(\Omega, \mathcal{B})$ such that $P^n(x, A) \ge c\,\varphi(A)$ for any set $A \in \mathcal{B}$.

If an MCS $M$ is weakly ergodic, then all its associated TPFs $P_w(x, A)$, $w \in \Sigma^*$, must satisfy $D_0$ for some $n = n(w)$. Doob has proved [3, p. 197] that if $P(x, A)$ satisfies Doeblin's condition $D_0$ with constant $c$, then for any $\mu, \nu \in \mathcal{P}$, $\|P\mu - P\nu\|_1 \le (1 - c)\,\|\mu - \nu\|_1$, i.e., $\delta(P) \le 1 - c$. This leads us to the following definition.

Definition 5.2 Let $M$ be an MCS. We say that the space $\Omega$ is small with respect to $M$ if there exists an $m > 0$ such that all associated TPFs $P_w(x, A)$, $w \in \Sigma^m$, satisfy Doeblin's condition $D_0$ uniformly with the same constant $c$, i.e., $P_w(x, A) \ge c\,\varphi_w(A)$ for all $w \in \Sigma^m$.

The following theorem strengthens the result of Maass and Sontag [5].

Theorem 5 Let $M$ be an MCS. If the space $\Omega$ is small with respect to $M$, then $M$ is weakly ergodic, and it can recognize only definite languages.

This theorem provides a convenient method for checking weak ergodicity of a given TPF. The theorem implies that it is sufficient to execute the following simple check: choose any integer $n$, and then verify that for every state $x$ and all input strings $w \in \Sigma^n$, the "absolutely continuous" part of all TPFs $P_w$, $w \in \Sigma^n$, is uniformly bounded from below:
$\psi_w(\{z : p_w(x, z) \ge c_1\}) \ge c_2, \quad (6)$
where $p_w(x, \cdot)$ is the density of the absolutely continuous component of $P_w(x, \cdot)$ with respect to $\psi_w$, and $c_1, c_2$ are positive numbers.

Most practical systems can be defined by null-preserving TPFs (including, for example, the systems in [5]). For these systems we provide (Theorem 6) a sufficient and necessary condition in terms of density kernels. A TPF $P_u(x, A)$, $u \in \Sigma$, is called null preserving with respect to a probability measure $\rho \in \mathcal{P}$ if it has a density with respect to $\rho$, i.e., $P_u(x, A) = \int_A p_u(x, z)\, \rho(dz)$.
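For a finite state space, Doeblin's condition can be checked constructively: the best constant is $c = \sum_z \min_x P(x, z)$, with $\varphi$ the normalized column minima. The sketch below (illustrative matrix and names of my choosing) computes this minorization constant and the contraction bound $\delta(P) \le 1 - c$ implied by Doob's result.

```python
import numpy as np

def doeblin_constant(P: np.ndarray):
    """Largest c with P[x, :] >= c * phi for all rows x, where phi is a
    probability vector: take column-wise minima and normalize."""
    col_min = P.min(axis=0)          # pointwise lower envelope of the rows
    c = col_min.sum()                # total mass of the envelope
    phi = col_min / c if c > 0 else None
    return c, phi

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

c, phi = doeblin_constant(P)
print(f"Doeblin constant c = {c:.2f}")        # here c = 0.1 + 0.3 + 0.1 = 0.5
print(f"minorizing measure phi = {phi}")
print(f"so delta(P) <= 1 - c = {1 - c:.2f}")  # contraction bound from Doob's result
```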
It is not hard to see that the null-preserving property per letter $u \in \Sigma$ implies that all TPFs $P_w(x, A)$ for words $w \in \Sigma^*$ are null preserving as well. In this case $\delta(P_u) = 1 - \inf_{x, y} \int_\Omega \min\{p_u(x, z), p_u(y, z)\}\, \rho(dz)$, and we have:

Theorem 6 Let $M$ be an MCS defined by null-preserving transition probability functions $P_u$, $u \in \Sigma$. Then $M$ is weakly ergodic if and only if there exists $n$ such that
$\inf_{w \in \Sigma^n}\, \inf_{x, y} \int_\Omega \min\{p_w(x, z), p_w(y, z)\}\, \rho(dz) > 0.$

A similar result was previously established by Paz [7, 8] for the case of a denumerable state space $\Omega$. This theorem allows us to treat examples which are not covered by Theorem 5. For example, suppose that the space $\Omega$ is not small with respect to an MCS $M$, but for some $n$ and any $w \in \Sigma^n$ there exists a measure $\psi_w$ on $(\Omega, \mathcal{B})$ with the property that for any pair of states $x, y \in \Omega$,
$\psi_w(\{z : \min\{p_w(x, z), p_w(y, z)\} \ge c_1\}) \ge c_2, \quad (7)$
where $p_w(x, \cdot)$ is the density of $P_w(x, \cdot)$ with respect to $\psi_w$, and $c_1, c_2$ are positive numbers. This condition may hold even if there is no $z$ such that $p_w(x, z) \ge c_1$ for all $x \in \Omega$.

6 Examples of Weakly Ergodic Systems

1. The Synchronous Parallel Model. Let $(\Omega_i, \mathcal{B}_i)$, $i = 1, 2, \dots, N$, be a collection of measurable spaces. Define $\Omega^i = \prod_{j \ne i} \Omega_j$ and $\mathcal{B}^i = \prod_{j \ne i} \mathcal{B}_j$; then $(\Omega^i, \mathcal{B}^i)$ are measurable spaces. Define also $\Sigma_i = \Sigma \times \Omega^i$, and let $T_i = \{P^i_{x^i, u}(x_i, A_i) : (x^i, u) \in \Sigma_i\}$ be given stochastic kernels. Each set $T_i$ defines an MCS $M_i$. We can define an aggregate MCS by setting $\Omega = \prod_i \Omega_i$, $\mathcal{B} = \prod_i \mathcal{B}_i$, $\mathcal{S} = \prod_i \mathcal{S}_i$, $\mathcal{R} = \prod_i \mathcal{R}_i$, and
$P_u(x, A) = \prod_{i=1}^{N} P^i_{x^i, u}(x_i, A_i). \quad (8)$
This describes a model of $N$ noisy computational systems that update in synchronous parallelism. The state of the whole aggregate is a vector of the states of the individual components, and each component receives the states of all other components as part of its input.

Theorem 7 [12] Let $M$ be an MCS defined by equation (8). It is weakly ergodic if at least one set of operators $T_i$ is such that $\delta(P^i_{x^i, u}) \le 1 - c$ for any $u \in \Sigma$, $x^i \in \Omega^i$, and some positive number $c$.

2. The Asynchronous Parallel Model. In this model, at every step only one component is activated. Suppose that a collection of $N$ similar MCS's $M_i$, $i = 1, \dots, N$, is given. Consider a probability measure $e = \{e_1, \dots, e_N\}$ on the set $K = \{1, \dots, N\}$. Assume that in each computational step only one MCS is activated. The current state of the whole aggregate is represented by the state of its active component. Assume also that the probability of a computational system $M_i$ being activated is time-independent and is given by $\mathrm{Prob}(M_i) = e_i$. The aggregate system is then described by the stochastic kernels
$P_u(x, A) = \sum_{i=1}^{N} e_i\, P^i_u(x, A). \quad (9)$

Theorem 8 [12] Let $M$ be an MCS defined by formula (9). It is weakly ergodic if at least one of the sets of operators $\{P^1_u\}, \dots, \{P^N_u\}$ is weakly ergodic.

3. Hybrid Weakly Ergodic Systems. We now present a hybrid weakly ergodic computational system consisting of both continuous and discrete elements. The evolution of the system is governed by a differential equation, while its input arrives at discrete times. Let $\Omega = \mathbb{R}^n$, and consider a collection of differential equations
$\dot{x}_u(s) = \psi_u(x_u(s)), \quad u \in \Sigma,\ s \in [0, \infty). \quad (10)$
Suppose that $\psi_u(x)$ is sufficiently smooth to ensure the existence and uniqueness of solutions of Equation (10) for $s \in [0, 1]$ and for any initial condition. Consider a computational system which receives an input $u(t)$ at discrete times $t_0, t_1, t_2, \dots$
In the interval $t \in [t_i, t_{i+1}]$ the behavior of the system is described by Equation (10), with $s = t - t_i$. A random initial condition for the time $t_n$ is defined by
$x_{u(t_n)}(0) \sim P_{u(t_n)}\big(x_{u(t_{n-1})}(1),\ \cdot\big), \quad (11)$
where $x_{u(t_{n-1})}(1)$ is the state of the system after the previously completed computation, and $P_u(x, A)$, $u \in \Sigma$, is a family of stochastic kernels on $\Omega \times \mathcal{B}$. This describes a system which receives inputs at discrete instants of time; the input letters $u \in \Sigma$ cause random perturbations of the state $x_{u(t-1)}(1)$, governed by the transition probability functions $P_{u(t)}(x_{u(t-1)}, A)$. At all other times the system is a noise-free continuous computational system which evolves according to Equation (10).

Let $\Omega = \mathbb{R}^n$, let $x_0 \in \Omega$ be a distinguished initial state, and let $\mathcal{S}$ and $\mathcal{R}$ be two subsets of $\Omega$ with the property of having a $\rho$-gap: $\mathrm{dist}(\mathcal{S}, \mathcal{R}) = \inf_{x \in \mathcal{S},\ y \in \mathcal{R}} \|x - y\| = \rho > 0$. The first set is called the set of accepting final states and the second the set of rejecting final states. We say that the hybrid computational system $M = (\Omega, \Sigma, x_0, \psi_u, \mathcal{S}, \mathcal{R})$ recognizes $L \subseteq \Sigma^*$ if for all $w = w_0 \cdots w_n \in \Sigma^*$ and an end letter $\$ \notin \Sigma$ the following holds: $w \in L \iff \mathrm{Prob}(x_{w\$}(1) \in \mathcal{S}) > \frac{1}{2} + c$, and $w \notin L \iff \mathrm{Prob}(x_{w\$}(1) \in \mathcal{R}) > \frac{1}{2} + c$.

Theorem 9 [12] Let $M$ be a hybrid computational system. It is weakly ergodic if its set of evolution operators $T = \{P_u : u \in \Sigma\}$ is weakly ergodic.

References

[1] Casey, M., The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction, Neural Computation, 8, 1996, pp. 1135-1178.
[2] Dobrushin, R. L., Central limit theorem for nonstationary Markov chains I, II, Theor. Probability Appl., vol. 1, 1956, pp. 65-80, 298-383.
[3] Doob, J. L., Stochastic Processes, John Wiley and Sons, Inc., 1953.
[4] Maass, W. and Orponen, P., On the effect of analog noise in discrete time computation, Neural Computation, 10(5), 1998, pp. 1071-1095.
[5] Maass, W. and Sontag, E., Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages, Neural Computation, 11, 1999, pp. 771-782.
[6] Neveu, J., Mathematical Foundations of the Calculus of Probability, Holden Day, San Francisco, 1964.
[7] Paz, A., Ergodic theorems for infinite probabilistic tables, Ann. Math. Statist., vol. 41, 1970, pp. 539-550.
[8] Paz, A., Introduction to Probabilistic Automata, Academic Press, Inc., London, 1971.
[9] Rabin, M., Probabilistic automata, Information and Control, vol. 6, 1963, pp. 230-245.
[10] Siegelmann, H. T., Neural Networks and Analog Computation: Beyond the Turing Limit, Birkhauser, Boston, 1999.
[11] Siegelmann, H. T. and Roitershtein, A., On weakly ergodic computational systems, 1999, submitted.
[12] Siegelmann, H. T., Roitershtein, A., and Ben-Hur, A., On noisy computational systems, Discrete Applied Mathematics, 1999, accepted.
Learning Informative Statistics: A Nonparametric Approach

John W. Fisher III, Alexander T. Ihler, and Paul A. Viola
Massachusetts Institute of Technology, 77 Massachusetts Ave., 35-421, Cambridge, MA 02139
{fisher,ihler,viola}@ai.mit.edu

Abstract: We discuss an information-theoretic approach for categorizing and modeling dynamic processes. The approach can learn a compact and informative statistic which summarizes past states to predict future observations. Furthermore, the uncertainty of the prediction is characterized nonparametrically by a joint density over the learned statistic and the present observation. We discuss the application of the technique to both noise-driven dynamical systems and random processes sampled from a density which is conditioned on the past. In the first case we show results in which both the dynamics of a random walk and the statistics of the driving noise are captured. In the second case we present results in which a summarizing statistic is learned on noisy random telegraph waves with differing dependencies on past states. In both cases the algorithm yields a principled approach for discriminating processes with differing dynamics and/or dependencies. The method is grounded in ideas from information theory and nonparametric statistics.

1 Introduction

Noisy dynamical processes abound in the world: human speech, the frequency of sun spots, and the stock market are common examples. These processes can be difficult to model and categorize because current observations depend on the past in complex ways. Classical models come in two sorts: those that assume that the dynamics are linear and the noise is Gaussian (e.g., Wiener filters); and those that assume that the dynamics are discrete (e.g., HMMs). These approaches are widely popular because they are tractable and well understood. Unfortunately there are many processes for which the underlying theoretical assumptions of these models are false. For example, we may wish to analyze a system with linear dynamics and non-Gaussian noise, or we may wish to model a system with an unknown number of discrete states.

We present an information-theoretic approach for analyzing stochastic dynamic processes which can model simple processes like those mentioned above, while retaining the flexibility to model a wider range of more complex processes. The key insight is that we can often learn a simplifying informative statistic of the past from samples, using nonparametric estimates of both entropy and mutual information. Within this framework we can predict future states and, of equal importance, characterize the uncertainty accompanying those predictions. This nonparametric model is flexible enough to describe uncertainty which is more complex than second-order statistics. In contrast, techniques which use squared prediction error to drive learning are focused on the mode of the distribution. Taking an example from financial forecasting: while the most likely sequence of pricing events is of interest, one would also like to know the accompanying distribution of price values (i.e., even if the most likely outcome is appreciation in the price of an asset, knowledge of a lower, but not insignificant, probability of depreciation is also valuable). Towards that end we describe an approach that allows us to simultaneously learn the dependencies of the process on the past as well as the uncertainty of future states.
Our approach is novel in that we fold in concepts from information theory, nonparametric statistics, and learning. In the two types of stochastic processes we will consider, the challenge is to summarize the past in an efficient way. In the absence of a known dynamical or probabilistic model, can we learn an informative statistic (ideally a sufficient statistic) of the past which minimizes our uncertainty about future states? In the classical linear state-space approach, uncertainty is characterized by mean squared error (MSE), which implicitly assumes Gaussian statistics. There are, however, linear systems with interesting behavior due to non-Gaussian statistics which violate the assumptions underlying MSE. There are also nonlinear systems and purely probabilistic processes which exhibit complex behavior and are poorly characterized by mean squared error and/or the assumption of Gaussian noise. Our approach is applicable to both types of processes. Because it is based on nonparametric statistics, we characterize the uncertainty of predictions in a very general way: by a density over possible future states. Consequently the resulting system captures both the dynamics of the system (through a parameterization) and the statistics of the driving noise (through nonparametric modeling). The model can then be used to classify new signals and make predictions about the future.

2 Learning from Stationary Processes

In this paper we will consider two related types of stochastic processes, depicted in figure 1. These processes differ in how current observations are related to the past. The first type of process, described by the following set of equations, is a discrete-time dynamical (possibly nonlinear) system:
$x_k = G(\{x_{k-1}\}^N; w_g) + \eta_k; \qquad \{x_k\}^N = \{x_k, \dots, x_{k-(N-1)}\}, \quad (1)$
where $x_k$, the state of the process at time $k$, is a function of the $N$ previous states and the present value of $\eta$. In general the sequence $\{x_k\}$ is not stationary (in the strict sense); however, under fairly mild conditions on $\{\eta_k\}$, namely that $\{\eta_k\}$ is a sequence of i.i.d. random variables (which we will always assume to be true), the sequence
$\epsilon_k = x_k - G(\{x_{k-1}\}^N; w_g) \quad (2)$
is stationary. Often termed an innovation sequence, for our purposes the stationarity of (2) will suffice. This leads to a prediction framework for estimating the dynamical parameters, $w_g$, of the system, to which we will adjoin a nonparametric characterization of uncertainty. The second type of process we consider is described by a conditional probability density:
$x_k \sim p(x_k \mid \{x_{k-1}\}^N). \quad (3)$
In this case it is only the conditional statistics of $\{x_k\}$ that we are concerned with, and they are, by definition, constant.

[Figure 1: Two related systems: (a) dynamical system driven by stationary noise and (b) probabilistic system dependent on the finite past. The dotted box indicates the source of the stochastic process, while the solid box indicates the learning algorithm.]

3 Learning Informative Statistics with Nonparametric Estimators

We propose to determine the system parameters by minimizing the entropy of the error residuals for systems of type (a). Parametric entropy optimization approaches have been proposed (e.g., [4]); the novelty of our approach, however, is that we estimate entropy nonparametrically.
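For concreteness, a minimal version of such a nonparametric entropy estimate can be written as follows. This is my own sketch under an assumed resubstitution-style estimator; the paper's exact functional follows [3, 5], and the bandwidth here is an arbitrary choice.

```python
import numpy as np

def parzen_density(eps: np.ndarray, query: np.ndarray, h: float) -> np.ndarray:
    """Parzen (kernel) density estimate with a Gaussian kernel of width h,
    evaluated at the query points."""
    diffs = query[:, None] - eps[None, :]
    kernel = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernel.mean(axis=1)

def entropy_estimate(eps: np.ndarray, h: float = 0.25) -> float:
    """Resubstitution entropy estimate: H ~= -(1/N) sum_i log p_hat(eps_i),
    the negative average log of the Parzen estimate at the samples."""
    p_hat = parzen_density(eps, eps, h)
    return -np.mean(np.log(p_hat))

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, size=500)   # stand-in for the innovations eps_k
print(f"estimated H = {entropy_estimate(residuals):.3f}")
# for a unit Gaussian the true value is 0.5 * log(2*pi*e) ~ 1.419
```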
The differential entropy integral is approximated using a function of the Parzen kernel density estimator [5] (in all experiments we use the Gaussian kernel). It can be shown that minimizing the entropy of the error residuals is equivalent to maximizing their likelihood [1]. In this light, the proposed criterion seeks the maximum likelihood estimate of the system parameters using a nonparametric description of the noise density. Consequently, we solve for the system parameters and the noise density jointly.

While there is no explicit dynamical system in the second system type, we do assume that the conditional statistics of the observed sequence are constant (or at worst slowly changing, for an on-line learning algorithm). In this case we wish to minimize the uncertainty of predictions of future samples by summarizing information from the past. The challenge is to do so efficiently via a function of recent samples. Ideally we would like to find a sufficient statistic of the past; however, without an explicit description of the density we opt instead for an informative statistic. By informative statistic we simply mean one which reduces the conditional entropy of future samples. If the statistic were sufficient, then the mutual information would be at a maximum [1]. As in the previous case, we propose to find such a statistic by maximizing the nonparametric mutual information, as defined by
$w_f^* = \arg\max_{w_f}\ I\big(x_k,\ F(\{x_{k-1}\}^N; w_f)\big) \quad (5)$
$\phantom{w_f^*} = \arg\max_{w_f}\ \big[H(x_k) + H\big(F(\{x_{k-1}\}^N; w_f)\big) - H\big(x_k, F(\{x_{k-1}\}^N; w_f)\big)\big] \quad (6)$
$\phantom{w_f^*} = \arg\min_{w_f}\ H\big(x_k \mid F(\{x_{k-1}\}^N; w_f)\big). \quad (7)$
By equation (6) this is equivalent to optimizing the joint and marginal entropies (which we do in practice) or, by equation (7), to minimizing the conditional entropy.

We have previously presented two related methods for incorporating kernel-based density estimators into an information-theoretic learning framework [2, 3]. We chose the method of [3] because it provides an exact gradient of an approximation to entropy, but, more importantly, it can be converted into an implicit error function, thereby reducing computational cost.

4 Distinguishing Random Walks: An Example

In a random walk the feedback function is $G(\{x_{k-1}\}^1) = x_{k-1}$. The noise is assumed to be independent and identically distributed (i.i.d.). Although the sequence $\{x_k\}$ is non-stationary, the increments $(x_k - x_{k-1})$ are stationary. In this context, estimating the statistics of the residuals allows for discrimination between two random walk processes with differing noise densities. Furthermore, as we will demonstrate empirically, even when one of the processes is driven by Gaussian noise (an implicit assumption of the MMSE criterion), such knowledge may not be sufficient to distinguish one process from another.

Figure 2 shows two random walk realizations and their associated noise densities (solid lines). One is driven by Gaussian noise ($\eta_k \sim N(0, 1)$), while the other is driven by a bi-modal mixture of Gaussians ($\eta_k \sim \frac{1}{2}N(0.95, 0.3) + \frac{1}{2}N(-0.95, 0.3)$); note that both densities are zero-mean and unit-variance. During learning, the process was modeled as fifth-order auto-regressive (AR(5)). One hundred samples were drawn from a realization of each type, and the AR parameters were estimated using the standard MMSE approach and the approach described above. With regard to parameter estimation, both methods (as expected) yield essentially the same parameters, with the first coefficient near unity and the remaining coefficients near zero.
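The objective in equations (5)-(7) can be prototyped with the same kernel machinery. The sketch below is an illustration only, not the implicit-error-function method of [3]; all names and bandwidths are assumptions. It scores a candidate statistic by $\hat I = \hat H(x_k) + \hat H(F) - \hat H(x_k, F)$.

```python
import numpy as np

def kde_log_density(samples: np.ndarray, h: float) -> np.ndarray:
    """Parzen log-density at the sample points (d-dimensional Gaussian kernel)."""
    n, d = samples.shape
    diffs = samples[:, None, :] - samples[None, :, :]
    sq = (diffs ** 2).sum(axis=2)
    log_k = -0.5 * sq / h**2 - d * np.log(h * np.sqrt(2 * np.pi))
    m = log_k.max(axis=1, keepdims=True)             # log-sum-exp for stability
    return (m[:, 0] + np.log(np.exp(log_k - m).sum(axis=1))) - np.log(n)

def entropy(samples: np.ndarray, h: float = 0.3) -> float:
    return -np.mean(kde_log_density(samples, h))

def mutual_information(x: np.ndarray, f: np.ndarray) -> float:
    """I(x, F) = H(x) + H(F) - H(x, F), each term estimated nonparametrically."""
    joint = np.column_stack([x, f])
    return entropy(x[:, None]) + entropy(f[:, None]) - entropy(joint)

rng = np.random.default_rng(1)
x_prev = rng.normal(size=400)
x_next = 0.8 * x_prev + 0.2 * rng.normal(size=400)   # toy dependent pair
print(f"I(x_k, F) ~= {mutual_information(x_next, x_prev):.3f}")  # clearly > 0 here
```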
We are interested in the ability to distinguish one process from another. As mentioned, the current approach jointly estimates the parameters of the system as well as the density of the noise. The nonparametric estimates are shown in figure 2 (dotted lines). These estimates are then used to compute the accumulated log-likelihood $L(\epsilon_k) = \sum_{i=1}^{k} \log p(\epsilon_i)$ of the residual sequence ($\epsilon_k \approx \eta_k$) under the known and learned densities (figure 3). It is striking (but not surprising) that $L(\epsilon_k)$ of the bi-modal mixture under the Gaussian model (dashed lines, top) does not differ significantly from that of the Gaussian-driven increments process (solid lines, top). The explanation follows from the fact that
$\tfrac{1}{k}\, L(\epsilon_k) \to \int p_f(\epsilon) \log p(\epsilon)\, d\epsilon = -\big(H(p_f) + D(p_f \,\|\, p)\big), \quad (8)$
where $p_f(\epsilon)$ is the true density of $\epsilon$ (bi-modal), $p(\epsilon)$ is the assumed density of the likelihood test (unit-variance Gaussian), and $D(\cdot \| \cdot)$ is the Kullback-Leibler divergence [1]. In this case $D(p_f(\epsilon) \| p(\epsilon))$ is relatively small (not true for $D(p(\epsilon) \| p_f(\epsilon))$), and $H(p_f(\epsilon))$ is less than the entropy of the unit-variance Gaussian (for fixed variance, the Gaussian density has maximum entropy). The consequence is that the likelihood test under the Gaussian assumption does not reliably distinguish the two processes. The likelihood test under the bi-modal density or its nonparametric estimate (figure 3, bottom) does distinguish the two.

[Figure 2: Random walk examples (left); comparison of known to learned densities (right).]

[Figure 3: $L(\epsilon_k)$ under known models (left) as compared to learned models (right).]

The method described is not limited to linear dynamic models. It can certainly be used for nonlinear models, so long as the dynamics can be well approximated by differentiable functions. Examples for multi-layer perceptrons are described in [3].

5 Learning the Structure of a Noisy Random Telegraph Wave

A noisy random telegraph wave (RTW) can be described by figure 1(b). Our goal is not to demonstrate that we can analyze random telegraph waves, but rather that we can robustly learn an informative statistic of the past for such a process. We define a noisy random telegraph wave as a sequence $x_k \sim N(\mu_k, \sigma)$, where $\mu_k$ is binomially distributed:
$\mu_k \in \{\pm\mu\}, \qquad P\{\mu_k = -\mu_{k-1}\} = a \cdot \frac{\left|\sum_{i=1}^{N} x_{k-i}\right|}{\sum_{i=1}^{N} \left|x_{k-i}\right|}, \quad (9)$
$N(\mu_k, \sigma)$ is Gaussian, and $a < 1$. This process is interesting because the parameters are random functions of a nonlinear combination of the set $\{x_k\}^N$. Depending on the value of $N$, we observe different switching dynamics.
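Equation (9) is badly garbled in this scan; under my reading of it (flip probability proportional to the sign-coherence of the last $N$ samples, which reproduces the qualitative behavior described next), a generator looks like the sketch below. Treat the functional form and the constants as assumptions.

```python
import numpy as np

def noisy_rtw(T: int, N: int, a: float = 0.9, mu: float = 1.0,
              sigma: float = 0.2, seed: int = 0) -> np.ndarray:
    """Generate a noisy random telegraph wave per one reading of equation (9):
    the mean mu_k flips sign with probability
        a * |sum of last N samples| / (sum of |last N samples|),
    so a long run of same-sign samples makes a switch increasingly likely,
    and larger N yields longer dwell times after each flip."""
    rng = np.random.default_rng(seed)
    x = list(rng.normal(mu, sigma, size=N))   # warm-up samples around +mu
    mu_k = mu
    for _ in range(T):
        recent = np.array(x[-N:])
        p_switch = a * abs(recent.sum()) / np.abs(recent).sum()
        if rng.random() < p_switch:
            mu_k = -mu_k
        x.append(rng.normal(mu_k, sigma))
    return np.array(x[N:])

wave = noisy_rtw(T=1000, N=20)   # compare with noisy_rtw(T=1000, N=4)
```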
This tests for situations in which the depth is both under-specified and over-specified (as well as perfectly Learning Informative Statistics: A Nonparametric Approach 905 Figure 5: Comparison of Wiener filter (top) non parametric approach (bottom) for synthesis . ...... ~ ? ? P Figure 6: Informative Statistics for noisy random telegraph waves. M = 25 trained on N equal 4 (left) and 20 (right). specified). We will denote FN({Xk}M) as the statistic which was trained on an RTW(N) process with a memory depth of M. Since we implicitly learn a joint density over (Xk, FN( {Xk} M)) synthesis is possible by sampling from that density. Figure 5 compares synthesis using the described method (bottom) to a Wiener filter (top) estimated over the same data. The results using the information theoretic approach (bottom) preserve the structure of the RTW while the Wiener filter results do not. This was achieved by collapsing the information of past samples into a single statistic (avoiding high dimension density estimation). Figure 6 shows the joint density over (Xk, FN ({Xk} M)) for N = {4, 20} and M = 25. We see that the estimated densities are not separable and by virtue of this fact the learned statistic conveys information about the future. Figure 7 shows results from 100 monte carlo trials. In this case the depth of the statistic is matched to the process. Each plot shows the accumulated conditional log likelihood (L(f.k) = E:=1 10gp(XiIFN( {Xk-l} M)) under the learned statistic with error bars. Figure 8 shows similar results after varying the memory depth M = {4, 5,15,20, 25} of the statistic. The figures illustrate robustness to choice of memory depth M. This is not to say that memory depth doesn't matter; that is, there must be some information to exploit, but the empirical results indicate that useful information was extracted. i 6 Conclusions We have described a nonpararnetric approach for finding informative statistics. The approach is novel in that learning is derived from nonpararnetric estimators of entropy and mutual information. This allows for a means by which to 1) efficiently summarize the past, 2) predict the future and 3) characterize the uncertainty of those predictions beyond second-order statistics. Futhermore, this was accomplished without the strong assumptions accompanying parametric approaches. 1. W Fisher Ill, A. T. Ihler and P. A. Viola 906 Figure 7: Conditional L(?k). Solid line indicates RTW(N=20) while dashed line indicates Thick lines indicate the average over all monte carlo runs while the thin lines indicate ?1 standard deviation. The left plot uses a statistic trained on RTW(N=20) while the right plot uses a statistic trained on RTW(N=4). RTW(N=4). ,,-==--= :~:""~Z= 1.'~ ." ???? ... O.OO~--;"'~--;I"'~---;I-!;_;------;_;!;;---='" Figure 8: Repeat of figure 7 for cases with M = {4, 5, 15,20, 25}. Obvious breaks indicate a new set of trials We also presented empirical results which illustrated the utility of our approach. The example of random walk served as a simple illustration in learning a dynamic system in spite of the over-specification of the AR model. More importantly, we demonstrated the ability to learn both the dynamic and the statistics of the underlying noise process. This information was later used to distinguish realizations by their non parametric densities, something not possible using MMSE error prediction. An even more compelling result were the experiments with noisy random telegraph waves. 
We demonstrated the algorithms ability to learn a compact statistic which efficiently summarized the past for process identification. The method exhibited robustness to the number of parameters of the learned statistic. For example, despite overspecifying the dependence of the memory-4 in three of the cases, a useful statistic was still found. Conversely, despite the memory-20 statistic being underspecified in three of the experiments, useful information from the available past was extracted. It is our opinion that this method provides an alternative to some of the traditional and connectionist approaches to time-series analysis. The use of nonparametric estimators adds flexibility to the class of densities which can be modeled and places less of a constraint on the exact form of the summarizing statistic. References [1] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 199]. [2] P. Viola et al. Empricial entropy manipulation for real world problems. In Mozer Touretsky and Hasselmo, editors, Advances in Neural Information ProceSSing Systems, pages ?-?, ]996. [3] J.w. Fisher and J.e. Principe. A methodology for information theoretic feature extraction. In A. Stuberud, editor, Proc. of the IEEE Int loint Conf on Neural Networks, pages ?-?, ] 998. [4] 1. Kapur and H. Kesavan. Entropy Optimization Principles with Applications. Academic Press, New York, ] 992. [5] E. Parzen. On estimation of a probability density function and mode. Ann. of Math Stats., 33:1065-1076, 1962.
Boosting Algorithms as Gradient Descent

Llew Mason, Research School of Information Sciences and Engineering, Australian National University, Canberra, ACT, 0200, Australia, lmason@syseng.anu.edu.au
Jonathan Baxter, Research School of Information Sciences and Engineering, Australian National University, Canberra, ACT, 0200, Australia, Jonathan.Baxter@anu.edu.au
Peter Bartlett, Research School of Information Sciences and Engineering, Australian National University, Canberra, ACT, 0200, Australia, Peter.Bartlett@anu.edu.au
Marcus Frean, Department of Computer Science and Electrical Engineering, The University of Queensland, Brisbane, QLD, 4072, Australia, marcusf@elec.uq.edu.au

Abstract: We provide an abstract characterization of boosting algorithms as gradient descent on cost functionals in an inner-product function space. We prove convergence of these functional-gradient-descent algorithms under quite weak conditions. Following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, we present a new algorithm (DOOM II) for performing a gradient descent optimization of such cost functions. Experiments on several data sets from the UC Irvine repository demonstrate that DOOM II generally outperforms AdaBoost, especially in high-noise situations, and that the overfitting behaviour of AdaBoost is predicted by our cost functions.

1 Introduction

There has been considerable interest recently in voting methods for pattern classification, which predict the label of a particular example using a weighted vote over a set of base classifiers [10, 2, 6, 9, 16, 5, 3, 19, 12, 17, 7, 11, 8]. Recent theoretical results suggest that the effectiveness of these algorithms is due to their tendency to produce large margin classifiers [1, 18]. Loosely speaking, if a combination of classifiers correctly classifies most of the training data with a large margin, then its error probability is small. In [14] we gave improved upper bounds on the misclassification probability of a combined classifier in terms of the average over the training data of a certain cost function of the margins. That paper also described DOOM, an algorithm for directly minimizing the margin cost function by adjusting the weights associated with each base classifier (the base classifiers are supplied to DOOM). DOOM exhibits performance improvements over AdaBoost, even when using the same base hypotheses, which provides additional empirical evidence that these margin cost functions are appropriate quantities to optimize.

In this paper, we present a general class of algorithms (called AnyBoost) which perform gradient descent over linear combinations of elements of an inner-product function space so as to minimize some cost functional. The normal operation of a weak learner is shown to be equivalent to maximizing a certain inner product. We prove convergence of AnyBoost under weak conditions. In Section 3, we show that this general class of algorithms includes as special cases nearly all existing voting methods. In Section 5, we present experimental results for a special case of AnyBoost that minimizes a theoretically-motivated margin cost functional. The experiments show that the new algorithm typically outperforms AdaBoost, and that this is especially true with label noise.
In addition, the theoretically-motivated cost functions provide good estimates of the error of AdaBoost, in the sense that they can be used to predict its overfitting behaviour.

2 AnyBoost

Let $(x, y)$ denote examples from $X \times Y$, where $X$ is the space of measurements (typically $X \subseteq \mathbb{R}^N$) and $Y$ is the space of labels ($Y$ is usually a discrete set or some subset of $\mathbb{R}$). Let $\mathcal{F}$ denote some class of functions (the base hypotheses) mapping $X \to Y$, and $\mathrm{lin}(\mathcal{F})$ the set of all linear combinations of functions in $\mathcal{F}$. Let $\langle\cdot,\cdot\rangle$ be an inner product on $\mathrm{lin}(\mathcal{F})$, and $C: \mathrm{lin}(\mathcal{F}) \to \mathbb{R}$ a cost functional on $\mathrm{lin}(\mathcal{F})$. Our aim is to find a function $F \in \mathrm{lin}(\mathcal{F})$ minimizing $C(F)$. We will proceed iteratively, via a gradient descent procedure.

Suppose we have some $F \in \mathrm{lin}(\mathcal{F})$ and we wish to find a new $f \in \mathcal{F}$ to add to $F$ so that the cost $C(F + \epsilon f)$ decreases, for some small value of $\epsilon$. Viewed in function space terms, we are asking for the "direction" $f$ such that $C(F + \epsilon f)$ most rapidly decreases. The desired direction is simply the negative of the functional derivative of $C$ at $F$, $-\nabla C(F)$, where
$\nabla C(F)(x) := \frac{\partial C(F + \alpha \mathbf{1}_x)}{\partial \alpha}\bigg|_{\alpha = 0}, \quad (1)$
where $\mathbf{1}_x$ is the indicator function of $x$. Since we are restricted to choosing our new function $f$ from $\mathcal{F}$, in general it will not be possible to choose $f = -\nabla C(F)$, so instead we search for an $f$ with greatest inner product with $-\nabla C(F)$. That is, we should choose $f$ to maximize $-\langle \nabla C(F), f \rangle$. This can be motivated by observing that, to first order in $\epsilon$, $C(F + \epsilon f) = C(F) + \epsilon \langle \nabla C(F), f \rangle$, and hence the greatest reduction in cost will occur for the $f$ maximizing $-\langle \nabla C(F), f \rangle$. For reasons that will become obvious later, an algorithm that chooses an $f$ attempting to maximize $-\langle \nabla C(F), f \rangle$ will be described as a weak learner.

The preceding discussion motivates Algorithm 1 (AnyBoost), an iterative algorithm for finding linear combinations $F$ of base hypotheses in $\mathcal{F}$ that minimize the cost functional $C(F)$. Note that we have allowed the base hypotheses to take values in an arbitrary set $Y$, we have not restricted the form of the cost or the inner product, and we have not specified what the step-sizes should be. Appropriate choices for these things will be made when we apply the algorithm to more concrete situations. Note also that the algorithm terminates when $-\langle \nabla C(F_t), f_{t+1} \rangle \le 0$, i.e., when the weak learner $\mathcal{L}$ returns a base hypothesis $f_{t+1}$ which no longer points in the downhill direction of the cost function $C(F)$. Thus the algorithm terminates when, to first order, a step in function space in the direction of the base hypothesis returned by $\mathcal{L}$ would increase the cost.

Algorithm 1: AnyBoost
Require:
- An inner product space $(\mathcal{X}, \langle\cdot,\cdot\rangle)$ containing functions mapping from $X$ to some set $Y$.
- A class of base classifiers $\mathcal{F} \subseteq \mathcal{X}$.
- A differentiable cost functional $C: \mathrm{lin}(\mathcal{F}) \to \mathbb{R}$.
- A weak learner $\mathcal{L}(F)$ that accepts $F \in \mathrm{lin}(\mathcal{F})$ and returns $f \in \mathcal{F}$ with a large value of $-\langle \nabla C(F), f \rangle$.

Let $F_0(x) := 0$.
for $t := 0$ to $T$ do
    Let $f_{t+1} := \mathcal{L}(F_t)$.
    if $-\langle \nabla C(F_t), f_{t+1} \rangle \le 0$ then return $F_t$.
    Choose $w_{t+1}$.
    Let $F_{t+1} := F_t + w_{t+1} f_{t+1}$.
end for
return $F_{T+1}$.

3 A gradient descent view of voting methods

We now restrict our attention to base hypotheses $f \in \mathcal{F}$ mapping to $Y = \{\pm 1\}$, and to the inner product
$\langle F, G \rangle := \frac{1}{m} \sum_{i=1}^{m} F(x_i)\, G(x_i) \quad (2)$
for all $F, G \in \mathrm{lin}(\mathcal{F})$, where $S = \{(x_1, y_1), \dots, (x_m, y_m)\}$ is a set of training examples generated according to some unknown distribution $\mathcal{D}$ on $X \times Y$.
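Algorithm 1 translates almost line-for-line into code. The following sketch (function names and interfaces are mine, not the paper's) implements the loop for the sample-based inner product of equation (2), with the weak learner, gradient, and step-size rule supplied by the caller.

```python
import numpy as np

def anyboost(X, grad_C, weak_learner, choose_step, T):
    """Algorithm 1 (AnyBoost) sketch.
    grad_C(F_vals)        -> per-example functional gradient, shape (m,)
    weak_learner(neg_grad) -> f, a callable returning +/-1 predictions on X
    choose_step(F_vals, f_vals) -> scalar step size w_{t+1}
    """
    m = len(X)
    F_vals = np.zeros(m)          # F_0 = 0, tracked by its values on the sample
    ensemble = []                 # list of (weight, base hypothesis) pairs
    for t in range(T):
        g = grad_C(F_vals)
        f = weak_learner(-g)
        f_vals = f(X)
        # Inner product of eq. (2): <grad C(F_t), f> = (1/m) sum_i g_i f(x_i)
        if -np.mean(g * f_vals) <= 0:
            break                 # weak learner no longer points downhill
        w = choose_step(F_vals, f_vals)
        F_vals += w * f_vals
        ensemble.append((w, f))
    return ensemble
```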
Our aim now is to find $F \in \mathrm{lin}(\mathcal{F})$ such that $\Pr_{(x, y) \sim \mathcal{D}}\,[\mathrm{sgn}(F(x)) \ne y]$ is minimal, where $\mathrm{sgn}(F(x)) = -1$ if $F(x) < 0$ and $\mathrm{sgn}(F(x)) = 1$ otherwise. In other words, $\mathrm{sgn}\,F$ should minimize the misclassification probability.

The margin of $F: X \to \mathbb{R}$ on the example $(x, y)$ is defined as $yF(x)$. Consider margin cost-functionals defined by
$C(F) := \frac{1}{m} \sum_{i=1}^{m} c(y_i F(x_i)),$
where $c: \mathbb{R} \to \mathbb{R}$ is any differentiable real-valued function of the margin. With these definitions, a quick calculation shows
$-\langle \nabla C(F), f \rangle = -\frac{1}{m^2} \sum_{i=1}^{m} y_i\, f(x_i)\, c'(y_i F(x_i)).$
Since positive margins correspond to examples correctly labelled by $\mathrm{sgn}\,F$ and negative margins to incorrectly labelled examples, any sensible cost function of the margin will be monotonically decreasing; hence $-c'(y_i F(x_i))$ will always be positive. Dividing through by $-\sum_{i=1}^{m} c'(y_i F(x_i))$, we see that finding an $f$ maximizing $-\langle \nabla C(F), f \rangle$ is equivalent to finding an $f$ minimizing the weighted error $\sum_{i:\ f(x_i) \ne y_i} D(i)$, where $D(i) := c'(y_i F(x_i)) \big/ \sum_{j=1}^{m} c'(y_j F(x_j))$ for $i = 1, \dots, m$. Many of the most successful voting methods are, for the appropriate choice of margin cost function $c$ and step-size, specific cases of the AnyBoost algorithm (see Table 1). A more detailed analysis can be found in the full version of this paper [15].

Table 1: Existing voting methods viewed as AnyBoost on margin cost functions.
Algorithm | Cost function | Step size
AdaBoost [9] | $e^{-yF(x)}$ | Line search
ARC-X4 [2] | $(1 - yF(x))^5$ | $1/t$
ConfidenceBoost [19] | $e^{-yF(x)}$ | Line search
LogitBoost [12] | $\ln(1 + e^{-yF(x)})$ | Newton-Raphson

4 Convergence of AnyBoost

In this section we provide convergence results for the AnyBoost algorithm, under quite weak conditions on the cost functional $C$. The prescriptions given for the step-sizes $w_t$ in these results are for convergence guarantees only: in practice they will almost always be smaller than necessary, hence fixed small steps or some form of line search should be used. The following theorem (proof omitted, see [15]) supplies a specific step-size for AnyBoost and characterizes the limiting behaviour with this step-size.

Theorem 1. Let $C: \mathrm{lin}(\mathcal{F}) \to \mathbb{R}$ be any lower bounded, Lipschitz differentiable cost functional (that is, there exists $L > 0$ such that $\|\nabla C(F) - \nabla C(F')\| \le L\,\|F - F'\|$ for all $F, F' \in \mathrm{lin}(\mathcal{F})$). Let $F_0, F_1, \dots$ be the sequence of combined hypotheses generated by the AnyBoost algorithm, using step-sizes
$w_{t+1} := -\frac{\langle \nabla C(F_t), f_{t+1} \rangle}{L\, \|f_{t+1}\|^2}. \quad (3)$
Then AnyBoost either halts on round $T$ with $-\langle \nabla C(F_T), f_{T+1} \rangle \le 0$, or $C(F_t)$ converges to some finite value $C^*$, in which case $\lim_{t \to \infty} \langle \nabla C(F_t), f_{t+1} \rangle = 0$.

The next theorem (proof omitted, see [15]) shows that if the weak learner always finds the best weak hypothesis $f_t \in \mathcal{F}$ on each round of AnyBoost, and if the cost functional $C$ is convex, then any accumulation point $F$ of the sequence $(F_t)$ generated by AnyBoost with the step-sizes (3) is a global minimum of the cost. For ease of exposition, we have assumed that rather than terminating when $-\langle \nabla C(F_T), f_{T+1} \rangle \le 0$, AnyBoost simply continues to return $F_T$ for all subsequent time steps $t$.
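The weights $D(i)$ make the link to existing boosters concrete: with $c(z) = e^{-z}$ they reduce to AdaBoost's exponential example weights, while with the normalized sigmoid cost of the next section they fall off on badly misclassified examples. A small illustration of my own, with $\lambda = 1$ assumed:

```python
import numpy as np

def example_weights(margins: np.ndarray, c_prime) -> np.ndarray:
    """D(i) = c'(y_i F(x_i)) / sum_j c'(y_j F(x_j)). c' is negative for any
    decreasing margin cost, so the normalized weights are positive and sum to 1."""
    g = c_prime(margins)
    return g / g.sum()

margins = np.array([1.2, 0.3, -0.5, -1.1])   # y_i * F(x_i) on four examples

# AdaBoost: c(z) = exp(-z), so c'(z) = -exp(-z)
w_ada = example_weights(margins, lambda z: -np.exp(-z))

# DOOM II-style: c(z) = 1 - tanh(z), so c'(z) = -(1 - tanh(z)**2)
w_doom = example_weights(margins, lambda z: -(1.0 - np.tanh(z) ** 2))

print(np.round(w_ada, 3))    # weight explodes on the most negative margin
print(np.round(w_doom, 3))   # weight falls off on badly misclassified points
```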
Theorem 2. Let $C: \mathrm{lin}(\mathcal{F}) \to \mathbb{R}$ be a convex cost functional with the properties in Theorem 1, and let $(F_t)$ be the sequence of combined hypotheses generated by the AnyBoost algorithm with step-sizes given by (3). Assume that the weak hypothesis class $\mathcal{F}$ is negation closed ($f \in \mathcal{F} \implies -f \in \mathcal{F}$) and that on each round the AnyBoost algorithm finds a function $f_{t+1}$ maximizing $-\langle \nabla C(F_t), f_{t+1} \rangle$. Then any accumulation point $F$ of the sequence $(F_t)$ satisfies $\sup_{f \in \mathcal{F}} -\langle \nabla C(F), f \rangle = 0$ and $C(F) = \inf_{G \in \mathrm{lin}(\mathcal{F})} C(G)$.

5 Experiments

AdaBoost had been perceived to be resistant to overfitting despite the fact that it can produce combinations involving very large numbers of classifiers. However, recent studies have shown that this is not the case, even for base classifiers as simple as decision stumps [13, 5, 17]. This overfitting can be attributed to the use of exponential margin cost functions (recall Table 1). The results in [14] showed that overfitting may be avoided by using margin cost functionals of a form qualitatively similar to
$C(F) = \frac{1}{m} \sum_{i=1}^{m} \big(1 - \tanh(\lambda\, y_i F(x_i))\big), \quad (4)$
where $\lambda$ is an adjustable parameter controlling the steepness of the margin cost function $c(z) = 1 - \tanh(\lambda z)$. For the theoretical analysis of [14] to apply, $F$ must be a convex combination of base hypotheses, rather than a general linear combination. Henceforth (4) will be referred to as the normalized sigmoid cost functional. AnyBoost with (4) as the cost functional and (2) as the inner product will be referred to as DOOM II. In our implementation of DOOM II we use a fixed small step-size $\epsilon$ (for all of the experiments $\epsilon = 0.05$). For all details of the algorithm the reader is referred to the full version of this paper [15].

We compared the performance of DOOM II and AdaBoost on a selection of nine data sets taken from the UCI machine learning repository [4], to which various levels of label noise had been applied. To simplify matters, only binary classification problems were considered. For all of the experiments, axis-orthogonal hyperplanes (also known as decision stumps) were used as the weak learner. Full details of the experimental setup may be found in [15]. A summary of the experimental results is shown in Figure 1. The improvement in test error exhibited by DOOM II over AdaBoost is shown for each data set and noise level. DOOM II generally outperforms AdaBoost, and the improvement is more pronounced in the presence of label noise.

The effect of using the normalized sigmoid cost function rather than the exponential cost function is best illustrated by comparing the cumulative margin distributions generated by AdaBoost and DOOM II. Figure 2 shows comparisons for two data sets with 0% and 15% label noise applied. For a given margin, the value on the curve corresponds to the proportion of training examples with margin less than or equal to that value. These curves show that in trying to increase the margins of negative examples, AdaBoost is willing to sacrifice the margins of positive examples significantly. In contrast, DOOM II 'gives up' on examples with large negative margin in order to reduce the value of the cost function.

Given that AdaBoost does suffer from overfitting and is guaranteed to minimize an exponential cost function of the margins, this cost function certainly does not relate to test error. How does the value of our proposed cost function correlate with AdaBoost's test error? Figure 3 shows the variation in the normalized sigmoid cost function, the exponential cost function, and the test error for AdaBoost on two UCI data sets over 10000 rounds. There is a strong correlation between the normalized sigmoid cost and AdaBoost's test error. In both data sets the minimum of AdaBoost's test error and the minimum of the normalized sigmoid cost very nearly coincide, showing that the sigmoid cost function predicts when AdaBoost will start to overfit.
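A direct transcription of the cost (4) and a fixed-step update is sketched below. This is an assumption-laden illustration of my own: the paper defers implementation details to [15], and the renormalization used here to keep $F$ a convex combination is one plausible reading, not the authors' exact procedure.

```python
import numpy as np

LAMBDA = 1.0   # steepness parameter; an assumed value, the paper tunes it

def normalized_sigmoid_cost(margins: np.ndarray) -> float:
    """Equation (4): C(F) = (1/m) * sum_i (1 - tanh(lambda * y_i F(x_i)))."""
    return np.mean(1.0 - np.tanh(LAMBDA * margins))

def doom2_round(F_vals: np.ndarray, f_vals: np.ndarray, eps: float = 0.05):
    """One fixed-step DOOM II update: add eps * f and renormalize so that the
    combination stays convex (total weight 1), as the analysis of [14] requires."""
    return (F_vals + eps * f_vals) / (1.0 + eps)

y = np.array([1, 1, -1, -1])
F_vals = np.array([0.4, -0.1, 0.2, -0.6])     # current combined outputs
f_vals = np.array([1, 1, -1, 1])              # new base classifier's outputs
print(normalized_sigmoid_cost(y * F_vals))
print(normalized_sigmoid_cost(y * doom2_round(F_vals, f_vals)))
```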
;~ 15(fi.: cleve SOIlar Ionosphere vote I credll brea.l;t-cancer Jmna.. uldlans Il'.'ioc? . . .. ~ noi~e hypo I ,. # ,.~ sphCt! Data set Figure 1: Summary oft est error advantage (with standard error bars) of DOOM II over AdaBoost with varying levels of noise on nine VCI data sets. breast--cancer~ wisconsi n splice noise .. AdaBI')(\'it - - U,,? Mise - DOOM II 15% noise - AdaBoost ............. 15% noise - DOOM n -- 0.8 - 0.8 0.6 0.6 0.4 0.4 0.2 0.2 o~------~~~~~~----~ -1 - - 0% n(!ise .. AdaBo(lst - - 0% ,,,)i.,~ DOOM II 15% noise - AdaBoost . ............ 15% noise - DOOM n. O~ -0.5 o Mar~in 0.5 o~------~~~~------~-----J -1 -0.5 o 0.5 Mar~ in Figure 2: Margin distributions for AdaBoost and DOOM II with 0% and 15% label noise for the breast-cancer and splice data sets. of AdaBoost's test error and the mlllimum of the normalized sigmoid cost very nearly coincide, showing that the sigmoid cost function predicts when AdaBoost will start to overfit. References [1] P . L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2) :525-536 , March 1998. [2] L. Breiman. Bagging predictors. Machine Learning, 24(2):123- 140, 1996. [3] L. Breiman. Prediction games and arcing algorithms. Technical Report 504 , Department of Statistics, University of California, Berkeley, 1998. [4] E. Keogh C. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. http:j jwww.ics.uci.eduj"'mlearnjMLRepository_html. [5] T.G . Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Technical report, Computer Science Department, Oregon State University, 1998. 518 L. Mason, J. Baxter, P Bartlett and M. Frean yotel labor 30.-~--~------~~--~------~ AdaB(>o~1 1e_~1 (';1.)( - - Exponential CQ!;! -- AJaB ()(~3 l re;.;( CIYor - - /1 7 Normalized sigmoid cost ............ . Exponential cost ---.. . No rmalized sigmoid enst ........... .. I 6 \ \ 5 ........'\ \ \\ 1/'~_~ 4 \..i\.. ,...............................,""........................ 2 !O 100 Rounds 1000 10000 ~ 'V'\..,..... O~----~------~-----=~----~ 1 10 100 1000 10000 Rounds Figure 3: AdaBoost test error, exponential cost and normalized sigmoid cost over 10000 rounds of AdaBoost for the labor and vote1 data sets. Both costs have been scaled in each case for easier comparison with test error. [6] H. Drucker and C . Cortes . Boosting decision trees. In Advances in Neural Information Processing Systems 8, pages 479- 485, 1996. [7] N. Duffy and D. Helmbold. A geometric approach to leveraging weak learners. In Computational Learning Theory: 4th European Conference, 1999. (to appear). [8] Y. Freund. An adaptive version of the boost by majority algorithm. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999. (to appear) . [9] Y . Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148-156, 1996. [10] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119139, August 1997. [11] J . Friedman. Greedy function approximation : A gradient boosting machine. Technical report, Stanford University, 1999. [12] J . Friedman, T. Hastie, and R. Tibshirani. 
Additive logistic regression: A statistical view of boosting. Technical report, Stanford University, 1998.
[13] A. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 692-699, 1998.
[14] L. Mason, P. L. Bartlett, and J. Baxter. Improved generalization through explicit optimization of margins. Machine Learning, 1999. (to appear).
[15] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Functional gradient techniques for combining hypotheses. In Alex Smola, Peter Bartlett, Bernhard Schölkopf, and Dale Schuurmans, editors, Large Margin Classifiers. MIT Press, 1999. To appear.
[16] J. R. Quinlan. Bagging, boosting, and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 725-730, 1996.
[17] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Department of Computer Science, Royal Holloway, University of London, Egham, UK, 1998.
[18] R. E. Schapire, Y. Freund, P. L. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651-1686, October 1998.
[19] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 80-91, 1998.
Predictive Approaches For Choosing Hyperparameters in Gaussian Processes

S. Sundararajan
Computer Science and Automation
Indian Institute of Science
Bangalore 560 012, India
sundar@csa.iisc.ernet.in

S. Sathiya Keerthi
Mechanical and Production Engg.
National University of Singapore
10 Kentridge Crescent, Singapore 119260
mpessk@guppy.mpe.nus.edu.sg

Abstract

Gaussian Processes are powerful regression models specified by parametrized mean and covariance functions. Standard approaches to estimate these parameters (known by the name hyperparameters) are Maximum Likelihood (ML) and Maximum A Posteriori (MAP) approaches. In this paper, we propose and investigate predictive approaches, namely, maximization of Geisser's Surrogate Predictive Probability (GPP) and minimization of mean square error with respect to GPP (referred to as Geisser's Predictive mean square Error (GPE)) to estimate the hyperparameters. We also derive results for the standard Cross-Validation (CV) error and make a comparison. These approaches are tested on a number of problems and experimental results show that these approaches are strongly competitive with existing approaches.

1 Introduction

Gaussian Processes (GPs) are powerful regression models that have gained popularity recently, though they have appeared in different forms in the literature for years. They can be used for classification also; see MacKay (1997), Rasmussen (1996) and Williams and Rasmussen (1996). Here, we restrict ourselves to regression problems. Neal (1996) showed that a large class of neural network models converge to a Gaussian Process prior over functions in the limit of an infinite number of hidden units. Although GPs can be created using infinite networks, often GPs are specified directly using parametric forms for the mean and covariance functions (Williams and Rasmussen (1996)). We assume that the process is zero mean. Let $Z_N = \{X_N, \mathbf{y}_N\}$, where $X_N = \{x(i): i = 1,\ldots,N\}$ and $\mathbf{y}_N = \{y(i): i = 1,\ldots,N\}$. Here, $y(i)$ represents the output corresponding to the input vector $x(i)$. Then, the Gaussian prior over the functions is given by

$p(\mathbf{y}_N|X_N,\theta) = \frac{\exp(-\frac{1}{2}\mathbf{y}_N^T C_N^{-1}\mathbf{y}_N)}{(2\pi)^{N/2}|C_N|^{1/2}}$   (1)

where $C_N$ is the covariance matrix with $(i,j)$th element $[C_N]_{i,j} = C(x(i),x(j);\theta)$ and $C(\cdot;\theta)$ denotes the parametrized covariance function. Now, assuming that the observed output $\mathbf{t}_N$ is modeled as $\mathbf{t}_N = \mathbf{y}_N + \mathbf{e}_N$, where $\mathbf{e}_N$ is zero mean multivariate Gaussian with covariance matrix $\sigma^2 I_N$ and is independent of $\mathbf{y}_N$, we get

$p(\mathbf{t}_N|X_N,\tilde\theta) = \frac{\exp(-\frac{1}{2}\mathbf{t}_N^T \tilde{C}_N^{-1}\mathbf{t}_N)}{(2\pi)^{N/2}|\tilde{C}_N|^{1/2}}$   (2)

where $\tilde{C}_N = C_N + \sigma^2 I_N$. Therefore, $[\tilde{C}_N]_{i,j} = [C_N]_{i,j} + \sigma^2\delta_{i,j}$, where $\delta_{i,j} = 1$ when $i = j$ and zero otherwise. Note that $\tilde\theta = (\theta,\sigma^2)$ is the new set of hyperparameters. Then, the predictive distribution of the output $y(N+1)$ for a test case $x(N+1)$ is also Gaussian, with mean and variance

$\hat{y}(N+1) = \mathbf{k}_{N+1}^T\,\tilde{C}_N^{-1}\,\mathbf{t}_N$   (3)

and

$\sigma^2_{y(N+1)} = b_{N+1} - \mathbf{k}_{N+1}^T\,\tilde{C}_N^{-1}\,\mathbf{k}_{N+1}$   (4)

where $b_{N+1} = C(x(N+1),x(N+1);\tilde\theta)$ and $\mathbf{k}_{N+1}$ is an $N\times 1$ vector with $i$th element given by $C(x(N+1),x(i);\tilde\theta)$. Now, we need to specify the covariance function $C(\cdot;\theta)$. Williams and Rasmussen (1996) found the following covariance function to work well in practice:

$C(x(i),x(j);\theta) = a_0 + a_1\sum_{p=1}^{M} x_p(i)\,x_p(j) + v_0\exp\Big(-\frac{1}{2}\sum_{p=1}^{M} w_p\,(x_p(i)-x_p(j))^2\Big)$   (5)

where $x_p(i)$ is the $p$th component of the $i$th input vector $x(i)$. The $w_p$ are the Automatic Relevance Determination (ARD) parameters. Note that $\tilde{C}(x(i),x(j);\tilde\theta) = C(x(i),x(j);\theta) + \sigma^2\delta_{i,j}$. Also, all the parameters are positive and it is convenient to use a logarithmic scale. Hence, $\tilde\theta$ is given by $\log(a_0, a_1, v_0, w_1, \ldots, w_M, \sigma^2)$.
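As a concrete illustration of how (2) and (5) combine, here is a minimal numpy sketch (our own, not the authors' code) that builds $\tilde{C}_N$ from a set of inputs; the parameter values are arbitrary placeholders:

```python
import numpy as np

def cov_matrix(X, a0, a1, v0, w, sigma2):
    # C(x(i), x(j); theta) = a0 + a1 * x(i).x(j)
    #   + v0 * exp(-0.5 * sum_p w_p * (x_p(i) - x_p(j))^2),   Eq. (5)
    # and C~_N = C_N + sigma2 * I_N,                          Eq. (2)
    lin = a0 + a1 * (X @ X.T)
    d2 = (((X[:, None, :] - X[None, :, :]) ** 2) * w).sum(axis=2)
    return lin + v0 * np.exp(-0.5 * d2) + sigma2 * np.eye(len(X))

X = np.random.randn(5, 3)                       # N = 5 inputs, M = 3
C_tilde = cov_matrix(X, a0=0.1, a1=0.1, v0=1.0,
                     w=np.array([1.0, 0.5, 0.1]),  # ARD weights w_p
                     sigma2=0.01)
```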
Then, the question is: how do we handle $\tilde\theta$? More sophisticated techniques like Hybrid Monte Carlo (HMC) methods (Rasmussen (1996) and Neal (1997)) are available which can numerically integrate over the hyperparameters to make predictions. Alternately, we can estimate $\tilde\theta$ from the training data. We restrict ourselves to the latter approach here. In the classical approach, $\tilde\theta$ is assumed to be deterministic but unknown and the estimate is found by maximizing the likelihood (2). That is, $\tilde\theta_{ML} = \arg\max_{\tilde\theta}\, p(\mathbf{t}_N|X_N,\tilde\theta)$. In the Bayesian approach, $\tilde\theta$ is assumed to be random and a prior $p(\tilde\theta)$ is specified. Then, the MAP estimate $\tilde\theta_{MP}$ is obtained as $\tilde\theta_{MP} = \arg\max_{\tilde\theta}\, p(\mathbf{t}_N|X_N,\tilde\theta)\,p(\tilde\theta)$, with the motivation that the predictive distribution $p(y(N+1)|x(N+1), Z_N)$ can be approximated as $p(y(N+1)|x(N+1), Z_N, \tilde\theta_{MP})$. With this background, in this paper we propose and investigate different predictive approaches to estimate the hyperparameters from the training data.

2 Predictive approaches for choosing hyperparameters

Geisser (1975) proposed the Predictive Sample Reuse (PSR) methodology, which can be applied to both model selection and parameter estimation problems. The basic idea is to define a partition scheme $P(N,n,r)$ such that $P_{N-n,n}^{(i)} = (Z_{N-n}^{(i)}; Z_n^{(i)})$ is the $i$th partition belonging to a set $\Gamma$ of partitions, with $Z_{N-n}^{(i)}, Z_n^{(i)}$ representing the $N-n$ retained and $n$ omitted data sets respectively. Then, the unknown $\tilde\theta$ is estimated (or a model $M_j$ is chosen among a set of models indexed by $j = 1,\ldots,J$) by optimizing a predictive measure that assesses the predictive performance on the omitted observations $X_n^{(i)}$ using the retained observations $Z_{N-n}^{(i)}$, averaged over the partitions ($i \in \Gamma$). In the special case of $n = 1$, we have the leave-one-out strategy. Note that this approach was independently presented under the name of cross-validation (CV) by Stone (1974). The well known examples are the standard CV error and the negative of the average predictive likelihood. Geisser and Eddy (1979) proposed to maximize $\prod_{i=1}^{N} p(t(i)|x(i), Z_N^{(i)}, M_j)$ (known as Geisser's Surrogate Predictive Probability (GPP)) by synthesizing the Bayesian and PSR methodologies in the context of (parametrized) model selection. Here, we propose to maximize $\prod_{i=1}^{N} p(t(i)|x(i), Z_N^{(i)}, \tilde\theta)$ to estimate $\tilde\theta$, where $Z_N^{(i)}$ is obtained from $Z_N$ by removing the $i$th sample. Note that $p(t(i)|x(i), Z_N^{(i)}, \tilde\theta)$ is nothing but the predictive distribution $p(y(i)|x(i), Z_N^{(i)}, \tilde\theta)$ evaluated at $y(i) = t(i)$. Also, we introduce the notion of Geisser's Predictive mean square Error (GPE), defined as $\frac{1}{N}\sum_{i=1}^{N} E\big((y(i)-t(i))^2\big)$ (where the expectation is taken with respect to $p(y(i)|x(i), Z_N^{(i)}, \tilde\theta)$), and propose to estimate $\tilde\theta$ by minimizing GPE.

2.1 Expressions for GPP and its gradient

The objective function corresponding to GPP is given by

$G(\tilde\theta) = -\frac{1}{N}\sum_{i=1}^{N}\log p(t(i)|x(i), Z_N^{(i)}, \tilde\theta)$   (6)

From (3) and (4) we get

$G(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\frac{(t(i)-\hat{y}(i))^2}{2\sigma_y^2(i)} + \frac{1}{2N}\sum_{i=1}^{N}\log\sigma_y^2(i) + \frac{1}{2}\log 2\pi$   (7)

where $\hat{y}(i) = [\mathbf{c}^{(i)}]^T[\tilde{C}_N^{(i)}]^{-1}\mathbf{t}_N^{(i)}$ and $\sigma_y^2(i) = [\tilde{C}_N]_{ii} - [\mathbf{c}^{(i)}]^T[\tilde{C}_N^{(i)}]^{-1}\mathbf{c}^{(i)}$. Here, $\tilde{C}_N^{(i)}$ is an $(N-1)\times(N-1)$ matrix obtained from $\tilde{C}_N$ by removing the $i$th column and $i$th row. Similarly, $\mathbf{t}_N^{(i)}$ and $\mathbf{c}^{(i)}$ are obtained from $\mathbf{t}_N$ and $\tilde{\mathbf{c}}_i$ (i.e., the $i$th column of $\tilde{C}_N$) respectively by removing the $i$th element. Then, $G(\tilde\theta)$ and its gradient can be computed efficiently using the following result.
Theorem 1. The objective function $G(\tilde\theta)$ under the Gaussian Process model is given by

$G(\tilde\theta) = \frac{1}{2N}\sum_{i=1}^{N}\frac{q_N^2(i)}{\bar{c}_{ii}} - \frac{1}{2N}\sum_{i=1}^{N}\log\bar{c}_{ii} + \frac{1}{2}\log 2\pi$   (8)

where $\bar{c}_{ii}$ denotes the $i$th diagonal entry of $\tilde{C}_N^{-1}$ and $q_N(i)$ denotes the $i$th element of $\mathbf{q}_N = \tilde{C}_N^{-1}\mathbf{t}_N$. Its gradient is given by

$\frac{\partial G(\tilde\theta)}{\partial\theta_j} = -\frac{1}{2N}\sum_{i=1}^{N}\Big(1 + \frac{q_N^2(i)}{\bar{c}_{ii}}\Big)\frac{s_{j,i}}{\bar{c}_{ii}} + \frac{1}{N}\sum_{i=1}^{N}\frac{q_N(i)\,r_j(i)}{\bar{c}_{ii}}$   (9)

where $s_{j,i}$ denotes the $i$th diagonal element of $S_j = \partial\tilde{C}_N^{-1}/\partial\theta_j = -\tilde{C}_N^{-1}\,\frac{\partial\tilde{C}_N}{\partial\theta_j}\,\tilde{C}_N^{-1}$ and $r_j(i)$ the $i$th element of $\mathbf{r}_j = S_j\,\mathbf{t}_N$.

Thus, using (8) and (9) we can compute the GPP and its gradient. We will give a meaningful interpretation of the different terms shortly.

2.2 Expressions for the CV function and its gradient

We define the CV function as

$H(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\big(t(i) - \hat{y}(i)\big)^2$   (10)

where $\hat{y}(i)$ is the mean of the conditional predictive distribution as given above. Now, using the following result we can compute $H(\tilde\theta)$ efficiently.

Theorem 2. The CV function $H(\tilde\theta)$ under the Gaussian model is given by

$H(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\frac{q_N^2(i)}{\bar{c}_{ii}^2}$   (11)

and its gradient is given by

$\frac{\partial H(\tilde\theta)}{\partial\theta_j} = \frac{2}{N}\sum_{i=1}^{N}\frac{q_N(i)}{\bar{c}_{ii}^2}\Big(r_j(i) - \frac{q_N(i)\,s_{j,i}}{\bar{c}_{ii}}\Big)$   (12)

where $s_{j,i}$, $r_j$, $q_N(i)$ and $\bar{c}_{ii}$ are as defined in Theorem 1.

2.3 Expressions for GPE and its gradient

The GPE function is defined as

$G_E(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\int \big(t(i) - y(i)\big)^2\, p(y(i)|x(i), Z_N^{(i)}, \tilde\theta)\, dy(i)$   (13)

which can be readily simplified to

$G_E(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\big(t(i) - \hat{y}(i)\big)^2 + \frac{1}{N}\sum_{i=1}^{N}\sigma_y^2(i)$   (14)

On comparing (14) with (10), we see that while the CV error minimizes the deviation from the predictive mean, GPE takes the predictive variance also into account. Now, the gradient can be written as

$\frac{\partial G_E(\tilde\theta)}{\partial\theta_j} = \frac{\partial H(\tilde\theta)}{\partial\theta_j} - \frac{1}{N}\sum_{i=1}^{N}\frac{s_{j,i}}{\bar{c}_{ii}^2}$   (15)

where we have used the results $\sigma_y^2(i) = \frac{1}{\bar{c}_{ii}}$, $\frac{\partial\bar{c}_{ii}}{\partial\theta_j} = \mathbf{e}_i^T\,\frac{\partial\tilde{C}_N^{-1}}{\partial\theta_j}\,\mathbf{e}_i$ and $\frac{\partial\tilde{C}_N^{-1}}{\partial\theta_j} = -\tilde{C}_N^{-1}\,\frac{\partial\tilde{C}_N}{\partial\theta_j}\,\tilde{C}_N^{-1}$. Here $\mathbf{e}_i$ denotes the $i$th column vector of the identity matrix $I_N$.

2.4 Interpretations

More insight can be obtained by reparametrizing the covariance function as follows:

$\tilde{C}(x(i),x(j);\tilde\theta) = \sigma^2\Big(\bar{a}_0 + \bar{a}_1\sum_{p=1}^{M} x_p(i)\,x_p(j) + \bar{v}_0\exp\Big(-\frac{1}{2}\sum_{p=1}^{M} w_p\,(x_p(i)-x_p(j))^2\Big) + \delta_{i,j}\Big)$   (16)

where $a_0 = \sigma^2\bar{a}_0$, $a_1 = \sigma^2\bar{a}_1$, $v_0 = \sigma^2\bar{v}_0$. Let us define $\tilde{P}(x(i),x(j);\tilde\theta) = \frac{1}{\sigma^2}\tilde{C}(x(i),x(j);\tilde\theta)$. Then $\tilde{P}_N^{-1} = \sigma^2\,\tilde{C}_N^{-1}$. Therefore, $\bar{c}_{i,j} = \bar{p}_{i,j}/\sigma^2$, where $\bar{c}_{i,j}$, $\bar{p}_{i,j}$ denote the $(i,j)$th elements of the matrices $\tilde{C}_N^{-1}$ and $\tilde{P}_N^{-1}$ respectively. From Theorem 2 (see (10) and (11)) we have $t(i) - \hat{y}(i) = \frac{q_N(i)}{\bar{c}_{ii}} = \frac{\bar{q}_N(i)}{\bar{p}_{ii}}$. Then, we can rewrite (8) as

$G(\tilde\theta) = \frac{1}{2N\sigma^2}\sum_{i=1}^{N}\frac{\bar{q}_N^2(i)}{\bar{p}_{ii}} - \frac{1}{2N}\sum_{i=1}^{N}\log\bar{p}_{ii} + \frac{1}{2}\log 2\pi\sigma^2$   (17)

Here, $\bar{\mathbf{q}}_N = \tilde{P}_N^{-1}\mathbf{t}_N$ and $\bar{p}_i$, $\bar{p}_{ii}$ denote, respectively, the $i$th column and $i$th diagonal entry of the matrix $\tilde{P}_N^{-1}$. Now, by setting the derivative of (17) with respect to $\sigma^2$ to zero, we can infer the noise level as

$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\frac{\bar{q}_N^2(i)}{\bar{p}_{ii}}$   (18)

Similarly, the CV error (10) can be rewritten as

$H(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\frac{\bar{q}_N^2(i)}{\bar{p}_{ii}^2}$   (19)

Note that $H(\tilde\theta)$ depends only on the ratios of the hyperparameters (i.e., on $\bar{a}_0, \bar{a}_1, \bar{v}_0$) apart from the ARD parameters. Therefore, we cannot infer the noise level uniquely. However, we can estimate the ARD parameters and the ratios $\bar{a}_0, \bar{a}_1, \bar{v}_0$. Once we have estimated these parameters, we can use (18) to estimate the noise level. Next, we note that the noise level preferred by the GPE criterion is zero. To see this, first rewrite (14) under the reparametrization as

$G_E(\tilde\theta) = \frac{1}{N}\sum_{i=1}^{N}\frac{\bar{q}_N^2(i)}{\bar{p}_{ii}^2} + \frac{\sigma^2}{N}\sum_{i=1}^{N}\frac{1}{\bar{p}_{ii}}$   (20)

Since $\bar{q}_N(i)$ and $\bar{p}_{ii}$ are independent of $\sigma^2$, it follows that the GPE prefers zero as the noise level, which is not true. Therefore, this approach can be applied when either the noise level is known or a good estimate of it is available.
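Theorems 1 and 2 show that a single factorization of $\tilde{C}_N$ yields all $N$ leave-one-out quantities at once. The following is a minimal sketch of all three criteria (our own illustration, using a dense inverse for clarity rather than the Cholesky-based implementation one would use in practice):

```python
import numpy as np

def loo_quantities(C_tilde, t):
    # All three criteria need only q_N = C~^{-1} t and the diagonal
    # entries cbar_ii of C~^{-1} (Theorems 1 and 2).
    Cinv = np.linalg.inv(C_tilde)
    return Cinv @ t, np.diag(Cinv)

def gpp(C_tilde, t):
    # Eq. (8): negative mean log leave-one-out predictive probability.
    q, cbar = loo_quantities(C_tilde, t)
    N = len(t)
    return ((q**2 / cbar).sum() - np.log(cbar).sum()) / (2 * N) \
        + 0.5 * np.log(2 * np.pi)

def cv_error(C_tilde, t):
    # Eq. (11): mean squared leave-one-out residual.
    q, cbar = loo_quantities(C_tilde, t)
    return np.mean((q / cbar) ** 2)

def gpe(C_tilde, t):
    # Eq. (14): CV error plus the mean predictive variance 1 / cbar_ii.
    q, cbar = loo_quantities(C_tilde, t)
    return np.mean((q / cbar) ** 2) + np.mean(1.0 / cbar)
```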
3 Simulation results

We carried out simulations on four data sets. We considered MacKay's robot arm problem and its modified version introduced by Neal (1996). We used the same data set as MacKay (2 inputs and 2 outputs), with 200 examples in the training set and 200 in the test set. This data set is referred to as 'data set 1' in Table 1. Next, to evaluate the ability of the predictive approaches in estimating the ARD parameters, we carried out simulations on the robot arm data with 6 inputs (Neal's version), denoted as 'data set 2' in Table 1. This data set was generated by adding four further inputs, two of which were copies of the two inputs corrupted by additive zero mean Gaussian noise of standard deviation 0.02, and two further irrelevant Gaussian noise inputs with zero mean and unit variance (Williams and Rasmussen (1996)). The performance measures chosen were the average of the Test Set Error (normalized by the true noise level of 0.0025) and the average of the negative logarithm of predictive probability (NLPP) (computed from the Gaussian density function with (3) and (4)). Friedman's data sets 1 and 2 were based on the problem of predicting impedance and phase, respectively, from four parameters of an electrical circuit. Training sets of three different sizes (50, 100, 200) with a signal-to-noise ratio of about 3:1 were replicated 100 times, and for each training set (at each sample size $N$) the scaled integral squared error $\mathrm{ISE} = \frac{\int_D (y(x) - \hat{y}(x))^2\,dx}{\mathrm{var}_D\, y(x)}$ and the NLPP were computed using 5000 data points randomly generated from a uniform distribution over $D$ (Friedman (1991)). In the case of GPE (denoted as GE in the tables), we used a noise level estimate generated from a Gaussian distribution with mean $NL_T$ (the true noise level) and standard deviation $0.03\,NL_T$. In the case of CV, we estimated the hyperparameters in the reparametrized form and estimated the noise level using (18). In the case of MAP (denoted as MP in the tables), we used the same prior given in Rasmussen (1996). The GPP approach is denoted as $G_P$ in the tables.

Table 1: Results on the robot arm data sets. Average of normalized test set error (TSE) and negative logarithm of predictive probability (NLPP) for the various methods.

            Data Set 1          Data Set 2
Method      TSE     NLPP        TSE     NLPP
ML          1.126   -1.512      1.131   -1.512
MP          1.131   -1.511      1.181   -1.489
G_P         1.115   -1.524      1.116   -1.516
CV          1.112   -1.518      1.146   -1.514
GE          1.111   -1.524      1.112   -1.524

Table 2: Results on Friedman's data sets. Average of scaled integral squared error and negative logarithm of predictive probability (given in brackets) for different training sample sizes and the various methods.

            Data Set 1                                Data Set 2
Method      N = 50        N = 100       N = 200       N = 50        N = 100       N = 200
ML          0.43 (7.24)   0.19 (6.71)   0.10 (6.49)   0.26 (1.05)   0.16 (0.82)   0.11 (0.68)
MP          0.42 (7.18)   0.22 (6.78)   0.12 (6.56)   0.25 (1.01)   0.16 (0.82)   0.11 (0.69)
G_P         0.47 (7.29)   0.20 (6.65)   0.10 (6.44)   0.33 (1.25)   0.20 (0.86)   0.12 (0.70)
CV          0.55 (7.27)   0.22 (6.67)   0.10 (6.44)   0.42 (1.36)   0.21 (0.91)   0.13 (0.70)
GE          0.35 (7.10)   0.15 (6.60)   0.08 (6.37)   0.28 (1.20)   0.18 (0.85)   0.12 (0.63)

For all these methods, the conjugate gradient (CG) algorithm (Rasmussen (1996)) was used to optimize the hyperparameters. The termination criterion (relative function error) with a tolerance of $10^{-7}$ was used, but with the maximum number of CG iterations limited to 100. In the case of the robot arm data sets, the algorithm was run with ten different initial conditions and the best solution (chosen from the respective best objective function values) is reported.
The optimization was carried out separately for the two outputs and the results reported are the average TSE and NLPP. In the case of Friedman's data sets, the optimization algorithm was run with three different initial conditions and the best solution was picked. When $N = 200$, the optimization algorithm was run with only one initial condition. For all the data sets, both the inputs and outputs were normalized to zero mean and unit variance. From Table 1, we see that the performances (both TSE and NLPP) of the predictive approaches are better than those of the ML and MAP approaches for both data sets. In the case of data set 2, we observed that, like the ML and MAP methods, all the predictive approaches correctly identified the irrelevant inputs. The performance of the GPE approach is the best on the robot arm data and demonstrates the usefulness of this approach when a good noise level estimate is available. In the case of Friedman's data set 1 (see Table 2), the important observation is that the performances (both ISE and NLPP) of the GPP and CV approaches are relatively poor at low sample size ($N = 50$) and improve considerably as $N$ increases. Note that the performances of the predictive approaches are better than those of the ML and MAP methods from $N = 100$ onwards (see NLPP). Again, GPE gives the best performance, and its performance at low sample size ($N = 50$) is also quite good. In the case of Friedman's data set 2, the ML and MAP approaches perform better than the predictive approaches except GPE. The performances of GPP and CV improve as $N$ increases and are very close to those of the ML and MAP methods when $N = 200$. Next, it is clear that the MAP method gives the best performance at low sample size. This behavior, we believe, is because the prior plays an important role there and hence is very useful. Also, note that unlike on data set 1, the performance of GPE is inferior to the ML and MAP approaches at low sample sizes and improves over these approaches (see NLPP) as $N$ increases. This suggests that knowledge of the noise level alone is not the only issue. The basic issue, we think, is that the predictive approaches estimate the predictive performance of a given model from the training samples. Clearly, the quality of this estimate becomes better as $N$ increases. Also, knowing the noise level improves the quality of the estimate.

4 Discussion

Simulation results indicate that the size $N$ required to get good estimates of predictive performance is problem dependent. When $N$ is sufficiently large, we find that the predictive approaches perform better than the ML and MAP approaches. The sufficient number of samples can be as low as 100, as is evident from our results on Friedman's data set 1. Also, the MAP approach is the best when $N$ is very low. As one would expect, the performances of the ML and MAP approaches become nearly the same as $N$ increases. The comparison with the existing approaches indicates that the predictive approaches developed here are strongly competitive. The overall cost of computing the function and the gradient (for all three predictive approaches) is $O(MN^3)$. The cost of making predictions is the same as that required for the ML and MAP methods. The proofs of the results and detailed simulation results will be presented in another paper (Sundararajan and Keerthi, 1999).

References

Friedman, J.H. (1991) Multivariate Adaptive Regression Splines, Ann. of Stat., 19, 1-141.
Geisser, S. (1975) The Predictive Sample Reuse Method with Applications, Journal of the American Statistical Association, 70, 320-328.
Geisser, S., and Eddy, W.F. (1979) A Predictive Approach to Model Selection, Journal of the American Statistical Association, 74, 153-160.
MacKay, D.J.C. (1997) Gaussian Processes - A Replacement for Neural Networks?, available in Postscript via URL http://www.wol.ra.phy.cam.ac.uk/mackay/.
Neal, R.M. (1996) Bayesian Learning for Neural Networks, New York: Springer-Verlag.
Neal, R.M. (1997) Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification. Tech. Rep. No. 9702, Dept. of Statistics, University of Toronto.
Rasmussen, C. (1996) Evaluation of Gaussian Processes and Other Methods for Non-Linear Regression, Ph.D. Thesis, Dept. of Computer Science, University of Toronto.
Stone, M. (1974) Cross-Validatory Choice and Assessment of Statistical Predictions (with discussion), Journal of the Royal Statistical Society, Ser. B, 36, 111-147.
Sundararajan, S., and Keerthi, S.S. (1999) Predictive Approaches for Choosing Hyperparameters in Gaussian Processes, submitted to Neural Computation, available at: http://guppy.mpe.nus.edu.sg/~mpessk/gp/gp.html.
Williams, C.K.I., and Rasmussen, C.E. (1996) Gaussian Processes for Regression. In Advances in Neural Information Processing Systems 8, ed. by D.S. Touretzky, M.C. Mozer, and M.E. Hasselmo. MIT Press.
Robust Neural Network Regression for Offline and Online Learning

Thomas Briegel
Siemens AG, Corporate Technology
D-81730 Munich, Germany
thomas.briegel@mchp.siemens.de

Volker Tresp
Siemens AG, Corporate Technology
D-81730 Munich, Germany
volker.tresp@mchp.siemens.de

Abstract

We replace the commonly used Gaussian noise model in nonlinear regression by a more flexible noise model based on the Student-t-distribution. The degrees of freedom of the t-distribution can be chosen such that as special cases either the Gaussian distribution or the Cauchy distribution is realized. The latter is commonly used in robust regression. Since the t-distribution can be interpreted as being an infinite mixture of Gaussians, parameters and hyperparameters such as the degrees of freedom of the t-distribution can be learned from the data based on an EM-learning algorithm. We show that modeling using the t-distribution leads to improved predictors on real world data sets. In particular, if outliers are present, the t-distribution is superior to the Gaussian noise model. In effect, by adapting the degrees of freedom, the system can "learn" to distinguish between outliers and non-outliers. Especially for online learning tasks, one is interested in avoiding inappropriate weight changes due to measurement outliers to maintain stable online learning capability. We show experimentally that using the t-distribution as a noise model leads to stable online learning algorithms and outperforms state-of-the-art online learning methods like the extended Kalman filter algorithm.

1 INTRODUCTION

A commonly used assumption in nonlinear regression is that targets are disturbed by independent additive Gaussian noise. Although one can derive the Gaussian noise assumption from a maximum entropy approach, the main reason for this assumption is practicability: under the Gaussian noise assumption the maximum likelihood parameter estimate can simply be found by minimization of the squared error. Despite its common use, it is far from clear that the Gaussian noise assumption is a good choice for many practical problems. A reasonable approach therefore would be a noise distribution which contains the Gaussian as a special case but which has a tunable parameter that allows for more flexible distributions. In this paper we use the Student-t-distribution as a noise model, which contains two free parameters: the degrees of freedom $\nu$ and a width parameter $\sigma^2$. A nice feature of the t-distribution is that if the degrees of freedom $\nu$ approach infinity, we recover the Gaussian noise model. If $\nu < \infty$ we obtain distributions which are more heavy-tailed than the Gaussian distribution, including the Cauchy noise model with $\nu = 1$. The latter is commonly used for robust regression. The first goal of this paper is to investigate whether the additional free parameters, e.g. $\nu$, lead to better generalization performance on real world data sets when compared to the Gaussian noise assumption with $\nu = \infty$. The most common reason why researchers depart from the Gaussian noise assumption is the presence of outliers. Outliers are errors which occur with low probability and which are not generated by the data-generation process that is subject to identification. The general problem is that a few (maybe even one) outliers of high leverage are sufficient to throw the standard Gaussian error estimators completely off-track (Rousseeuw & Leroy, 1987). In the second set of experiments we therefore compare how the generalization performance is affected by outliers, both under the Gaussian noise assumption and under the t-distribution assumption. Dealing with outliers is often of critical importance for online learning tasks. Online learning is of great interest in many applications exhibiting non-stationary behavior like tracking, signal and image processing, or navigation and fault detection (see, for instance, the NIPS*98 Sequential Learning Workshop). Here one is interested in avoiding inappropriate weight changes due to measurement outliers to maintain stable online learning capability. Outliers might result in highly fluctuating weights and possibly even instability when estimating the neural network weight vector online using a Gaussian error assumption. State-of-the-art online algorithms like the extended Kalman filter, for instance, are known to be nonrobust against such outliers (Meinhold & Singpurwalla, 1989) since they are based on a Gaussian output error assumption.

The paper is organized as follows. In Section 2 we adopt a probabilistic view of outlier detection by taking as a heavy-tailed observation error density the Student-t-distribution, which can be derived from an infinite-mixture-of-Gaussians approach. In our work we use the multi-layer perceptron (MLP) as the nonlinear model. In Section 3 we derive an EM algorithm for estimating the MLP weight vector and the hyperparameters offline. Employing a state-space representation to model the MLP's weight evolution in time, we extend the batch algorithm of Section 3 to the online learning case (Section 4). The application of the computationally efficient Fisher scoring algorithm leads to posterior mode weight updates and an online EM-type algorithm for approximate maximum likelihood (ML) estimation of the hyperparameters. In the last two sections (Section 5 and Section 6) we present experiments and conclusions, respectively.

2 THE t-DENSITY AS A ROBUST ERROR DENSITY

We assume a nonlinear regression model where for the $t$-th data point the noisy target $y_t \in \mathbb{R}$ is generated as

$y_t = g(x_t; w_t) + v_t$   (1)

and $x_t \in \mathbb{R}^k$ is a $k$-dimensional known input vector. $g(\cdot; w_t)$ denotes a neural network model characterized by weight vector $w_t \in \mathbb{R}^n$, in our case a multi-layer perceptron (MLP). In the offline case the weight vector $w_t$ is assumed to be a fixed unknown constant vector, i.e. $w_t \equiv w$. Furthermore, we assume that $v_t$ is uncorrelated noise with density $p_{v_t}(\cdot)$. In the offline case, we assume $p_{v_t}(\cdot)$ to be independent of $t$, i.e. $p_{v_t}(\cdot) \equiv p_v(\cdot)$. In the following we assume that $p_v(\cdot)$ is a Student-t-density with $\nu$ degrees of freedom,

$p_v(z) = T(z|\sigma^2,\nu) = \frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})\sqrt{\pi\nu}\,\sigma}\Big(1 + \frac{z^2}{\nu\sigma^2}\Big)^{-\frac{\nu+1}{2}}, \qquad \nu, \sigma > 0.$   (2)

It is immediately apparent that for $\nu = 1$ we recover the heavy-tailed Cauchy density. What is not so obvious is that for $\nu \to \infty$ we obtain a Gaussian density. For the derivation of the EM-learning rules in the next section it is important to note that the t-density can be thought of as being an infinite mixture of Gaussians of the form

$T(z|\sigma^2,\nu) = \int_0^{\infty} N(z|0,\sigma^2/u)\,p(u)\,du$   (3)

where $T(z|\sigma^2,\nu)$ is the Student-t-density with $\nu$ degrees of freedom and width parameter $\sigma^2$, $N(z|0,\sigma^2/u)$ is a Gaussian density with center $0$ and variance $\sigma^2/u$, and $u \sim \chi^2_\nu/\nu$, where $\chi^2_\nu$ is a Chi-square distribution with $\nu$ degrees of freedom evaluated at $u > 0$.
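The mixture form (3) also gives a direct sampling recipe, sketched below for illustration (our own code, arbitrary parameter values): draw $u \sim \chi^2_\nu/\nu$, then $z \sim N(0, \sigma^2/u)$.

```python
import numpy as np

def sample_t_noise(nu, sigma2, size, seed=0):
    # Scale mixture of Eq. (3): u ~ chi^2_nu / nu, then z | u ~ N(0, sigma2/u).
    rng = np.random.default_rng(seed)
    u = rng.chisquare(nu, size) / nu
    return rng.normal(0.0, np.sqrt(sigma2 / u))

z = sample_t_noise(nu=3.0, sigma2=1.0, size=10000)
# Small nu gives heavy tails; as nu grows the samples approach N(0, sigma2).
```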
In the second set of experiments we therefore compare how the generalization performance is affected by outliers, both for the Gaussian noise assumption and for the t-distribution assumption. Dealing with outliers is often of critical importance for online learning tasks. Online learning is of great interest in many applications exhibiting non-stationary behavior like tracking, signal and image processing, or navigation and fault detection (see, for instance the NIPS*98 Sequential Learning Workshop). Here one is interested in avoiding inappropriate weight chances due to measurement outliers to maintain stable online learning capability. Outliers might result in highly fluctuating weights and possible even instability when estimating the neural network weight vector online using a Gaussian error assumption. State-of-the art online algorithms like the extended Kalman filter, for instance, are known to be nonrobust against such outliers (Meinhold & Singpurwalla, 1989) since they are based on a Gaussian output error assumption. The paper is organized as follows. In Section 2 we adopt a probabilistic view to outlier detection by taking as a heavy-tailed observation error density the Student-t-distribution which can be derived from an infinite mixture of Gaussians approach. In our work we use the multi-layer perceptron (MLP) as nonlinear model. In Section 3 we derive an EM algorithm for estimating the MLP weight vector and the hyperparameters offline. Employing a state-space representation to model the MLP's weight evolution in time we extend the batch algorithm of Section 3 to the online learning case (Section 4). The application of the computationally efficient Fisher scoring algorithm leads to posterior mode weight updates and an online EM-type algorithm for approximate maximum likelihood (ML) estimation of the hyperparameters. In in the last two sections (Section 5 and Section 6) we present experiments and conclusions, respectively. 2 THE t-DENSITY AS A ROBUST ERROR DENSITY We assume a nonlinear regression model where for the t-th data point the noisy target Yt E R is generated as (1) and Xt E Rk is a k-dimensional known input vector. g(.;Wt) denotes a neural network model characterized by weight vector Wt E R n , in our case a multi-layer perceptron (MLP). In the offline case the weight vector Wt is assumed to be a fixed unknown constant vector, i.e. Wt == w. Furthermore, we assume that Vt is uncorrelated noise with density Pv, (.). In the offline case, we assume Pv, (.) to be independent of t, i.e. Pv, (.) == Pv (.). In the following we assume that Pv (.) is a Student-t-density with v degrees of freedom with Pv(z)=T(zI0-2,v)= r(!?l) z2-~ y'7W2qv)(1+-2-) 0- 1W 2" 0- V , v,o->O. (2) It is immediately apparent that for v = 1 we recover the heavy-tailed Cauchy density. What is not so obvious is that for v -t 00 we obtain a Gaussian density. For the derivation of the EM-learning rules in the next section it is important to note that the t-denstiy can be thought of as being an infinite mixture of Gaussians of the form (3) 409 Robust Neural Network Regression for Offline and Online Learning Boeton Houaing 3 ". ::-: ;,-_.__.-.... . ... ~ . with addtttve oult.... v-T5 . ,';..,..................... .... 2 ct.,_ 1.2r-~-~--"-.--~----..." , ..~ 1. 1 - .-.- .. . liJO.9 0 .8 -1 . _ -._._ ..... _ .. __ ..... . _ ?? _" - 2 0 .7 ... ~,:,. - .....~- ......-... ...... ... ;:~,' -3 , -e -4 -2 ?z 2 4 nu_ 0.5'----:---5~-7;;IO:-----:15~---;20~--::;!25 " Of 0UIIIen ('K.] 
Figure 1: Left: $\phi(\cdot)$-functions for the Gaussian density (dashed) and t-densities with $\nu = 1, 4, 15$ degrees of freedom. Right: MSE on the Boston Housing test set for additive outliers. The dashed line shows results using the Gaussian error measure and the continuous line shows the results using the Student-t-distribution as error measure.

To compare different noise models it is useful to evaluate the "$\phi$-function" defined as (Huber, 1964)

$\phi(z) = -\partial\log p_v(z)/\partial z$   (4)

i.e. the negative score function of the noise density. In the case of i.i.d. samples the $\phi$-function reflects the influence of a single measurement on the resulting estimator. Assuming Gaussian measurement errors $p_v(z) = N(z|0,\sigma^2)$ we derive $\phi(z) = z/\sigma^2$, which means that for $|z| \to \infty$ a single outlier $z$ can have an infinite leverage on the estimator. In contrast, for constructing robust estimators, West (1981) states that large outliers should not have any influence on the estimator, i.e. $\phi(z) \to 0$ for $|z| \to \infty$. Figure 1 (left) shows $\phi(z)$ for different $\nu$ for the Student-t-distribution. It can be seen that the degrees of freedom $\nu$ determine how much weight outliers obtain in influencing the regression. In particular, for finite $\nu$, the influence of outliers with $|z| \to \infty$ approaches zero.

3 ROBUST OFFLINE REGRESSION

As stated in Equation (3), the t-density can be thought of as being generated by an infinite mixture of Gaussians. Maximum likelihood adaptation of parameters and hyperparameters can therefore be performed using an EM algorithm (Lange et al., 1989). For the $t$-th sample, a complete data point would consist of the triple $(x_t, y_t, u_t)$, of which only the first two are known and $u_t$ is missing. In the E-step we estimate, for every data point indexed by $t$,

$\alpha_t = E[u_t|y_t, x_t] = \frac{\nu^{old} + 1}{\nu^{old} + \delta_t}$   (5)

where $\alpha_t$ is the expected value of the unknown $u_t$ given the available data $(x_t, y_t)$, and where $\delta_t = (y_t - g(x_t; w^{old}))^2/\sigma^{2,old}$. In the M-step the weights $w$ and the hyperparameters $\sigma^2$ and $\nu$ are optimized using

$w^{new} = \arg\min_w \Big\{\sum_{t=1}^{T}\alpha_t\,(y_t - g(x_t; w))^2\Big\}$   (6)
, w~) T is up to a normalizing constant -logp(WTI'D) ex: and can be used as the appropriate cost function to derive the posterior mode estimate W?AP for the weight sequence. The two differences to the presentation in the last section are that first, Wt is allowed to change over time and that second, penalty terms, stemming from the prior and the transition density, are included. The penalty terms are penalizing roughness of the weight sequence leading to smooth weight estimates. A suitable way to determine a stationary point of -logp(WTI'D), the posterior mode estimate of W T , is to apply Fisher scoring. With the current estimate W T1d we get a better estimate wTew = wTld +171' for the unknown weight sequence WT where 'Y is the solution of (12) 1 with the negative score function S(WT) = -8logp(WT 'D)/8WT and the expected information matrix S(WT) = E[8 2 10gp(WTI'D)/8WT 8WT ]. By applying the ideas given in Fahrmeir & Kaufmann (1991) to robust neural network regression it turns out that solving (12), i.e. to compute the inverse of the expected information matrix, can be performed by Robust Neural Network Regression/or Offline and Online Learning 411 Cholesky decomposition in one forward and backward pass through the set of data 'D. Note that the expected information matrix is a positive definite block-tridiagonal matrix. The forward-backward steps have to be iterated to obtain the posterior mode estimate W.pAP for WT. For online posterior mode smoothing, it is of interest to smooth backwards after each filter step t. If Fisher scoring steps are applied sequentially for t = 1,2, ... , then the posterior mode smoother at time-step t - 1, wl~~~P = (W~t-l"'" wi-I lt-l) T together with the step-one predictor Wtlt-l = Wt-I lt-l is a reasonable starting value for obtaining the posterior mode smoother WtMAP at time t. One can reduce the computational load by limiting the backward pass to a sliding time window, e.g. the last Tt time steps, which is reasonable in non-stationary environments for online purposes. Furthermore, if we use the underlying assumption that in most cases a new measurement Yt should not change estimates too drastically then a single Fisher scoring step often suffices to obtain the new posterior mode estimate at time t. The resulting single Fisher scoring step algorithm with lookback parameter Tt has in fact just one additional line of code involving simple matrix manipulations compared to online Kalman smoothing and is given here in pseudo-code. Details about the algorithm and a full description can be found in Briegel & Tresp (1999). Online single Fisher scoring step algorithm (pseudo-code) for t = 1,2, ... repeat the following four steps: ? Evaluate the step-one predictor Wt lt-l. ? Perform the forward recursions for s ? New data point (Xt, =t - Tt, ... , t. yd arrives: evaluate the corrector step Wtlt. ? Perform the backward smoothing recursions ws-Ilt for s = t, ... , t - Tt. For the adaptation of the parameters in the t-distribution, we apply results from Fahrmeir & Kunstler (1999) to our nonlinear assumptions and use an online EM-type algorithm for approximate maximum likelihood estimation of the h yperparameters lit and (7F. We assume the scale factors (7F and the degrees of freedom lit being fixed quantities in a certain time window of length ft, e.g. (7F = (72, lit = 11, t E {t - ft, t}. For deriving online EM update equations we treat the weight sequence Wt together with the mixing variables Ut as missing. 
By linear Taylor series expansion of g(.; w s ) about the Fisher scoring solutions Wslt and by approximating posterior expectations E[w s I'D] with posterior modes Wslt, S E {t - ft, t} and posterior covariances cov[w s I'D] with curvatures :Eslt = E[(Ws - Wslt) (ws - Wslt) T I'D] in the E-step, a somewhat lengthy derivation results in approximate maximum likelihood update rules for (72 and 11 similar to those given in Section 3. Details about the online EM-type algorithm can be found in Briegel & Tresp (1999). 5 EXPERIMENTS 1. Experiment: Real World Data Sets. In the first experiment we tested if the Studentt-distribution is a useful error measure for real-world data sets. In training, the Studentt-distribution was used and both, the degrees of freedom 11 and the width parameter (72 were adapted using the EM update rules from Section 3. Each experiment was repeated 50 times with different divisions into training and test data. As a comparison we trained the neural networks to minimize the squared error cost function (including an optimized weight decay term). On the test data set we evaluated the performance using a squared error cost function. Table 1 provides some experimental parameters and gives the test set performance based on the 50 repetitions of the experiments. The additional explained variance is defined as [in percent] 100 x (1 - MSPE, IMSPEN) where MSPE, is the mean squared prediction error using the t-distribution and MSPEN is the mean squared prediction error using the Gaussian error measure. Furthermore we supply the standard T. Briegel and V. Tresp 412 Table I: Experimental parameters and test set performance on real world data sets. Data Set Boston Housing Sunspot Fraser River I # Inputs/Hidden I Training I Test I Add.Exp.Var. [%] I Std. [%] I (13/6) (1217) (1217) 400 221 600 106 47 334 4.2 5.3 5.4 0.93 0.67 0.75 error based on the 50 experiments. In all three experiments the networks optimized with the t-distribution as noise model were 4-5% better than the networks optimized using the Gaussian as noise model and in all experiments the improvements were significant based on the paired t-test with a significance level of 1%. The results show clearly that the additional free parameter in the Student-t-distribution does not lead to overfitting but is used in a sensible way by the system to value down the influence of extreme target values. Figure 2 shows the normal probability plots. Clearly visible is the derivation from the Gaussian distribution for extreme target values. We also like to remark that we did not apply any preselection process in choosing the particular data sets which indicates that non-Gaussian noise seems to be the rule rather than the exception for real world data sets. :: 0.99 0." ~095 1090 1075 i oso i0 25 1010 ; 005 002 00\ 0003 000' ''. _0, ~--=" -<>5o-----O0- ---,07' - -',---' ,e~"""MWlgWllllh~.. -8fTttdenllly Figure 2: Normal probability plots of the three training data sets after learning with the Gaussian error measure. The dashed line show the expected normal probabilities. The plots show clearly that the residuals follow a more heavy-tailed distribution than the normal distribution. 2. Experiment: Outliers. In the second experiment we wanted to test how our approach deals with outliers which are artificially added to the data set. We started with the Boston housing data set and divided it into training and test data. 
We then randomly selected a subset of the training data set (between 0.5% and 25%) and added to the targets a uniformly generated real number in the interval [-5,5]. Figure I (right) shows the mean squared error on the test set for different percentages of added outliers. The error bars are derived from 20 repetitions of the experiment with different divisions into training and test set. It is apparent that the approach using the t-distribution is consistently better than the network which was trained based on a Gaussian noise assumption. 3. Experiment: Online Learning. In the third experiment we examined the use of the t-distribution in online learning. Data were generated from a nonlinear map y = 0.6X2 + bsin(6x) - 1 where b = -0.75, -0.4, -0.1,0.25 for the first, second, third and fourth set of 150 data points, respectively. Gaussian noise with variance 0.2 was added and for training, a MLP with 4 hidden units was used. In the first experiment we compare the performance of the EKF algorithm with our single Fisher scoring step algorithm. Figure 3 (left) shows that our algorithm converges faster to the correct map and also handles the transition in the model (parameter b) much better than the EKE In the second experiment with a probability of 10% outliers uniformly drawn from the interval [-5,5] were added to the targets. Figure 3 (middle) shows that the single Fisher scoring step algorithm using the 413 Robust Neural Network Regression/or Offline and Online Learning t-distribution is consistently better than the same algorithm using a Gaussian noise model and the EKE The two plots on the right in Figure 3 compare the nonlinear maps learned after 150 and 600 time steps, respectively. 0.5 '. . w ~ ! 1~' IO~O~-C'00::---=200~-=311l;--::::""'--5;:;;1Il-----:!1Dl r",. 10 0 100 2CIO 300 r",. 400 5CXl all Figure 3: Left & Middle: Online MSE over each of the 4 sets of training data. On the left we compare extended Kalman filtering (EKF) (dashed) with the single Fisher scoring step algorithm with Tt = 10 (GFS-lO) (continuous) for additive Gaussian noise. The second figure shows EKF (dashed-dotted), Fisher scoring with Gaussian error noise (GFS10) (dashed) and t-distributed error noise (TFS-l 0) (continuous), respectively for data with additive outliers. Right: True map (continuous), EKF learned map (dashed-dotted) and TFS-I0 map (dashed) after T = 150 and T = 600 (data sets with additive outliers). 6 CONCLUSIONS We have introduced the Student-t-distribution to replace the standard Gaussian noise assumption in nonlinear regression. Learning is based on an EM algorithm which estimates both the scaling parameters and the degrees of freedom of the t-distribution. Our results show that using the Student-t-distribution as noise model leads to 4-5% better test errors than using the Gaussian noise assumption on real world data set. This result seems to indicate that non-Gaussian noise is the rule rather than the exception and that extreme target values should in general be weighted down. Dealing with outliers is particularly important for online tasks in which outliers can lead to instability in the adaptation process. We introduced a new online learning algorithm using the t-distribution which leads to better and more stable results if compared to the extended Kalman filter. References Briegel, T. and Tresp, V. (1999) Dynamic Neural Regression Models, Discussion Paper, Seminar flir Statistik, Ludwig Maximilians Universitat Milnchen. de Freitas, N., Doucet, A. and Niranjan, M. 
(1998) Sequential Inference and Learning, NIPS*98 Workshop, Breckenridge, CO. Fahrmeir, L. and Kaufmann, H. (1991) On Kalman Filtering, Posterior Mode Estimation and Fisher Scoring in Dynamic Exponential Family Regression, Metrika 38, pp. 37-60. Fahrmeir, L. and Kilnstler, R. (1999) Penalized Likelihood smoothing in robust state space models, Metrika 49, pp. 173-191. Huber, p.r. (1964) Robust Estimation of Location Parameter, Annals of Mathematical Statistics 35, pp.73-101. Lange, K., Little, L., Taylor, J. (989) Robust Statistical Modeling Using the t-Distribution, JASA 84, pp. 881-8%. Meinhold, R. and SingpurwaIla, N. (1989) Robustification of Kalman Filter Models, JASA 84, pp. 470-496. Rousseeuw, P. and Leroy, A. (1987) Robust Regression and Outlier Detection, John Wiley & Sons. West, M. (1981) Robust Sequential Approximate Bayesian Estimation, JRSS B 43, pp. 157-166.
Understanding Stepwise Generalization of Support Vector Machines: a Toy Model

Sebastian Risau-Gusman and Mirta B. Gordon
DRFMC/SPSMS CEA Grenoble, 17 av. des Martyrs
38054 Grenoble cedex 09, France

Abstract

In this article we study the effects of introducing structure in the input distribution of the data to be learnt by a simple perceptron. We determine the learning curves within the framework of Statistical Mechanics. Stepwise generalization occurs as a function of the number of examples when the distribution of patterns is highly anisotropic. Although extremely simple, the model seems to capture the relevant features of a class of Support Vector Machines which was recently shown to present this behavior.

1 Introduction

A new approach to learning has recently been proposed as an alternative to feedforward neural networks: the Support Vector Machines (SVM) [1]. Instead of trying to learn a non-linear mapping between the input patterns and internal representations, like in multilayered perceptrons, the SVMs choose a priori a non-linear kernel that transforms the input space into a high dimensional feature space. In binary classification tasks like those considered in the present paper, the SVMs look for linear separation with optimal margin in feature space. The main advantage of SVMs is that learning becomes a convex optimization problem. The difficulty of having many local minima, which hinders the training of multilayered neural networks, is thus avoided. One of the questions raised by this approach is why SVMs do not overfit the data in spite of the extremely large dimensions of the feature spaces considered. Two recent theoretical papers [2, 3] studied a family of SVMs with the tools of Statistical Mechanics, predicting typical properties in the limit of large dimensional spaces. Both papers considered mappings generated by polynomial kernels, and more specifically quadratic ones. In these, the input vectors $x \in \mathbb{R}^N$ are transformed to $N(N+1)/2$-dimensional feature vectors $\Phi(x)$. More precisely, the mapping $\Phi_1(x) = (x,\, x_1 x,\, x_2 x,\, \ldots,\, x_k x)$ has been studied in [3] as a function of $k$, the number of quadratic features, and $\Phi_2(x) = (x,\, x_1 x/N,\, x_2 x/N,\, \ldots,\, x_N x/N)$ has been considered in [2], leading to different results. These mappings are particular cases of quadratic kernels. In particular, in the case of learning quadratically separable tasks with mapping $\Phi_2$, the generalization error decreases up to a lower bound for a number of examples proportional to $N$, followed by a further decrease if the number of examples increases proportionally to the dimension of the feature space, i.e. to $N^2$.
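As a concrete illustration (our own sketch, not the authors' code), the two mappings read:

```python
import numpy as np

def phi_1(x, k):
    # Phi_1(x) = (x, x_1*x, ..., x_k*x): k unscaled quadratic blocks.
    return np.concatenate([x] + [x[i] * x for i in range(k)])

def phi_2(x):
    # Phi_2(x) = (x, x_1*x/N, ..., x_N*x/N): all quadratic blocks,
    # compressed by the common factor 1/N.
    N = len(x)
    return np.concatenate([x] + [x[i] * x / N for i in range(N)])
```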
To this end, it is worth remarking that the features Φ₂ may be obtained by compressing the quadratic subspace of Φ₁ by a fixed factor. In order to mimic this contraction, we consider a linearly separable task in which the input patterns have a highly anisotropic distribution, so that the variance in one subspace is much smaller than in the orthogonal directions. We show that in this simple toy model, the generalization error as a function of the training set size exhibits a cross-over between two different behaviors: a rapid decrease corresponding to learning the components in the uncompressed space, followed by a slow improvement in which mainly the components in the compressed space are learnt. The latter would correspond, in this highly stylized model, to learning the scaled quadratic features in the SVM with mapping Φ₂. The paper is organized as follows: after a short presentation of the model, we describe the main steps of the Statistical Mechanics calculation. The order parameters characterizing the properties of the learning process are defined, and their evolution as a function of the training set size is analyzed. The two regimes of the generalization error are described, and we determine the training set size per input dimension at the crossover, as a function of the pertinent parameters. Finally we discuss our results, and their relevance to the understanding of the generalization properties of SVMs.

2 The model

We consider the problem of learning a binary classification task from examples. The training data set 𝒟_α contains P = αN N-dimensional patterns (ξ^μ, τ^μ) (μ = 1, …, P), where τ^μ = sign(ξ^μ · w*) is given by a teacher of weights w* = (w₁*, w₂*, …, w_N*). Without any loss of generality we consider normalized teachers: w* · w* = N. We assume that the components ξ_i (i = 1, …, N) of the input patterns are independent, identically distributed random variables drawn from a zero-mean Gaussian distribution, with variance σ² along N_c directions and unit variance in the N_u remaining ones (N_c + N_u = N):

P(\xi) = \prod_{i \in N_c} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\xi_i^2}{2\sigma^2}\right) \prod_{i \in N_u} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{\xi_i^2}{2}\right). \tag{1}

We take σ < 1 without any loss of generality, as the case σ > 1 may be deduced from the former through a straightforward rescaling of N_c and N_u. Hereafter, the subspace of dimension N_c and variance σ² will be called the compressed subspace. The corresponding orthogonal subspace, of dimension N_u = N − N_c, will be called the uncompressed subspace. We study the typical generalization error of a student perceptron learning the classification task, using the tools of Statistical Mechanics. The pertinent cost function is the number of misclassified patterns:

E(w; \mathcal{D}_\alpha) = \sum_{\mu=1}^{P} \Theta(-\tau^\mu\, \xi^\mu \cdot w), \tag{2}

The weight vectors in version space correspond to a vanishing cost (2). Choosing a w at random from the a posteriori distribution

P(w \mid \mathcal{D}_\alpha) = Z^{-1} P_0(w) \exp(-\beta E(w; \mathcal{D}_\alpha)), \tag{3}

in the limit of β → ∞ is called Gibbs learning. In eq. (3), β is equivalent to an inverse temperature in the Statistical Mechanics formulation, the cost (2) being the energy function. We assume that P₀, the a priori distribution of the weights, is uniform on the hypersphere of radius √N:

P_0(w) = (2\pi e)^{-N/2}\, \delta(w \cdot w - N). \tag{4}

The normalization constant (2πe)^{N/2} is the leading-order term of the hypersphere's surface in N-dimensional space. Z is the partition function ensuring the correct normalization of P(w|𝒟_α):

Z(\beta; \mathcal{D}_\alpha) = \int dw\, P_0(w) \exp(-\beta E(w; \mathcal{D}_\alpha)). \tag{5}
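The data model of eqs. (1)-(2) is easy to simulate directly; a minimal numpy sketch (all names illustrative), drawing anisotropic patterns and labeling them with a normalized teacher:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patterns(P, N, n_c, sigma):
    """Eq. (1): variance sigma**2 on the first Nc = n_c*N (compressed) axes,
    unit variance on the remaining Nu axes."""
    xi = rng.standard_normal((P, N))
    xi[:, : int(n_c * N)] *= sigma
    return xi

def training_error(w, xi, tau):
    """Eq. (2): number of misclassified patterns."""
    return int(np.sum(tau * (xi @ w) <= 0))

N, n_c, sigma = 200, 0.9, 0.1
w_star = rng.standard_normal(N)
w_star *= np.sqrt(N) / np.linalg.norm(w_star)   # teacher norm: w*.w* = N
xi = sample_patterns(1000, N, n_c, sigma)
tau = np.sign(xi @ w_star)                       # teacher labels
print(training_error(w_star, xi, tau))           # 0: the teacher is in version space
```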
In general, the properties of the student are related to those of the free energy F(β; 𝒟_α) = −ln Z(β; 𝒟_α)/β. In the limit N → ∞ with the training set size per input dimension α ≡ P/N constant, the properties of the student weights become independent of the particular training set 𝒟_α. They are deduced from the averaged free energy per degree of freedom, calculated using the replica trick:

f = -\lim_{N\to\infty} \frac{1}{\beta N}\, \overline{\ln Z} = -\lim_{N\to\infty} \lim_{n\to 0} \frac{\overline{Z^n} - 1}{\beta N n}, \tag{6}

where the overline represents the average over 𝒟_α, composed of patterns selected according to (1). In the case of Gibbs learning, the typical behavior of any intensive quantity is obtained in the zero-temperature limit β → ∞. In this limit, only error-free solutions, with vanishing cost, have non-vanishing posterior probability (3). Thus, Gibbs learning corresponds to picking at random a student in version space, i.e. a vector w that classifies correctly the training set 𝒟_α, with a probability proportional to P₀(w). In the case of an isotropic pattern distribution, which corresponds to σ = 1 in (1), the properties of cost function (2) have been extensively studied [5]. The cases of patterns drawn from two Gaussian clusters in which the symmetry axis of the clusters is the same [6] and different [7] from the teacher's axis have recently been addressed. Here we consider the problem where, instead of having a single direction along which the patterns' distribution is contracted (or expanded), there is a finite fraction of compressed dimensions. In this case, all the properties of the student's perceptron may be expressed in terms of the following order parameters, which have to satisfy the corresponding extremum conditions of the free energy:

q_c^{ab} = \frac{1}{N} \Big\langle \sum_{i \in N_c} w_i^a w_i^b \Big\rangle \tag{7}

q_u^{ab} = \frac{1}{N} \Big\langle \sum_{i \in N_u} w_i^a w_i^b \Big\rangle \tag{8}

R_c^{a} = \frac{1}{N} \Big\langle \sum_{i \in N_c} w_i^a w_i^* \Big\rangle \tag{9}

R_u^{a} = \frac{1}{N} \Big\langle \sum_{i \in N_u} w_i^a w_i^* \Big\rangle \tag{10}

Q^{a} = \frac{1}{N} \Big\langle \sum_{i \in N_c} (w_i^a)^2 \Big\rangle \tag{11}

where ⟨···⟩ indicates the average over the posterior (3); a, b are replica indices, and the subscripts c and u stand for compressed and uncompressed respectively. Notice that we do not impose that Q^a, the typical squared norm of the student's components in the compressed subspace, be equal to the corresponding teacher's norm Q* = (1/N) Σ_{i∈N_c} (w_i*)².

3 Order parameters and learning curves

Assuming that the order parameters are invariant under permutation of replicas, we can drop the replica indices in equations (7) to (11). We expect that this hypothesis of replica symmetry is consistent, like it is in other cases of perceptrons learning realizable tasks. The problem is thus reduced to the determination of five order parameters. Their meaning becomes clearer if we consider the following combinations:

\hat q_c = \frac{q_c}{Q}, \tag{12}

\hat q_u = \frac{q_u}{1 - Q}, \tag{13}

\hat R_c = \frac{R_c}{\sqrt{Q Q^*}}, \tag{14}

\hat R_u = \frac{R_u}{\sqrt{(1 - Q)(1 - Q^*)}}, \tag{15}

Q = \frac{1}{N} \Big\langle \sum_{i \in N_c} (w_i)^2 \Big\rangle. \tag{16}

q̂_c and q̂_u are the typical overlaps between the components of two student vectors in the compressed and the uncompressed subspaces respectively. Similarly, R̂_c and R̂_u are the corresponding overlaps between a typical student and the teacher. In terms of this set of parameters, the typical generalization error is ε_g = (1/π) arccos R, with

R = \frac{\sigma^2 \hat R_c \sqrt{Q Q^*} + \hat R_u \sqrt{(1 - Q)(1 - Q^*)}}{\sqrt{\sigma^2 Q + (1 - Q)}\; \sqrt{\sigma^2 Q^* + (1 - Q^*)}}. \tag{17}

Given α, the general solution to the extremum conditions depends on the three parameters of the problem, namely σ, Q* and n_c ≡ N_c/N. An interesting case is the one where the teacher's anisotropy is consistent with that of the patterns' distribution, i.e. Q* = n_c. In this case, it is easy to show that Q = Q*, q̂_c = R̂_c and q̂_u = R̂_u.
Thus,

R = \frac{n_u \hat R_u + \sigma^2 n_c \hat R_c}{n_u + \sigma^2 n_c}, \tag{18}

where n_u ≡ N_u/N, and R̂_c and R̂_u are given by the following equations:

\frac{\hat R_c}{1 - \hat R_c} = \frac{\sigma^2}{\sigma^2 n_c + n_u}\, \frac{\alpha}{\pi \sqrt{1 - R}} \int Dt\, \frac{\exp(-R t^2/2)}{H(t\sqrt{R})}, \tag{19}

\frac{\hat R_u}{1 - \hat R_u} = \frac{1}{\sigma^2 n_c + n_u}\, \frac{\alpha}{\pi \sqrt{1 - R}} \int Dt\, \frac{\exp(-R t^2/2)}{H(t\sqrt{R})}, \tag{20}

where Dt = dt e^{−t²/2}/√(2π) and H(x) = ∫_x^∞ Dt. If σ² = 1, we recover the equations corresponding to Gibbs learning of isotropic pattern distributions [5].

[Figure 1: Order parameters and generalization error for the case Q* = n_c = 0.9, σ² = 10⁻². The curves for the case of spherically distributed patterns are shown for comparison. The inset shows the first step of learning and its plateau (see text).]

The order parameters are represented as a function of α in Figure 1, for a particular choice of n_c and σ. R̂_u grows much faster than R̂_c, meaning that it is easier to learn the components of the uncompressed space. As a result, R (and therefore the generalization error ε_g) presents a cross-over between two behaviors. At small α, both R̂_u ≪ 1 and R̂_c ≪ 1, so that R(α, σ²) = R_G(α(n_u + σ⁴n_c)/(n_u + σ²n_c)²), where R_G is the overlap for Gibbs learning with an isotropic (σ² = 1) distribution [5]. Learning the anisotropic distribution is faster (in α) than learning the isotropic one. If σ ≪ 1 the anisotropy is very large and R increases like R_G but with an effective training set size per input dimension ∼ α/n_u > α. On increasing α, there is an intermediate regime in which R̂_u increases but R̂_c ≪ 1, so that R ≈ R̂_u n_u/(n_u + σ²n_c). The corresponding generalization error seems to reach a plateau corresponding to R̂_u = 1 and R̂_c = 0. At α ≫ 1, R(α, σ²) ≈ R_G(α): the asymptotic behavior is independent of the details of the distribution, like in [7]. The crossover between these two regimes, when σ ≪ 1, occurs at α₀ ≃ √2 (n_u + σ²n_c)/(σ²n_c).

The cases Q* = 1 and Q* = 0 are also of interest. Q* = 1 corresponds to a teacher having all the weight components in the compressed subspace, whereas Q* = 0 corresponds to a teacher orthogonal to the compressed subspace, i.e. with all the components in the uncompressed subspace. They correspond respectively to tasks where either the uncompressed or the compressed components are irrelevant for the patterns' classification.

[Figure 2: Generalization errors as a function of α for different teachers (Q* = 1, Q* = 0.9 and Q* = 0), for the case n_c = 0.9 and σ² = 10⁻². The curve for spherically distributed patterns [5] is included for comparison. The inset shows the large-α behaviors.]

In Figure 2 we show all the generalization error curves, including the generalization error ε_g^G for a uniform distribution [5] for comparison. The behaviour of ε_g(α) is very sensitive to the value of Q*.
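The Q* = n_c fixed point of equations (18)-(20) can be found by simple damped iteration; a minimal numerical sketch under the assumption that the equations read as reconstructed above (function names illustrative), returning ε_g = arccos(R)/π:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def H(x):
    """H(x) = integral_x^inf Dt, with Dt the unit Gaussian measure."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def F(R, alpha):
    """alpha/(pi*sqrt(1-R)) * Int Dt exp(-R t^2/2) / H(t sqrt(R))."""
    g = lambda t: (np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
                   * np.exp(-R * t**2 / 2) / H(t * np.sqrt(R)))
    val, _ = quad(g, -8.0, 8.0)
    return alpha / (np.pi * np.sqrt(1.0 - R)) * val

def solve_saddle(alpha, sigma, n_c, iters=2000, damp=0.5):
    """Damped fixed-point iteration of eqs. (19)-(20) for Q* = n_c."""
    n_u = 1.0 - n_c
    Rc = Ru = 0.01
    for _ in range(iters):
        R = (n_u * Ru + sigma**2 * n_c * Rc) / (n_u + sigma**2 * n_c)  # eq. (18)
        R = float(np.clip(R, 1e-9, 1.0 - 1e-9))
        f = F(R, alpha)
        gc = sigma**2 / (sigma**2 * n_c + n_u) * f                     # eq. (19) rhs
        gu = 1.0 / (sigma**2 * n_c + n_u) * f                          # eq. (20) rhs
        Rc = (1 - damp) * Rc + damp * gc / (1.0 + gc)  # x/(1-x) = g  =>  x = g/(1+g)
        Ru = (1 - damp) * Ru + damp * gu / (1.0 + gu)
    return Rc, Ru, R, np.arccos(R) / np.pi             # eps_g = arccos(R)/pi

print(solve_saddle(alpha=2.0, sigma=0.1, n_c=0.9))
```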
If Q* = 1, the teacher is in the compressed subspace, where learning is difficult. Consequently, ε_g(α) > ε_g^G(α), as expected. On the contrary, for Q* = 0, only the components in the uncompressed space are relevant for the classification task. In this subspace learning is easy, and ε_g(α) < ε_g^G(α). At Q* ≠ 0, 1 there is a crossover between these regimes, as already discussed. All the curves merge in the asymptotic regime α → ∞, as may be seen in the inset of Figure 2.

4 Discussion

We analyzed the typical learning behavior of a toy perceptron model that allows us to clarify some aspects of generalization in high-dimensional feature spaces. In particular, it captures an element essential to obtain stepwise learning, which is shown to stem from the compression of high-order features. The components in the compressed space are more difficult to learn than those not compressed. Thus, if the training set is not large enough, mainly the latter are learnt. Our results allow us to understand the importance of the scaling of high-order features in the SVM kernels. In fact, with SVMs one has to choose a priori the kernel that maps the input space to the feature space. If high-order features are conveniently compressed, hierarchical learning occurs. That is, low-order features are learnt first; higher-order features are only learnt if the training set is large enough. In the cases where the higher-order features are irrelevant, it is likely that they will not hinder the learning process. This interesting behavior allows one to avoid overfitting. Computer simulations currently in progress, of SVMs generated by quadratic kernels with and without the 1/N scaling, show a behavior consistent with the theoretical predictions [2, 3]. These may be understood with the present toy model.

References

[1] V. Vapnik (1995) The Nature of Statistical Learning Theory. Springer Verlag, New York.
[2] R. Dietrich, M. Opper, and H. Sompolinsky (1999) Statistical Mechanics of Support Vector Networks. Phys. Rev. Lett. 82, 2975-2978.
[3] A. Buhot and M. B. Gordon (1999) Statistical mechanics of support vector machines. ESANN'99 - European Symposium on Artificial Neural Networks Proceedings, Michel Verleysen ed., 201-206; A. Buhot and M. B. Gordon (1998) Learning properties of support vector machines. cond-mat/9802179.
[4] H. Yoon and J.-H. Oh (1998) Learning of higher order perceptrons with tunable complexities. J. Phys. A: Math. Gen. 31, 7771-7784.
[5] G. Györgyi and N. Tishby (1990) Statistical Theory of Learning a Rule. In Neural Networks and Spin Glasses (W. K. Theumann and R. Koberle, eds., World Scientific), 3-36.
[6] R. Meir (1995) Empirical risk minimization. A case study. Neural Computation 7, 144-157.
[7] C. Marangi, M. Biehl, S. A. Solla (1995) Supervised Learning from Clustered Examples. Europhys. Lett. 30 (2), 117-122.
839
177
NEURAL NETWORK STAR PATTERN RECOGNITION FOR SPACECRAFT ATTITUDE DETERMINATION AND CONTROL

Phillip Alvelda, A. Miguel San Martin
The Jet Propulsion Laboratory, California Institute of Technology, Pasadena, Ca. 91109

ABSTRACT

Currently, the most complex spacecraft attitude determination and control tasks are ultimately governed by ground-based systems and personnel. Conventional on-board systems face severe computational bottlenecks introduced by serial microprocessors operating on inherently parallel problems. New computer architectures based on the anatomy of the human brain seem to promise high speed and fault-tolerant solutions to the limitations of serial processing. This paper discusses the latest applications of artificial neural networks to the problem of star pattern recognition for spacecraft attitude determination.

INTRODUCTION

By design, a conventional on-board microprocessor can perform only one comparison or calculation at a time. Image or pattern recognition problems involving large template sets and high resolution can require an astronomical number of comparisons to a given database. Typical mission planning and optimization tasks require calculations involving a multitude of parameters, where each element has an inherent degree of importance, reliability and noise. Even the most advanced supercomputers running the latest software can require seconds and even minutes to execute a complex pattern recognition or expert system task, often providing incorrect or inefficient solutions to problems that prove trivial to ground control specialists. The intent of ongoing research is to develop a neural network based satellite attitude determination system prototype capable of determining its current three-axis inertial orientation. Such a system, which can determine in real time which direction the satellite is facing, is needed in order to aim antennas, science instruments, and navigational equipment. For a satellite to be autonomous (an important criterion in interplanetary missions, and most particularly so in the event of a system failure), this task must be performed in a reasonable amount of time with all due consideration to actual environmental, noise and precision constraints.

CELESTIAL ATTITUDE DETERMINATION

Under normal operating conditions there is a whole repertoire of spacecraft systems that operate in conjunction to perform the attitude determination task, the backbone of which is the gyro. But a gyro measures only changes in orientation. The current attitude is stored in volatile on-board memory and is updated as the gyro system integrates velocity to provide change in angular position.
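The gyro-based update amounts to dead reckoning: measured body rates are integrated into a stored attitude, which is exactly the quantity that is wiped out by a power failure. A minimal quaternion-integration sketch of that update (an illustration, not the flight software):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def propagate(q, omega, dt):
    """One gyro step: rotate the stored attitude q by body rate omega (rad/s)."""
    angle = np.linalg.norm(omega) * dt
    if angle == 0.0:
        return q
    axis = omega * dt / angle
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    return quat_mul(q, dq)
```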
But the problem of matching a limited field of view that contains a small number of stars (out of billions and billions of them), to an onboard fUll-sky catalol containing perhaps thousands of stars has lonl been a severe computational bottleneck. D14~---------;"':~~::.-r----? D13'---~='7""T / , ; ', /,; PAIR 21 PAIR 22 \ , PAIR 703 PAIR 704 ',STORED PAIR ADDRESS ,'" PAIR 70121 PAIR 70122 GEOMETRIC CONSTRAINTS FicuN I.) Serial .tar I.D. catalol rorma' and rnethodololY. The latest serial allorithm to perform this task requires approximately 650 KBytes of RAM to store the on-board star catalol. It incorporates a hilhly optimized allorithm which uses a motorola 68000 to search a sorted database of more than 70,000 star-pair distance values for correlations with the decomposed star pattern in the sensor FOV. It performs the identification process on the order of I second 318 316 Alvelda and San Martin with a success rate of 99 percent. But it does Dot fit iD the spacecraft oD-board memory, and therefore, no such system has flown on a planetary spacecraft. ? USES SUN SENSOR AND ATTITUDE MANEUVERS TO SUN TO SUN CANOPUS FicuN J.) Current Spacecraft attitude inrormation recovery lequence. As a result, state-of-the-art interplanetary spacecraft use several independent sensor systems in ~onjunction to determine attitude with no a priori knowledge. First, the craft is commanded to slew until a Sun Sensor (aligned with the spacecraft's major axis) has locked-on to the sun. The craft must then rotate around that axis until an appropriate star pattern at approximately ninety degrees to the sun is acquired to provide three-axis orientation information. The entire attitude acquisition sequence requires an absolute minimum of thirty minutes, and presupposes that all spacecraft actuator and maneuvering systems are operational. At the phenomenal rendezvous speeds involved in interplanetary navigation, a system failure near mission culmination could mean an almost complete loss of the most valuable scientific data while the spacecraft performs its initial attitude acquisition sequence. NEURAL MOTIVATION The parallel architecture and collective computation properties of a neural network based system address several problems associated with the implementation and performance of the serial star ID algorithm. Instead of searching a lengthy database one element at a time, each stored star pattern is correlated with the field of view concurrently. And whereas standard memory storage technology requires one address in RAM per star-pair distance, the neural star pattern representations are stored in characteristic matrices of interconnections between neurons. This distributed data set representation has several desirable properties. First of all, the 2N redundancy of the serial star-p.air scheme (i.e. which star is at which end of a pair) is discarded and a new more compressed representation emerges from the neuromorphic architecture. Secondly, noise, both statistical (i.e thermal noise) and systematic (i.e. sensor precision limitations), and pattern invariance characteristics are Neural Network Star Pattern Recognition incorporated directly into the preprocessing and neural architecture without extra circuitry. The first neural approach The primary motivation from the NASA perspective is to improve satellite attitude determination performance and enable on-board system implementations. The problem methodology for the neural architecture is then slightly different than that of the serial model. 
Instead of identifying every detected st~r in the field of view, the neural system identifies a single 'Guide Star' with respect to the pattern of dimmer stars around it, and correlates that star's known position with the sensor FOV to determine the pointing axis. If needed, only one other star is then required to fix the roll angle about that axis. So the core of the celestial attitude determination problem changes from multiple star identification and correlation, single star pattern identification. The entire system consists of several modules in a marriage of different technologies. The first neural system architecture uses already mature(i.e. sensor/preprocessor) technologies where they perform well, and neural technology only where conventional systems prove intractable. With an eye towards rapid prototyping and implementation, the system was designed with technologies (such as neural VLSI) that will be available in less than one year. SYSTEM ARCHITECTURE The Star Tracker sensor system The system input is based on the ASTROS II star tracker under development in the Guidance and Control section at the Jet Propulsion Laboratory. The Star tracker optical system images a defocussed portion of the sky (a star sub-field) onto a charged coupled device (C.C.D.). The tracker electronics then generate star centroid position and intensity information and passes this list to the preprocessing system. The Preprocessln8 system This centroia ind intensity information is passed to the preprocessing subsystem where the star pattern is treated to extract noise and pattern invariance. A 'pattern field-of-view' is defined as centered aroun~ ~he brightest (Le. 'Guide Star') in the central portion of the sensor field-ofview. Since the pattern FOV radius is one half that of the sensor FOV the pattern for that 'Guide Star' is then based on a portion of the image that is complete, or invariant, under translational perturbation. The preprocessor then introduces rotational invariance to the 'guide-star' pattern by using only the distances of all other dimmer stars inside the pattern FOV to the central guide star. These distances are then mapped by the preprocessor onto a two dimensional coordinate system of distance versus relative magnitude (normalized to the guide star, the brightest star in the Pattern FOV) to be sampled by the neural associative star catalog. The motivation for this distance map format become clear when issues involving noise invariance and memory capacity are considered. 31 7 318 Alvelda and San Martin Because the ASTROS Star Tracker is a limited precision instrument, most particularly in the absolute and relative intensity measures, two major problems arise. First, dimmer stars with intensities near the bottom of the dynamic range of the C.C.D. mayor may not be included in the star pattern. So, the entire distance map is scaled to the brightest star such that the bright, high-confidence measurements are weighted more heavily, while the dimmer and possibly transient stars are of less importance to a given pattern. Secondly, since there are a very large number of stars in the sky, the uniqueness of a given star pattern is governed mostly by the relative star distance measures (which, by the way, are the highest precision measurements provided by the star tracker). In addition, because of the limitations in expected neural hardware, a discrete number of neurons must sample a continuous function. 
To retain the maximum sample precision with a minimum number of neurons, the neural system uses the biological mechanism of a receptive field for hyperacuity. In other words, a number of neurons respond to a single distance stimulus. The process is analogous to that used on the defocussed image of a point source on the C.C.D. which was integrated over several pixels to generate a centroid at sub-pixel accuracies. To relax the demands on hardware development for the neural module, this point smoothing was performed in the preprocessor instead of being introduced into the neural network architecture and dynamics. The equivalent neural response function then becomes: N X?I L 'l'k e -(Ili - Ille )2/tl. k=l where: Xi is the sampling activity of neuron i N is the number of stars in the Pattern Field Of View ILi is the position of neuron i on the sample axis ILk ? is the position of the stimulus from star k on the sample axis is the magnitude scale factor of star k, normalized to the brightest star in the PFOV, the 'Guide star' is the width of the gaussian point spread function The Neural system The neural system, a 106 neuron, three-layer, feed-forward network, samples the scaled and smoothed distance map, to provide an output vector with the highest neural output activity representing the best match to one of the pre-trained guide star patterns. The network training algorithm uses the standard backwards error propagation Neural Network Star Pattern Recognition algorithm to set network interconnect weights from a training set of 'Guide Star' patterns derived from the software simulated sky and sensor models. Simulation testbed The computer simulation testbed includes a realistic celestial field model, as well as a detector model that properly represents achievable position and intensity resolution, sensor scan rates, dynamic range, and signal to noise properties. Rapid identification of star patterns was observed in limited training sets as the simulated tracker was oriented randomly within the celestial sphere. PERFORMANCE RESULTS AND PROJECTIONS In terms of improved performance the neural system was quite a success, but not however in the areas which were initialJy expected. While a VLSI implementation might yield considerable system speed-up, the digital simulation testbed neural processing time was of the same order as the serial algorithm, perhaps slightly better. The success rate of the serial system was already better than 99%. The neural net system achieved an accuracy of 100% when the systematic noise (i.e. dropped stars) of the sensor was neglected. When the dropped star effect was introduced, the performance figure dropped to 94%. It was later discovered that the reason for this 'low' rate was due mostly to the limited size of the Yale Bright Star catalog at higher magnitudes (lower star brightness). In sparse regions of the sky, the pattern in the sensor FOV presented by the limited sky model occasionally consisted of only two or three dim stars. When one or two of them drop out because of the Star sensor magnitude precision limitations. at times. there was no pattern left to identify. Further experiments and parametric studies are under way using a more complete Harvard Smithsonian catalog. The big gain was in terms of required memory. The serial algorithm stored over 70,000 star pairs at high precision in addition to code for a rather complex heuristic, artificial intelligence type of algorithm for a total size of 650 KBytes. 
The neural algorithm used a connectionist data representation that was able to abstract from the star catalog pattern-class similarities, orthogonalities, and invariances in a highly compressed fashion. Network performance remained essentially constant until interconnect precision was decreased to less than four bits per synapse. 3000 synapses at four bits per synapse requires very little computer memory. These simulation results were all derived from a Monte Carlo run of approximately 200,000 iterations using the simulator testbed.

CONCLUSIONS

By means of a clever combination of several technologies and an appropriate data set representation, a star ID system using one of the most simple neural algorithms outperforms those using the classical serial ones in several aspects, even while running a software-simulated neural network. The neural simulator is approximately ten times faster than the equivalent serial algorithm and requires less than one seventh the computer memory. With the transfer to neural VLSI technology, memory requirements will virtually disappear and processing speed will increase by at least an order of magnitude. Where power and weight requirements scale with the hardware chip count, and every pound that must be launched into space costs millions of dollars, neural technology has enabled real-time on-board absolute attitude determination with no a priori information, that may eventually make several accessory satellite systems like horizon and sun sensors obsolete, while increasing the overall reliability of spacecraft systems.

Acknowledgments

We would like to acknowledge many fruitful conversations with C. E. Bell, J. Barhen and S. Gulati.

References

R. W. H. van Bezooijen. Automated Star Pattern Recognition for Use With the Space Infrared Telescope Facility (SIRTF). Paper for internal use at The Jet Propulsion Laboratory.
P. Gorman, T. J. Sejnowski. Workshop on Neural Network Devices and Applications (Jet Propulsion Laboratory, Pasadena, Ca.) Document 04406, pp. 224-237.
J. L. Junkins. Star Pattern Recognition for Real Time Attitude Determination. The Journal of the Astronautical Sciences (1979).
D. E. Rumelhart, G. E. Hinton. Parallel Distributed Processing, eds. (MIT Press, Cambridge, Ma.) Vol. 1, pp. 318-364.
P. M. Salomon, T. A. Glavich. Image Signal Processing and Sub-Pixel Accuracy Star Trackers. SPIE vol. 252, Smart Sensors II (1980).

[Figure: System block diagram. C.C.D. image -> preprocessor (distance map: radius from guide star) -> neural sampler -> neural output -> star attitude look-up table (star, R.A., Dec.).]

[Figure: Prototype hardware implementation. Serial processor with ACS controller and S/C interface, driving a neural processor and correlator.]
840
1,770
State Abstraction in MAXQ Hierarchical Reinforcement Learning

Thomas G. Dietterich
Department of Computer Science
Oregon State University
Corvallis, Oregon 97331-3202
tgd@cs.orst.edu

Abstract

Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. In this paper, we define five conditions under which state abstraction can be combined with the MAXQ value function decomposition. We prove that the MAXQ-Q learning algorithm converges under these conditions and show experimentally that state abstraction is important for the successful application of MAXQ-Q learning.

1 Introduction

Most work on hierarchical reinforcement learning has focused on temporal abstraction. For example, in the Options framework [1, 2], the programmer defines a set of macro actions ("options") and provides a policy for each. Learning algorithms (such as semi-Markov Q learning) can then treat these temporally abstract actions as if they were primitives and learn a policy for selecting among them. Closely related is the HAM framework, in which the programmer constructs a hierarchy of finite-state controllers [3]. Each controller can include non-deterministic states (where the programmer was not sure what action to perform). The HAMQ learning algorithm can then be applied to learn a policy for making choices in the non-deterministic states. In both of these approaches, and in other studies of hierarchical RL (e.g., [4, 5, 6]), each option or finite state controller must have access to the entire state space. The one exception to this, the Feudal-Q method of Dayan and Hinton [7], introduced state abstractions in an unsafe way, such that the resulting learning problem was only partially observable. Hence, they could not provide any formal results for the convergence or performance of their method. Even a brief consideration of human-level intelligence shows that such methods cannot scale. When deciding how to walk from the bedroom to the kitchen, we do not need to think about the location of our car. Without state abstractions, any RL method that learns value functions must learn a separate value for each state of the world. Some argue that this can be solved by clever value function approximation methods, and there is some merit in this view. In this paper, however, we explore a different approach in which we identify aspects of the MDP that permit state abstractions to be safely incorporated in a hierarchical reinforcement learning method without introducing function approximations. This permits us to obtain the first proof of the convergence of hierarchical RL to an optimal policy in the presence of state abstraction. We introduce these state abstractions within the MAXQ framework [8], but the basic ideas are general. In our previous work with MAXQ, we briefly discussed state abstractions, and we employed them in our experiments. However, we could not prove that our algorithm (MAXQ-Q) converged with state abstractions, and we did not have a usable characterization of the situations in which state abstraction could be safely employed.
This paper solves these problems and in addition compares the effectiveness of MAXQ-Q learning with and without state abstractions. The results show that state abstraction is very important, and in most cases essential, to the effective application of MAXQ-Q learning.

2 The MAXQ Framework

Let M be a Markov decision problem with states S, actions A, reward function R(s'|s, a) and probability transition function P(s'|s, a). Our results apply in both the finite-horizon undiscounted case and the infinite-horizon discounted case. Let {M₀, …, Mₙ} be a set of subtasks of M, where each subtask M_i is defined by a termination predicate T_i and a set of actions A_i (which may be other subtasks or primitive actions from A). The "goal" of subtask M_i is to move the environment into a state such that T_i is satisfied. (This can be refined using a local reward function to express preferences among the different states satisfying T_i [8], but we omit this refinement in this paper.) The subtasks of M must form a DAG with a single "root" node; no subtask may invoke itself directly or indirectly. A hierarchical policy is a set of policies π = {π₀, …, πₙ}, one for each subtask. A hierarchical policy is executed using standard procedure-call-and-return semantics, starting with the root task M₀ and unfolding recursively until primitive actions are executed. When the policy for M_i is invoked in state s, let P(s', N | s, i) be the probability that it terminates in state s' after executing N primitive actions. A hierarchical policy is recursively optimal if each policy π_i is optimal given the policies of its descendants in the DAG. Let V(i, s) be the value function for subtask i in state s (i.e., the value of following some policy starting in s until we reach a state s' satisfying T_i(s')). Similarly, let Q(i, s, j) be the Q value for subtask i of executing child action j in state s and then executing the current policy until termination. The MAXQ value function decomposition is based on the observation that each subtask M_i can be viewed as a semi-Markov decision problem in which the reward for performing action j in state s is equal to V(j, s), the value function for subtask j in state s. To see this, consider the sequence of rewards r_t that will be received when we execute child action j and then continue with subsequent actions according to hierarchical policy π:

Q(i, s, j) = E\{\, r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots \mid s_t = s, \pi \,\}

The macro action j will execute for some number of steps N and then return. Hence, we can partition this sum into two terms:
a' To prove convergence, we require that the exploration policy executed during learning be an ordered GLIE policy. An ordered policy is a policy that breaks Q-value ties among actions by preferring the action that comes first in some fixed ordering. A GLIE policy [9] is a policy that (a) executes each action infinitely often in every state that is visited infinitely often and (b) converges with probability 1 to a greedy policy. The ordering condition is required to ensure that the recursively optimal policy is unique. Without this condition, there are potentially many different recursively optimal policies with different values, depending on how ties are broken within subtasks, subsubtasks, and so on. Theorem 1 Let M = (S, A, P, R) be either an episodic MDP for which all deterministic policies are proper or a discounted infinite horizon MDP with discount factor,. Let H be a DAG defined over subtasks {Mo, ... ,Mk}. Let Ut(i) > 0 be a sequence of constants for each subtask Mi such that T lim T-too L Ut(i) = t=l T 00 and lim ' " u;(i) T-too~ < 00 (1) t=l Let 7rx (i, s) be an ordered GLIE policy at each subtask Mi and state s and assume that IVt (i, s) I and ICt (i, s, a) I are bounded for all t, i, s, and a. Then with probability 1, algorithm MAXQ-Q converges to the unique recursively optimal policy for M consistent with Hand 7rx . Proof: (sketch) The proof is based on Proposition 4.5 from Bertsekas and Tsitsiklis [10] and follows the standard stochastic approximation argument due to [11] generalized to the case of non-stationary noise. There are two key points in the proof. Define Pt(s',Nls,j) to be the probability transition function that describes the behavior of executing the current policy for subtask j at time t. By an inductive argument, we show that this probability transition function converges (w.p. 1) to the probability transition function of the recursively optimal policy for j. Second, we show how to convert the usual weighted max norm contraction for Q into a weighted max norm contraction for C. This is straightforward, and completes the proof. What is notable about MAXQ-Q is that it can learn the value functions of all subtasks simultaneously-it does not need to wait for the value function for subtask j to converge before beginning to learn the value function for its parent task i. This gives a completely online learning algorithm with wide applicability. State Abstraction in MAXQ Hierarchical Reinforcement Learning 4 R 3 0 997 G 2 1 o y o B 1 234 Figure 1: Left: The Taxi Domain (taxi at row 3 column 0) . Right: Task Graph. 3 Conditions for Safe State Abstraction To motivate state abstraction, consider the simple Taxi Task shown in Figure 1. There are four special locations in this world, marked as R(ed), B(lue), G(reen), and Y(ellow). In each episode, the taxi starts in a randomly-chosen square. There is a passenger at one of the four locations (chosen randomly), and that passenger wishes to be transported to one of the four locations (also chosen randomly). The taxi must go to the passenger's location (the "source"), pick up the passenger, go to the destination location (the "destination"), and put down the passenger there. The episode ends when the passenger is deposited at the destination location. There are six primitive actions in this domain: (a) four navigation actions that move the taxi one square North, South, East, or West, (b) a Pickup action, and (c) a Putdown action. Each action is deterministic. 
3 Conditions for Safe State Abstraction

To motivate state abstraction, consider the simple Taxi Task shown in Figure 1. There are four special locations in this world, marked as R(ed), B(lue), G(reen), and Y(ellow). In each episode, the taxi starts in a randomly-chosen square. There is a passenger at one of the four locations (chosen randomly), and that passenger wishes to be transported to one of the four locations (also chosen randomly). The taxi must go to the passenger's location (the "source"), pick up the passenger, go to the destination location (the "destination"), and put down the passenger there. The episode ends when the passenger is deposited at the destination location. There are six primitive actions in this domain: (a) four navigation actions that move the taxi one square North, South, East, or West, (b) a Pickup action, and (c) a Putdown action. Each action is deterministic. There is a reward of -1 for each action and an additional reward of +20 for successfully delivering the passenger. There is a reward of -10 if the taxi attempts to execute the Putdown or Pickup actions illegally. If a navigation action would cause the taxi to hit a wall, the action is a no-op, and there is only the usual reward of -1. This task has a hierarchical structure (see Fig. 1) in which there are two main subtasks: Get the passenger (Get) and Deliver the passenger (Put). Each of these subtasks in turn involves the subtask of navigating to one of the four locations (Navigate(t), where t is bound to the desired target location) and then performing a Pickup or Putdown action. This task illustrates the need to support both temporal abstraction and state abstraction. The temporal abstraction is obvious; for example, Get is a temporally extended action that can take different numbers of steps to complete depending on the distance to the target. The top-level policy (get passenger; deliver passenger) can be expressed very simply with these abstractions. The need for state abstraction is perhaps less obvious. Consider the Get subtask. While this subtask is being solved, the destination of the passenger is completely irrelevant: it cannot affect any of the navigation or pickup decisions. Perhaps more importantly, when navigating to a target location (either the source or destination location of the passenger), only the taxi's location and the identity of the target location are important. The fact that in some cases the taxi is carrying the passenger and in other cases it is not is irrelevant.

We now introduce the five conditions for state abstraction. We will assume that the state s of the MDP is represented as a vector of state variables. A state abstraction can be defined for each combination of subtask M_i and child action j by identifying a subset X of the state variables that are relevant and defining the value function and the policy using only these relevant variables. Such value functions and policies are said to be abstract.
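As a concrete reference point for the conditions that follow, the taxi hierarchy can be written down as a small task graph with termination predicates; a hedged sketch (Navigate(t) is parameterized in the paper and is split into two fixed-target instances here for brevity; the grid coordinates are illustrative):

```python
PRIMITIVES = {"North", "South", "East", "West", "Pickup", "Putdown"}

CHILDREN = {
    "Root":        ["Get", "Put"],
    "Get":         ["NavigateSrc", "Pickup"],
    "Put":         ["NavigateDst", "Putdown"],
    "NavigateSrc": ["North", "South", "East", "West"],
    "NavigateDst": ["North", "South", "East", "West"],
}

LOCS = {"R": (0, 0), "G": (0, 4), "Y": (4, 0), "B": (4, 3)}  # illustrative grid

def terminated(task, s):
    """Termination predicates T_i(s) for a state dict with keys
    row, col, source, dest, in_taxi, delivered."""
    at = (s["row"], s["col"])
    if task == "Root":        return s["delivered"]
    if task == "Get":         return s["in_taxi"] or s["delivered"]
    if task == "Put":         return not s["in_taxi"]
    if task == "NavigateSrc": return at == LOCS[s["source"]]
    if task == "NavigateDst": return at == LOCS[s["dest"]]
    return True   # primitive actions always "terminate" after one step
```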
A set of state variables Y is irrelevant for a primitive action a if for any pair of states 51 and 52 that differ only in their values for the variables in Y, L P(5~151' a)R(5~151' a) = L P(5~152' a)R(s~152' a). s'1 s'2 This condition is satisfied by the primitive actions North, South, East, and West in the taxi task, where all state variables are irrelevant because R is constant. The next two conditions involve "funnel" actions- macro actions that move the environment from some large number of possible states to a small number of resulting states. The completion function of such subtasks can be represented using a number of values proportional to the number of resulting states. Condition 3: Result Distribution Irrelevance (Undiscounted case.) A set of state variables }j is irrelevant for the result distribution of action j if, for all abstract policies 7r executed by M j and its descendants in the MAXQ hierarchy, the following holds: for all pairs of states 51 and 52 that differ only in their values for the state variables in }j, V 5' P7r(5'151,j) = P7r(5'152,j). Consider, for example, the Get subroutine under an optimal policy for the taxi task. Regardless of the taxi's position in state 5, the taxi will be at the passenger's starting location when Get finishes executing (Le., because the taxi will have just completed picking up the passenger). Hence, the taxi's initial position is irrelevant to its resulting position. (Note that this is only true in the undiscounted settingwith discounting, the result distributions are not the same because the number of steps N required for Get to finish depends very much on the starting location of the taxi. Hence this form of state abstraction is rarely useful for cumulative discounted reward.) Condition 4: Termination. Let M j be a child task of Mi with the property that whenever M j terminates, it causes Mi to terminate too. Then the completion State Abstraction in MAXQ Hierarchical Reinforcement Learning 999 cost C (i, s, j) = 0 and does not need to be represented. This is a particular kind of funnel action- it funnels all states into terminal states for Mi' For example, in the Taxi task, in all states where the taxi is holding the passenger, the Put subroutine will succeed and result in a terminal state for Root. This is because the termination predicate for Put (i.e., that the passenger is at his or her destination location) implies the termination condition for Root (which is the same). This means that C(Root, s, Put) is uniformly zero, for all states s where Put is not terminated. Condition 5: Shielding. Consider subtask Mi and let s be a state such that for all paths from the root of the DAG down to M i , there exists a subtask that is terminated. Then no C values need to be represented for subtask Mi in state s, because it can never be executed in s. In the Taxi task, a simple example of this arises in the Put task, which is terminated in all states where the passenger is not in the taxi. This means that we do not need to represent C(Root, s, Put) in these states. The result is that, when combined with the Termination condition above, we do not need to explicitly represent the completion function for Put at all! By applying these abstraction conditions to the Taxi task, the value function can be represented using 632 values, which is much less than the 3,000 values required by flat Q learning. Without state abstractions, MAXQ requires 14,000 values! 
Theorem 2 (Convergence with State Abstraction) Let H be a MAXQ task graph that incorporates the five kinds of state abstractions defined above. Let π_x be an ordered GLIE exploration policy that is abstract. Then, under the same conditions as Theorem 1, MAXQ-Q converges with probability 1 to the unique recursively optimal policy π*_x defined by π_x and H.

Proof: (sketch) Consider a subtask M_i with relevant variables X and two arbitrary states (x, y1) and (x, y2). We first show that under the five abstraction conditions, the value function of π*_x can be represented using C(i, x, j) (i.e., ignoring the Y values). To learn the values of C(i, x, j) = Σ_{x',N} P(x', N | x, j) · V(i, x'), a Q-learning algorithm needs samples of x' and N drawn according to P(x', N | x, j). The second part of the proof involves showing that regardless of whether we execute j in state (x, y1) or in (x, y2), the resulting x' and N will have the same distribution and, hence, give the correct expectations. Analogous arguments apply for leaf irrelevance and V(a, x). The termination and shielding cases are easy.

4 Experimental Results

We implemented MAXQ-Q for a noisy version of the Taxi domain and for Kaelbling's HDG navigation task [5] using Boltzmann exploration. Figure 2 shows the performance of flat Q and MAXQ-Q with and without state abstractions on these tasks. Learning rates and Boltzmann cooling rates were separately tuned to optimize the performance of each method. The results show that without state abstractions, MAXQ-Q learning is slower to converge than flat Q learning, but that with state abstraction, it is much faster.

Figure 2: Comparison of MAXQ-Q with and without state abstraction to flat Q learning on a noisy taxi domain (left) and Kaelbling's HDG task (right). The horizontal axis gives the number of primitive actions executed by each method. The vertical axis plots the average of 100 separate runs.

5 Conclusion

This paper has shown that by understanding the reasons that state variables are irrelevant, we can obtain a simple proof of the convergence of MAXQ-Q learning under state abstraction. This is much more fruitful than previous efforts based only on weak notions of state aggregation [10], and it suggests that future research should focus on identifying other conditions that permit safe state abstraction.

References
[1] D. Precup and R. S. Sutton, "Multi-time models for temporally abstract planning," in NIPS-10, The MIT Press, 1998.
[2] R. S. Sutton, D. Precup, and S. Singh, "Between MDPs and semi-MDPs: Learning, planning, and representing knowledge at multiple temporal scales," tech. rep., Univ. of Massachusetts, Dept. of Comp. and Inf. Sci., Amherst, MA, 1998.
[3] R. Parr and S. Russell, "Reinforcement learning with hierarchies of machines," in NIPS-10, The MIT Press, 1998.
[4] S. P. Singh, "Transfer of learning by composing solutions of elemental sequential tasks," Machine Learning, vol. 8, p. 323, 1992.
[5] L. P. Kaelbling, "Hierarchical reinforcement learning: Preliminary results," in Proceedings ICML-10, pp. 167-173, Morgan Kaufmann, 1993.
[6] M. Hauskrecht, N. Meuleau, C. Boutilier, L. Kaelbling, and T. Dean, "Hierarchical solution of Markov decision processes using macro-actions," tech. rep., Brown Univ., Dept. of Comp. Sci., Providence, RI, 1998.
[7] P. Dayan and G. Hinton, "Feudal reinforcement learning," in NIPS-5, pp. 271-278, San Francisco, CA: Morgan Kaufmann, 1993.
[8] T. G. Dietterich, "The MAXQ method for hierarchical reinforcement learning," in ICML-15, Morgan Kaufmann, 1998.
[9] S. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvari, "Convergence results for single-step on-policy reinforcement-learning algorithms," tech. rep., Univ. of Colorado, Dept. of Comp. Sci., Boulder, CO, 1998.
[10] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming. Belmont, MA: Athena Scientific, 1996.
[11] T. Jaakkola, M. I. Jordan, and S. P. Singh, "On the convergence of stochastic iterative dynamic programming algorithms," Neur. Comp., vol. 6, no. 6, pp. 1185-1201, 1994.
[12] C. Boutilier, R. Dearden, and M. Goldszmidt, "Exploiting structure in policy construction," in Proceedings IJCAI-95, pp. 1104-1111, 1995.
Leveraged Vector Machines

Yoram Singer
Hebrew University
singer@cs.huji.ac.il

Abstract

We describe an iterative algorithm for building vector machines used in classification tasks. The algorithm builds on ideas from support vector machines, boosting, and generalized additive models. The algorithm can be used with various continuously differentiable functions that bound the discrete (0-1) classification loss and is very simple to implement. We test the proposed algorithm with two different loss functions on synthetic and natural data. We also describe a norm-penalized version of the algorithm for the exponential loss function used in AdaBoost. The performance of the algorithm on natural data is comparable to support vector machines while typically its running time is shorter than that of SVM.

1 Introduction

Support vector machines (SVM) [1, 13] and boosting [10, 3, 4, 11] are highly popular and effective methods for constructing linear classifiers. The theoretical basis for SVMs stems from Vapnik's seminal work on learning and generalization [12] and has proved to be of great practical usage. The first boosting algorithms [10, 3], on the other hand, were developed to answer certain fundamental questions about PAC-learnability [6]. While mathematically beautiful, these algorithms were rather impractical. Later, Freund and Schapire [4] developed the AdaBoost algorithm, which proved to be a practically useful meta-learning algorithm. AdaBoost works by making repeated calls to a weak learner. On each call the weak learner generates a single weak hypothesis, and these weak hypotheses are combined into an ensemble called a strong hypothesis. Recently, Schapire and Singer [11] studied a simple generalization of AdaBoost in which a weak hypothesis can assign a real-valued confidence to each prediction. Even more recently, Friedman, Hastie, and Tibshirani [5] presented an alternative view of boosting from a statistical point of view and also described a new family of algorithms for constructing generalized additive models of base learners in a similar fashion to AdaBoost. The work of Friedman, Hastie, and Tibshirani generated lots of attention and motivated research in classification algorithms that employ various loss functions [8, 7]. In this work we combine ideas from the research mentioned above and devise an alternative approach to constructing vector machines for classification. As in SVM, the base predictors that we use are Mercer kernels. The value of a kernel evaluated at an input pattern, i.e., the dot-product between two instances embedded in a high-dimensional space, is viewed as a real-valued prediction. We describe a simple extension to additive models in which the prediction of a base learner is a linear transformation of a given kernel. We then describe an iterative algorithm that greedily adds kernels. We derive our algorithm using the exponential loss function used in AdaBoost and the loss function used by Friedman, Hastie, and Tibshirani [5] in "LogitBoost". For brevity we call the resulting classifiers boosted vector machines (BVM) and logistic vector machines (LVM). We would like to note in passing that the resulting algorithms are not boosting algorithms in the PAC sense. For instance, the weak-learnability assumption that the weak learner can always find a weak hypothesis is violated. We therefore adopt the terminology used in [2] and call the resulting classifiers leveraged vector machines. The leveraging procedure we give adopts the chunking technique from SVM.
After presenting the basic leveraging algorithms we compare their performance with SVM on synthetic data. The experimental results show that the leveraged vector machines achieve similar performance to SVM and often the resulting vector machines are smaller than the ones obtained by SVM. The experiments also demonstrate that BVM is especially sensitive to (malicious) label noise while LVM seems to be more insensitive. We also describe a simple norm-penalized extension of BVM that provides a partial solution to overfitting in the presence of noise. Finally, we give results of experiments performed with natural data from the UCI repository and conclude.

2 Preliminaries

Let S = ((x1, y1), ..., (xm, ym)) be a sequence of training examples where each instance xi belongs to a domain or instance space X, and each label yi is in {-1, +1}. (The methods described in this paper to build vector machines and SVMs can be extended to solve multiclass problems using, for instance, error-correcting output coding. Such methods are beyond the scope of this paper and will be discussed elsewhere.) For convenience, we will use ỹi to denote (yi + 1)/2 ∈ {0, 1}. As in boosting, we assume access to a weak or base learning algorithm which accepts as input a weighted sequence of training examples S. Given such input, the weak learner computes a weak (or base) hypothesis h. In general, h has the form h : X → ℝ. We interpret the sign of h(x) as the predicted label (-1 or +1) to be assigned to instance x, and the magnitude |h(x)| as the "confidence" in this prediction. To build vector machines we use the notion of confidence-rated predictions. We take for base hypotheses sample-based Mercer kernels [13], and define the confidence (i.e., the magnitude of prediction) of a base learner to be the value of its dot-product with another instance. The sign of the prediction is set to be the label of the corresponding instance. Formally, for each base hypothesis h there exists (xj, yj) ∈ S such that h(x) = yj K(xj, x), where K(u, v) defines an inner product in a feature space: K(u, v) = Σ_k a_k ψ_k(u) ψ_k(v). We denote the function induced by an instance-label pair (xj, yj) with a kernel K by φj(x) = yj K(xj, x). Our goal is to find a classifier f(x), called a strong hypothesis in the context of boosting algorithms, of the form f(x) = Σ_{t=1}^{T} α_t h_t(x) + β, such that the signs of the predictions of the classifier agree, as much as possible, with the labels of the training instances. The leveraging algorithm we describe maintains a distribution D over {1, ..., m}, i.e., over the indices of S. This distribution is simply a vector of non-negative weights, one weight per example, and is an exponential function of the classifier f, which is built incrementally:

D(i) = (1/Z) exp(-yi f(xi)),  where  Z = Σ_{i=1}^{m} exp(-yi f(xi)).   (1)

For a random function g of the input instances and the labels, we denote the sample expectation of g according to D by E_D(g) = Σ_{i=1}^{m} D(i) g(xi, yi). We also use this notation to denote the expectation of matrices of random functions. We will convert a confidence-rated classifier f into a randomized predictor by using the soft-max function, denoted p(xi), where

p(xi) = exp(f(xi)) / (exp(f(xi)) + exp(-f(xi))) = 1 / (1 + exp(-2 f(xi))).   (2)

3 The leveraging algorithm

The basic procedure to construct leveraged vector machines builds on ideas from [11, 5] by extending the prediction to be a linear function of the base classifiers.
The algorithm works in rounds, constructing a new classifier f_t from the previous one f_{t-1} by adding a new base hypothesis h_t to the current classifier. Denoting by D_t and p_{t+1} the distribution and probability given by Eqn. (1) and Eqn. (2) using f_t and f_{t+1}, the algorithm attempts to minimize either the exponential loss function that arises in AdaBoost:

Z = Σ_{i=1}^{m} exp(-yi f_t(xi)) = Σ_{i=1}^{m} exp(-yi (f_{t-1}(xi) + α_t h_t(xi) + β_t))
  ∝ Σ_{i=1}^{m} D_t(i) exp(-yi (α_t h_t(xi) + β_t)),   (3)

or the logistic loss function:

L = Σ_{i=1}^{m} log(1 + exp(-2 yi f_t(xi)))   (4)
  = -Σ_{i=1}^{m} ( ỹi log(p_{t+1}(xi)) + (1 - ỹi) log(1 - p_{t+1}(xi)) ).   (5)

We initialize f_0(x) to be zero everywhere and run the procedure for a predefined number of rounds T. The final classifier is therefore f_T(x) = Σ_{t=1}^{T} (α_t h_t(x) + β_t) = β + Σ_{t=1}^{T} α_t h_t(x), where β = Σ_t β_t. We would like to note parenthetically that it is possible to use other loss functions that bound the 0-1 (classification) loss (see, for instance, [8]). Here we focus on the above loss functions, L and Z. Fixing f_{t-1} and h_t, these functions are convex in α_t and β_t, which guarantees, under mild conditions (details omitted due to lack of space), the uniqueness of α_t and β_t.

On each round we look for the current base hypothesis h_t that will reduce the loss function (Z or L) the most. As discussed before, each input instance xj defines a function φj(x) and is a candidate for h_t(x). In general, there is no closed-form solution for Eqn. (3) and (5), and finding α and β for each possible input instance is time consuming. We therefore use a quadratic approximation of the loss functions. Using the quadratic approximation, for each φj we can find α and β analytically and calculate the reduction in the loss function. Let ∇Z = (∂Z/∂α, ∂Z/∂β)^T and ∇L = (∂L/∂α, ∂L/∂β)^T be the column vectors of the partial derivatives of Z and L w.r.t. α and β (fixing f_{t-1} and h_t). Similarly, let ∇²Z and ∇²L be the 2×2 matrices of second-order derivatives of Z and L with respect to α and β. The quadratic approximation then yields (α, β)^T = (∇²Z)^{-1} ∇Z and (α, β)^T = (∇²L)^{-1} ∇L.

On each round t we maintain a distribution D_t, which is defined from f_t as given by Eqn. (1), and conditional class probability estimates p_t(xi), as given by Eqn. (2). Solving the linear equations above for α and β for each possible instance is done by setting h_t(x) = φj(x), which yields for Z the closed-form update (6) and for L the update (7).

Figure 1: Comparison of the test error as a function of the number of leveraging rounds when using a full numerical search for α and β, a "one-step" numerical search based on a quadratic approximation of the loss function, and a one-step search with chunking of the instances.

Note that the equations above share much in common and require, after pre-computing p(xi), the same amount of computation time. After calculating the value of α and β for each instance (xj, yj), we simply evaluate the corresponding value of the loss function, choose the instance (xj*, yj*) that attains the minimal loss, and set h_t = φj*. We then numerically search for the optimal value of α and β by iterating Eqn. (6) or Eqn. (7) and summing the values into α_t and β_t. We would like to note that typically two or three iterations suffice, and we can save time by using the values of α and β found with the quadratic approximation without a full numerical search for the optimal value. (See also Fig. 1.)
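The following is a minimal Python sketch of one such round for the exponential loss Z; it is our own illustration under stated assumptions, not the paper's code. The quadratic approximation is realized as a single Newton step from (α, β) = (0, 0), standing in for the displayed equations (6)-(7); K is assumed to be the precomputed m×m kernel matrix.

    import numpy as np

    def newton_step(h, y, D):
        """One Newton step for Z(a, b) = sum_i D_i exp(-y_i (a h_i + b))."""
        a = b = 0.0
        w = D * np.exp(-y * (a * h + b))              # per-example weights
        g = np.array([-(w * y * h).sum(), -(w * y).sum()])     # gradient
        H = np.array([[(w * h * h).sum(), (w * h).sum()],
                      [(w * h).sum(),      w.sum()]])          # Hessian
        a, b = -np.linalg.solve(H, g)                 # quadratic-approx. minimizer
        return a, b

    def leverage_round(K, y, D):
        """Pick the best base hypothesis h_t = phi_j and its (alpha, beta)."""
        best = None
        for j in range(len(y)):
            h = y[j] * K[j]                           # phi_j evaluated on all instances
            a, b = newton_step(h, y, D)
            loss = (D * np.exp(-y * (a * h + b))).sum()
            if best is None or loss < best[0]:
                best = (loss, j, a, b)
        return best

The logistic loss L admits the same structure with its own gradient and Hessian, which is why, after pre-computing p(xi), both variants cost the same per round.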
We repeat this process for T rounds or until no instance can serve as a base hypothesis. We note that the same instance can be chosen more than once, although not in consecutive iterations, and typically only a small fraction of the instances is actually used in building f. Roughly speaking, these instances are the "support patterns" of the leveraged machines, although they are not necessarily the geometric support patterns. As in SVMs, in order to make the search for a base hypothesis efficient, we pre-compute and store K(x, x') for all pairs x ≠ x' from S. Storing these values requires |S|² space, which might be prohibitive in large problems. To save space, we employ the idea of chunking used in SVM. We partition S into r blocks S1, S2, ..., Sr of about the same size. We divide the iterations into sub-groups such that all iterations belonging to the i-th sub-group use and evaluate kernels based on instances from the i-th block only. When switching to a new block k we need to compute the values K(x, x') for x ∈ S and x' ∈ Sk. This division into blocks might be more expensive since we typically use each block of instances more than once. However, the storage of the kernel values can be done in place and we thus save a factor of r in memory requirements. In practice we found that chunking does not hurt the performance. In Fig. 1 we show the test error as a function of the number of rounds when using (a) a full numerical search to determine α and β on each round, (b) the quadratic approximation ("one-step") to find α and β, and (c) the quadratic approximation with chunking. The number of instances in the experiment is 1000, each block for chunking is of size 100, and we switch to a different block every 100 iterations. (Further description of the data is given in the next section.) In this example, after 10 iterations, there is virtually no difference in the performance of the different schemes.

4 Experiments with synthetic data

In this section we describe experiments with synthetic data comparing different aspects of leveraged vector machines to SVMs. The original instance space is two dimensional, where the positive class includes all points inside a circle of radius R; i.e., an instance (u1, u2) ∈ ℝ² is labeled +1 iff u1² + u2² ≤ R². The instances were picked at random according to a zero-mean, unit-variance normal distribution, and R was set such that exactly half of the instances belong to the positive class. In all the experiments described in this section we generated 10 groups of training and test sets, each of which includes 1000 training and 1000 test examples. Overall, there are 10,000 training examples and 10,000 test examples. The average variance of the estimates of the empirical errors across experiments is about 0.2%. For SVM we set the regularization parameter, C, to 100 and used 500 iterations to build leveraged machines. In all the experiments without noise the results for BVM and LVM were practically the same. We therefore only compare BVM to SVM in Fig. 2. Unless said otherwise, we used polynomials of degree two as kernels: K(x', x) = (x · x' + 1)². Hence, the data is separable in the absence of noise.

Figure 2: Performance comparison of SVM and BVM as a function of the training data size (left), the dimension of the kernels (middle), and the number of redundant features (right).

Figure 3: Train and test errors for SVM, LVM, and BVM as a function of the label noise.

In the first experiment we tested the sensitivity to the number of training examples by omitting examples from the training data (without any modification to the test sets). On the left part of Fig. 2 we plot the test error as a function of the number of training examples. The test error of BVM is almost indistinguishable from the error of SVM, and the performance of both methods improves very fast as a function of training examples. Next, we compared the performance as a function of the dimension of the polynomial constituting the kernel. We ran the algorithms with kernels of the form K(x', x) = (x · x' + 1)^d for d = 2, ..., 8. The results are depicted in the middle plots of Fig. 2. Again, the performance of BVM and SVM is very close (note the small scale of the y axis for the test error in this experiment). To conclude the experiments with clean, realizable data, we checked the sensitivity to irrelevant features of the input. Each input instance (u1, u2) was augmented with random elements u3, ..., ul to form an input vector of dimension l. The right-hand graphs of Fig. 2 show the test error as a function of l for l = 2, ..., 12. Once more we see that the performance of both algorithms is very similar.

We next compared the performance of the algorithms in the presence of noise. We used kernels of dimension two and instances without redundant features. The label of each instance was flipped with probability ε. We ran 15 sets of experiments, for ε = 0.01, ..., 0.15. As before, each set included 10 runs, each of which used 1000 training examples and 1000 test examples. In Fig. 3 we show the average training error (left) and the average test error (right) for each of the algorithms. It is apparent from the graphs that BVMs built based on the exponential loss are much more sensitive to noise than SVMs and LVMs, and their generalization error degrades significantly, even for low noise rates. The generalization error of LVMs is, on the other hand, only slightly worse than that of SVMs, although the only algorithmic difference in constructing BVMs and LVMs is in the loss function. The fact that LVMs exhibit performance similar to SVM can be partially attributed to the fact that the asymptotic behavior of their loss functions is the same.

Figure 4: The training error, test error, and the cumulative L1 norm (Σ_{t'=1}^{t} |α_{t'}|) as a function of the number of leveraging iterations for LVM, BVM, and PBVM.

5 A norm-penalized version

One of the problems with boosting, and with the corresponding leveraging algorithm with the exponential loss described here, is that it might increase the confidence on a few instances while misclassifying many other instances, albeit with a small confidence. This often happens in late rounds, during which the distribution D_t(i) is concentrated on a few examples, and the leveraging algorithm typically assigns a large weight to a weak hypothesis that does not affect most of the instances. It is therefore desirable to control the complexity of the leveraged classifiers by limiting the magnitude of the base hypotheses' weights. Several methods have been proposed to limit the confidence of AdaBoost, using, for instance, regularization (e.g., [9]) or "smoothing" the predictions [11]. Here we propose a norm-penalized method for BVM that is very simple to implement and maintains the convexity properties of the objective function.
Following the idea of Cortes and Vapnik's SVMs in the non-separable case [1], we add the following penalization term: γ0 Σ_{t=1}^{T} exp(|α_t|^p). Simple algebraic manipulation implies that the objective function at the t-th round for BVMs with the penalization term above is

Z_t = Σ_{i=1}^{m} D_t(i) exp(-yi (α_t h_t(xi) + β_t)) + γ_t exp(|α_t|^p).   (8)

It is also easy to show that the penalty parameter after each round should be updated as γ_t = γ_{t-1} exp(|α_{t-1}|^p) / Z_{t-1}. Since Z_t < 1 unless there is no kernel function better than random, γ_t typically increases as a function of t, forcing more and more of the new weights to be small. Note that Eqn. (8) implies that the search for a base predictor h_t and weights α_t, β_t on each round can still be done independently of previous rounds by maintaining the distribution D_t and a single regularization value γ_t. The penalty term for p = 1 and p = 2 simply adds a diagonal term to the matrix of second-order derivatives (Eqn. (6)), and the algorithm follows the same lines (details omitted). For brevity we call the norm-penalized leveraging procedure PBVM. In Fig. 4 we plot the test error (right), training error (middle), and Σ_t |α_t| (left) as functions of the number of rounds for LVM, BVM, and PBVM with p = 1 and γ0 = 0.01. The training set in this example was made small on purpose (200 examples) and was contaminated with 5% label noise. In this very small example both LVM and BVM overfit while PBVM stops increasing the weights and finds a reasonably good classifier. The plots demonstrate that the norm-penalized version can safeguard against overfitting by preventing the weights from growing arbitrarily large, and that the effect of the penalized version is very similar to early stopping. We would like to note that we found experimentally that the norm-penalized version does not compensate for incorrect estimates of α and β due to malicious label noise. The experimental results given in the next section show, however, that it does indeed help in preventing overfitting when the training set is small.

Table 1: Summary of results for a collection of binary classification problems. For each dataset we list the number of examples and features, the classifier sizes, and the errors of SVM, LVM, BVM, and PBVM.

Dataset (source)     #Exmp:#Feat |  Size: SVM   LVM   BVM  PBVM |  Error: SVM  LVM  BVM PBVM
labor (uci)              57:16   |       12.5  13.7  16.1  13.6 |         6.0 14.0 14.0 12.0
echocard. (uci)          74:12   |        7.8  13.0  12.6  12.4 |         8.6  5.7 10.0 10.0
bridges (uci)           102:7    |       27.2  20.2  18.5  17.9 |        15.0 15.0 23.0 14.0
hepatitis (uci)         155:19   |       41.2  13.5  17.4  14.0 |        21.3 22.0 22.7 22.0
horse-colic (uci)       300:23   |      122.0  13.0  13.0  13.0 |        14.7 14.7 14.7 13.2
liver (uci)             345:6    |      228.6  11.3  12.8  10.7 |        33.8 35.6 33.5 35.6
ionosphere (uci)        351:34   |       63.4  58.9  67.9  59.1 |        13.7 13.1 16.9 13.7
vote (uci)              435:16   |       37.0  37.0  41.0  37.0 |         4.4  5.2  5.9  5.2
ticket1 (att)           556:78   |       48.1  84.6  89.3  82.3 |         8.4  3.3 11.5  5.1
ticket2 (att)           556:53   |       52.6  77.1  75.4  74.0 |         6.6  6.4  8.0  6.4
ticket3 (att)           556:61   |       46.1  76.2  77.8  73.3 |         6.9  4.9  7.6  6.7
bands (uci)             690:39   |      265.5  78.2  76.4  75.6 |        32.8 33.2 34.3 33.3
breast-wisc (uci)       699:9    |       49.3  26.5  24.4  24.0 |         3.5  3.6  4.1  4.1
pima (uci)              768:8    |      360.7  47.7  30.3  22.8 |        23.0 22.6 23.2 22.1
german (uci)           1000:10   |      485.2  89.8  96.5  87.0 |        23.5 24.0 23.8 24.1
weather (uci)          1000:35   |      562.0  52.0  52.0  52.0 |        25.9 25.4 25.4 25.4
network (att)          2600:35   |     1031.0  42.0  43.0  42.0 |        24.8 21.2 23.5 21.2
splice (uci)           3190:60   |      318.0 153.0 156.0 153.0 |         8.0  8.4  8.4  8.4
boa (att)              5000:68   |      637.0 183.0 178.0 160.0 |        41.5 40.8 40.8 41.0
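To make the penalized round of Eqn. (8) concrete, here is a hedged Python sketch for p = 2, the case where the penalty contributes a diagonal term to the Hessian of the quadratic approximation as noted above; the paper's experiments use p = 1, and all names here are our own illustrative choices.

    import numpy as np

    def penalized_newton_step(h, y, D, gamma):
        """One Newton step for Z_t(a, b) + gamma * exp(a**2), from (0, 0)."""
        a = b = 0.0
        w = D * np.exp(-y * (a * h + b))
        pen = gamma * np.exp(a * a)
        g = np.array([-(w * y * h).sum() + 2 * a * pen,    # dZ/da + penalty
                      -(w * y).sum()])                     # dZ/db (no penalty)
        H = np.array([[(w * h * h).sum() + (2 + 4 * a * a) * pen, (w * h).sum()],
                      [(w * h).sum(), w.sum()]])
        a, b = -np.linalg.solve(H, g)
        return a, b

    def update_gamma(gamma_prev, alpha_prev, z_prev, p=2):
        """Penalty update gamma_t = gamma_{t-1} exp(|alpha_{t-1}|^p) / Z_{t-1}."""
        return gamma_prev * np.exp(abs(alpha_prev) ** p) / z_prev

At the expansion point a = 0 the penalty leaves the gradient untouched and adds 2γ to the (α, α) entry of the Hessian, which is exactly the diagonal correction described in the text.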
6 Experiments with natural data

We compared the practical performance of leveraged vector machines with SVMs on a collection of nineteen datasets from the UCI machine learning repository and AT&T networking and marketing data. For SVM we set C = 100. We built each of the leveraged vector machines using 500 rounds. For PBVM we again used p = 1 and γ0 = 0.01. We used chunking in building the leveraged vector machines, dividing each training set into 10 blocks. For all the datasets, with the exception of "boa", we used 10-fold cross validation to calculate the test error. (The dataset "boa" has 5000 training examples and 6000 test examples.) The performance of SVM, LVM, and PBVM seems comparable. In fact, with the exception of a very few datasets the differences in error rates are not statistically significant. Of the three methods (SVM, PBVM, and LVM), LVM is the simplest to implement, and the time required to build an LVM is typically much shorter than that of an SVM. It is also worth noting that the size of leveraged machines is often smaller than the size of the corresponding SVM. Finally, it is apparent that PBVMs frequently yield better results than BVMs, especially for small and medium size datasets.

References
[1] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, September 1995.
[2] N. Duffy and D. Helmbold. A geometric approach to leveraging weak learners. EuroCOLT '99.
[3] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256-285, 1995.
[4] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Tech. Report, 1998.
[6] Michael Kearns and Leslie G. Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the Association for Computing Machinery, 41(1):67-95, January 1994.
[7] John D. Lafferty. Additive models, boosting and inference for generalized divergences. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999.
[8] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Doom II. Technical report, Dept. of Sys. Eng., ANU, 1999.
[9] G. Rätsch, T. Onoda, and K.-R. Müller. Regularizing AdaBoost. In Advances in Neural Info. Processing Systems 12, 1998.
[10] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
[11] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. COLT '98.
[12] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
[13] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
Monte Carlo POMDPs

Sebastian Thrun
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

We present a Monte Carlo algorithm for learning to act in partially observable Markov decision processes (POMDPs) with real-valued state and action spaces. Our approach uses importance sampling for representing beliefs, and Monte Carlo approximation for belief propagation. A reinforcement learning algorithm, value iteration, is employed to learn value functions over belief states. Finally, a sample-based version of nearest neighbor is used to generalize across states. Initial empirical results suggest that our approach works well in practical applications.

1 Introduction

POMDPs address the problem of acting optimally in partially observable dynamic environments [6]. In POMDPs, a learner interacts with a stochastic environment whose state is only partially observable. Actions change the state of the environment and lead to numerical penalties/rewards, which may be observed with an unknown temporal delay. The learner's goal is to devise a policy for action selection that maximizes the reward. Obviously, the POMDP framework embraces a large range of practical problems. Past work has predominantly studied POMDPs in discrete worlds [1]. Discrete worlds have the advantage that distributions over states (so-called "belief states") can be represented exactly, using one parameter per state. The optimal value function (for finite planning horizon) has been shown to be convex and piecewise linear [10, 14], which makes it possible to derive exact solutions for discrete POMDPs. Here we are interested in POMDPs with continuous state and action spaces, paying tribute to the fact that a large number of real-world problems are continuous in nature. In general, such POMDPs are not solvable exactly, and little is known about special cases that can be solved. This paper proposes an approximate approach, the MC-POMDP algorithm, which can accommodate real-valued spaces and models. The central idea is to use Monte Carlo sampling for belief representation and propagation. Reinforcement learning in belief space is employed to learn value functions, using a sample-based version of nearest neighbor for generalization. Empirical results illustrate that our approach finds close-to-optimal solutions efficiently.

2 Monte Carlo POMDPs

2.1 Preliminaries

POMDPs address the problem of selecting actions in stationary, partially observable, controllable Markov chains. To establish the basic vocabulary, let us define:

• State. At any point in time, the world is in a specific state, denoted by x.
The next state distribution, p(x ' I a,x) := Pr(xt = x' I at-I = a,Xt-l = x), describes the likelihood that action a, when executed at state x, leads to state x'. 3. The perceptual distribution, v( 0 Ix) := Pr( 0t = 0 I Xt = x), describes the likelihood of observing 0 when the world is in state x. A history is a sequence of states and observations. For simplicity, we assume that actions and observations are alternated. We use dt to denote the history leading up to time t: dt {Ot,at-l,Ot-l,at-2, ... ,ao,00} (1) The fundamental problem in POMDPs is to devise a policy for action selection that maximizes reward. A policy, denoted (T : (2) d--+a is a mapping from histories to actions. Assuming that actions are chosen by a policy (T, each policy induces an expected cumulative (and possibly discounted by a discount factor , :::; 1) reward, defined as 00 J<7 = L E [,T R(OT)] (3) T=O Here E[ ] denotes the mathematical expectation. The POMDP problem is, thus, to find a i.e., policy (T* that maximizes (T* = argmax J<7 (4) r, <7 2.2 Belief States To avoid the difficulty of learning a function with unbounded input (the history can be arbitrarily long), it is common practice to map histories into belief states, and learn a mapping from belief states to actions instead [10]. Formally, a belief state (denoted e) is a probability distribution over states conditioned on past actions and observations: et Pr(xt I dt} = Pr(xt lOt, at-I,"" 00) (5) Belief are computed incrementally, using knowledge of the POMDP's defining distributions 7r, p, and v. Initially = 7r (6) eo For t ~ Bt+1 0, we obtain Pr(xt+1 I Ot+l, at,???, 00) 0' Pr(Ot+1 I Xt+I,???, 00) Pr(Xt+l I at,???, 00) 0' Pr(ot+1 I Xt+l) 0' Pr(Ot+l I Xt+d J J Pr(Xt+l I at,???, 00, xt} Pr(xt+1 I at, Xt) et dXt (7) (8) Pr(xt I at,???, 00) dXt (9) (10) S. Thrun 1066 I I 9 / 0.2 I I 0.1 / I I I '''-',, \ \ \ \ \ I , \ I , \ \ '-, 11. _ _ _ ???????? 1111. 1.11111. ? 2 II I 10 .. .? I II I 12 III. HI "I 4 I ???... _ __ _ ....... 12 10 Figure 1: Sampling: (a) Likelihood-weighted sampling and (b) importance sampling. At the bottom of each graph, samples are shown that approximate the function f shown at the top. The height of the samples illustrates their importance/actors. Here a denotes a constant normalizer. The derivations of (8) and (10) follow directly from the fact that the environment is a stationary Markov chain, for which future states and observations are conditionally independent from past ones given knowledge of the state. Equation (9) is obtained using the theorem of total probability. Armed with the notion of belief states, the policy is now a mapping from belief states (instead of histories) to actions: (j : 0 -+ a (11) The legitimacy of conditioning a on 0, instead of d, follows directly from the fact that the environment is Markov, which implies that 0 is all one needs to know about the past to make optimal decisions. 2.3 Sample Representations Thus far, we intentionally left open how belief states 0 are represented. In prior work, state spaces have been discrete. In discrete worlds, beliefs can be represented by a collection of probabilities (one for each state), hence, beliefs can be represented exactly. Here were are interested in real-valued state spaces. In general, probability distributions over realvalued spaces possess infinitely many dimensions, hence cannot be represented on a digital computer. The key idea is to represent belief states by sets of (weighted) samples drawn from the belief distribution. 
Figure 1 illustrates two popular schemes for sample-based approximation: likelihood-weighted sampling, in which samples (shown at the bottom of Figure la) are drawn directly from the target distribution (labeled f in Figure la), and importance sampling, where samples are drawn from some other distribution, such as the curve labeled 9 in Figure 1b. In the latter case, samples x are annotated by a numerical importance factor p(x) f(x) g(x) (12) to account for the difference in the sampling distribution, g, and the target distribution f (the height of the bars in Figure 1b illustrates the importance factors). Importance sampling requires that f > 0 -+ 9 > 0, which will be the case throughout this paper. Obviously, both sampling methods generate approximations only. Under mild assumptions, they converge to the target distribution at a rate of -j;;, with N denoting the sample set size [16]. In the context of POMDPs, the use of sample-based representations gives rise to the following algorithm for approximate belief propagation (c.f., Equation (10?: Algorithm particleJilter(Ot , at, 0t+l): Ot+l = 0 doN times: draw random state Xt from Ot 1067 Monte Carlo POMDPs sample Xt+1 according to p(Xt+1 I at, xt} set importance factorp(xt+J) = V(Ot+1 I xt+d add (Xt+l,p(Xt+I)) toB t + 1 normalize all p(Xt+d E Bt+1 so that LP(Xt+d = 1 return Bt +1 This algorithm converges to (10) for arbitrary models p, v, and 11" and arbitrary belief distributions B, defined over discrete, continuous, or mixed continuous-discrete state and action spaces. It has, with minor modifications, been proposed under names like particle filters [131. condensation algorithm [5], survival of the fittest [8], and, in the context of robotics, Monte Carlo localization [4]. 2.4 Projection In conventional planning, the result of applying an action at at a state Xt is a distribution Pr(Xt+l, Rt+1 I at, xt} over states Xt+1 and rewards R t +1 at the next time step. This operation is called projection. In POMDPs, the state Xt is unknown. Instead, one has to compute the result of applying action at to a belief state Bt . The result is a distribution Pr(Bt+I' Rt+ 1 I at, Bt ) over belief states Bt +1 and rewards Rt+ I. Since belief states themselves are distributions, the result of a projection in POMDPs is, technically, a distribution over distributions. The projection algorithm is derived as follows. Using total probability, we obtain: Pr(Bt+l , R t+1 I at,Bd Pr(Bt+I,Rt+11 at,dt} (13) J = !r(Bt+l , Rt+: I Ot+l, at, dt ), !r(ot+I,,1 at, dt}, dOt+1 (14) (*) (**) The term (*) has already been derived in the previous section (c.f., Equation (10?, under the observation that the reward R t +1 is trivially computed from the observation 0t+l. The second term, (**), is obtained by integrating out the unknown variables, Xt+1 and Xt. and by once again exploiting the Markov property: Pr(Ot+l I at, dt} J J J Pr(Ot+1 I Xt+d Pr(Ot+1 I Xt+l) V(Ot+1 I Xt+d Pr(xt+1 J J I at. dt} Pr(xt+1 p(Xt+1 dXt+1 I Xt, at} I Xt, at} (15) Pr(xt I dt} dXt dXt-t616) Bt(xt) dXt dXt+1 (17) This leads to the following approximate algorithm for projecting belief state. In the spirit of this paper, our approach uses Monte Carlo integration instead of exact integration. It represents distributions (and distributions over distributions) by samples drawn from such distributions. 
Algorithm particle_projection(Bt, at): 8t =0 doN times: draw random state Xt from Bt sample a next state Xt+1 accordingtop(xt+1 I at,xt) sample an observation Ot+1 according to V(Ot+1 I Xt+d compute Bt+1 =partic1e_filter(Bt. at. Ot+l) add (Bt+I,R(ot+J)) t08t return8 t The result of this algorithm, 8 t , is a sample set of belief states Bt+1 and rewards Rt+I, drawn from the desired distribution Pr( Bt+ I, Rt+ 1 I Bt , at}. As N ~ 00, at converges with probability 1 to the true posterior [16]. 1068 S. Thrun 2.5 Learning Value Functions Following the rich literature on reinforcement learning [7, 15], our approach solves the POMDP problem by value iteration in belief space. More specifically, our approach recursively learns a value function Q over belief states and action, by backing up values from subsequent belief states: Q(Ot,at} ~ E[R(ot+t}+,m:xQ(Ot+l,a)] (18) Leaving open (for a moment) how Q is represented, it is easy to be seen how the algorithm particle_projection can be applied to compute a Monte Carlo approximation of the right hand-side expression: Given a belief state Ot and an action at, particle_projection computes a sample of R( 0t+ I) and Ot+ I, from which the expected value on the right hand side of (18) can be approximated. It has been shown [2] that if both sides of (18) are equal, the greedy policy (1'Q(O) = argmaxQ(O,a) (19) a = is optimal, i.e., (1'* (1'Q. Furthermore, it has been shown (for the discrete case!) that repetitive application of (18) leads to an optimal value function and, thus, to the optimal policy [17, 3]. Our approach essentially performs model-based reinforcement learning in belief space using approximate sample-based representations. This makes it possible to apply a rich bag of tricks found in the literature on MDPs. In our experiments below, we use online reinforcement learning with counter-based exploration and experience replay [9] to determine the order in which belief states are updated. 2.6 Nearest Neighbor We now return to the issue how to represent Q. Since we are operating in real-valued spaces, some sort of function approximation method is called for. However, recall that Q accepts a probability distribution (a sample set) as an input. This makes most existing function approximators (e.g., neural networks) inapplicable. In our current implementation, nearest neighbor [11] is applied to represent Q. More specifically, our algorithm maintains a set of sample sets 0 (belief states) annotated by an action a and a Q-value Q(O, a). When a new belief state Of is encountered, its Q-value is obtained by finding the k nearest neighbors in the database, and linearly averaging their Q-values. If there aren't sufficiently many neighbors (within a pre-specified maximum distance), Of is added to the database; hence, the database grows over time. Our approach uses KL divergence (relative entropy) as a distance function I. Technically, the KL-divergence between two continuous distributions is well-defined. When applied to sample sets, however, it cannot be computed. Hence, when evaluating the distance between two different sample sets, our approach maps them into continuous-valued densities using Gaussian kernels, and uses Monte Carlo sampling to approximate the KL divergence between them. This algorithm is fairly generic an extension of nearest neighbors to function approximation in density space, where densities are represented by samples. Space limitations preclude us from providing further detail (see [11, 12]). 
3 Experimental Results Preliminary results have been obtained in a world shown in two domains, one synthetic and one using a simulator of a RWI B21 robot. In the synthetic environment (Figure 2a), the agents starts at the lower left comer. Its objective is to reach "heaven" which is either at the upper left comer or the lower right 1Strictly speaking, KL divergence is not a distance metric, but this is ignored here. Monte Carlo POMDPs ... (a.... ) , . -_ _...,.._ I=) P \on... '-- 1069 (~"----~~,-~--~-v----v.n-"'" 50 ,1M 25 ?25 -50 t.S:: t.......,.? ?75 ?100 0 20 40 60 80 10 15 20 25 Figure 2: (a) The environment, schematically. (b) Average perfonnance (reward) as a function of training episodes. The black graph corresponds to the smaller environment (25 steps min), the grey graph to the larger environment (50 steps min). (c) Same results, plotted as a function of number of backups (in thousands). comer. The opposite location is "hell." The agent does not know the location of heaven, but it can ask a "priest" who is located in the upper right comer. Thus, an optimal solution requires the agent to go first to the priest, and then head to heaven. The state space contains a real-valued (coordinates of the agent) and discrete (location of heaven) component. Both are unobservable: In addition to not knowing the location of heaven, the agent also cannot sense its (real-valued) coordinates. 5% random motion noise is injected at each move. When an agent hits a boundary, it is penalized, but it is also told which boundary it hit (which makes it possible to infer its coordinates along one axis). However, notice that the initial coordinates of the agent are known. The optimal solution takes approximately 25 steps; thus, a successful POMDP planner must be capable of looking 25 steps ahead. We will use the term "successful policy" to refer to a policy that always leads to heaven, even if the path is suboptimal. For a policy to be successful, the agent must have learned to first move to the priest (information gathering), and then proceed to the right target location. Figures 2b&c show performance results, averaged over 13 experiments. The solid (black) curve in both diagrams plots the average cumulative reward J as a function of the number of training episodes (Figure 2b), and as a function of the number of backups (Figure 2c). A successful policy was consistently found after 17 episodes (or 6,150 backups), in all 13 experiments. In our current implementation, 6,150 backups require approximately 29 minutes on a Pentium Pc. In some experiments, a successful policy was identified in 6 episodes (less than 1,500 backups or 7 minutes). After a successful policy is found, further learning gradually optimizes the path. To investigate scaling, we doubled the size of the environment (quadrupling the size of the state space), making the optimal sol uti on 50 steps long. The results are depicted by the gray curves in Figures 2b&c. Here a successful policy is consistently found after 33 episodes (10,250 backups, 58 minutes). In some runs, a successful policy is identified after only 14 episodes. We also applied MC-POMDPs to a robotic locate-and-retrieve task. Here a robot (Figure 3a) is to find and grasp an object somewhere in its vicinity (at floor or table height). The robot's task is to grasp the object using its gripper. It is rewarded for successfully grasping the object, and penalized for unsuccessful grasps or for moving too far away from the object. 
The state space is continuous in x and y coordinates, and discrete in the object's height. The robot uses a mono-camera system for object detection; hence, viewing the object from a single location is insufficient for its 3D localization. Moreover, initially the object might not be in sight of the robot's camera, so that the robot must look around first. In our simulation, we assume 30% general detection error (false-positive and false-negative), with additional Gaussian noise if the object is detected correctly. The robot's actions include turns (by a variable angle), translations (by a variable distance), and grasps (at one of two legal heights). Robot control is erroneous with a variance of20% (in x-y-space) and 5% (in rotational space). Typical belief states range from uniformly distributed sample sets (initial belief) to samples narrowly focused on a specific x-y-z location. 30 1070 S. Thrun (b) (c) % success 1 L , \ \ \ , C OB 0.6 0.4 2000 4000 6000 BOOO iteration Figure 3: Find and fetch task: (a) The mobile robot with gripper and camera, holding the target object (experiments are carried out in simulation!), (b) three successful runs (trajectory projected into 2D), and (c) success rate as a function of number of planning steps. Figure 3c shows the rate of successful grasps as a function of iterations (actions). While initially, the robot fails to grasp the object, after approximately 4,000 iterations its performance surpasses 80%. Here the planning time is in the order of 2 hours. However, the robot fails to reach 100%. This is in part because certain initial configurations make it impossible to succeed (e.g., when the object is too close to the maximum allowed distance), in part because the robot occasionally misses the object by a few centimeters. Figure 3b depicts three successful example trajectories. In all three, the robot initially searches the object, then moves towards it and grasps it successfully. 4 Discussion We have presented a Monte Carlo approach for learning how to act in partially observable Markov decision processes (POMDPs). Our approach represents all belief distributions using samples drawn from these distributions. Reinforcement learning in belief space is applied to learn optimal policies, using a sample-based version of nearest neighbor for generalization. Backups are performed using Monte Carlo sampling. Initial experimental results demonstrate that our approach is applicable to real-valued domains, and that it yields good performance results in environments that are-by POMDP standards-relatively large. References [1] AAAI Fall symposium on POMDPs. 1998. See http://www.cs.duke.edu/ ... mlittman/talks/ pomdp-symposiurn.html [2] R E. Bellman. Dynamic Programming. Princeton University Press, 1957. [3] P. Dayan and T. 1. Sejnowski. ID('>') converges with probability 1. 1993. [4] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte carlo localization: Efficient position estimation for mobile robots. AAAI-99. [5] M. lsard and A. Blake. Condensation: conditional density propagationforvisual tracking.lnternationalJoumalofComputer Vision, 1998. [6] L.P. Kaelbling, M.L. Littman, and A.R Cassandra. Planning and acting in partially observable stochastic domains. Submitted for publication, 1997. [7] L.P. Kaelbling, M.L. Littman, and A. W. Moore. Reinforcement learning: A survey. lAIR,4, 1996. [8] K Kanazawa, D. Koller, and S.l. Russell. Stochastic simulation algorithms for dynamic probabilistic networks. UAI-95. [9] L.-l. Lin. 
[9] L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8, 1992.
[10] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. ICML-95.
[11] A. W. Moore, C. G. Atkeson, and S. A. Schaal. Locally weighted learning for control. AI Review, 11, 1997.
[12] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. TR 1999-8, Statistics, Stanford University, 1999.
[13] M. Pitt and N. Shephard. Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 1999.
[14] E. Sondik. The Optimal Control of Partially Observable Markov Processes. PhD thesis, Stanford, 1971.
[15] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[16] M. A. Tanner. Tools for Statistical Inference. Springer Verlag, 1993.
[17] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, 1989.
Large Margin DAGs for Multiclass Classification

John C. Platt, Microsoft Research, 1 Microsoft Way, Redmond, WA 98052, jplatt@microsoft.com
Nello Cristianini, Dept. of Engineering Mathematics, University of Bristol, Bristol, BS8 1TR - UK, nello.cristianini@bristol.ac.uk
John Shawe-Taylor, Department of Computer Science, Royal Holloway College - University of London, Egham, Surrey, TW20 0EX - UK, j.shawe-taylor@dcs.rhbnc.ac.uk

Abstract

We present a new learning architecture: the Decision Directed Acyclic Graph (DDAG), which is used to combine many two-class classifiers into a multiclass classifier. For an N-class problem, the DDAG contains N(N-1)/2 classifiers, one for each pair of classes. We present a VC analysis of the case when the node classifiers are hyperplanes; the resulting bound on the test error depends on N and on the margin achieved at the nodes, but not on the dimension of the space. This motivates an algorithm, DAGSVM, which operates in a kernel-induced feature space and uses two-class maximal margin hyperplanes at each decision node of the DDAG. The DAGSVM is substantially faster to train and evaluate than either the standard algorithm or Max Wins, while maintaining comparable accuracy to both of these algorithms.

1 Introduction

The problem of multiclass classification, especially for systems like SVMs, does not present an easy solution. It is generally simpler to construct classifier theory and algorithms for two mutually-exclusive classes than for N mutually-exclusive classes. We believe constructing N-class SVMs is still an unsolved research problem.

The standard method for N-class SVMs [10] is to construct N SVMs. The ith SVM will be trained with all of the examples in the ith class with positive labels, and all other examples with negative labels. We refer to SVMs trained in this way as 1-v-r SVMs (short for one-versus-rest). The final output of the N 1-v-r SVMs is the class that corresponds to the SVM with the highest output value. Unfortunately, there is no bound on the generalization error for the 1-v-r SVM, and the training time of the standard method scales linearly with N.

Another method for constructing N-class classifiers from SVMs is derived from previous research into combining two-class classifiers. Knerr [5] suggested constructing all possible two-class classifiers from a training set of N classes, each classifier being trained on only two out of N classes. There would thus be K = N(N-1)/2 classifiers. When applied to SVMs, we refer to this as 1-v-1 SVMs (short for one-versus-one). Knerr suggested combining these two-class classifiers with an "AND" gate [5]. Friedman [4] suggested a Max Wins algorithm: each 1-v-1 classifier casts one vote for its preferred class, and the final result is the class with the most votes. Friedman shows circumstances in which this algorithm is Bayes optimal. Kreßel [6] applies the Max Wins algorithm to Support Vector Machines with excellent results. A significant disadvantage of the 1-v-1 approach, however, is that, unless the individual classifiers are carefully regularized (as in SVMs), the overall N-class classifier system will tend to overfit. The "AND" combination method and the Max Wins combination method do not have bounds on the generalization error. Finally, the size of the 1-v-1 classifier system may grow superlinearly with N, and hence, may be slow to evaluate on large problems.
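For concreteness, here is a minimal sketch of the Max Wins voting rule just described. It is an illustration, not code from the paper; the dictionary `pairwise`, mapping a class pair (i, j) to a trained 1-v-1 decision function that returns its preferred class, is an assumed representation.

```python
from collections import Counter
from itertools import combinations

def max_wins(x, classes, pairwise):
    """Friedman's Max Wins rule: every 1-v-1 classifier casts one vote for
    the class it prefers on input x; the class with the most votes wins."""
    votes = Counter()
    for i, j in combinations(classes, 2):   # all K = N(N-1)/2 class pairs
        votes[pairwise[(i, j)](x)] += 1     # one vote per pairwise classifier
    return votes.most_common(1)[0][0]
```

Note that this rule evaluates all K pairwise classifiers on every input, which is one source of the evaluation cost discussed above.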
In Section 2, we introduce a new multiclass learning architecture, called the Decision Directed Acyclic Graph (DDAG). The DDAG contains N(N-1)/2 nodes, each with an associated 1-v-1 classifier. In Section 3, we present a VC analysis of DDAGs whose classifiers are hyperplanes, showing that the margins achieved at the decision nodes and the size of the graph both affect their performance, while the dimensionality of the input space does not. The VC analysis indicates that building large margin DAGs in high-dimensional feature spaces can yield good generalization performance. Using such a bound as a guide, in Section 4, we introduce a novel algorithm for multiclass classification based on placing 1-v-1 SVMs into nodes of a DDAG. This algorithm, called DAGSVM, is efficient to train and evaluate. Empirical evidence of this efficiency is shown in Section 5.

2 Decision DAGs

A Directed Acyclic Graph (DAG) is a graph whose edges have an orientation and no cycles. A Rooted DAG has a unique node such that it is the only node which has no arcs pointing into it. A Rooted Binary DAG has nodes which have either 0 or 2 arcs leaving them. We will use Rooted Binary DAGs in order to define a class of functions to be used in classification tasks. The class of functions computed by Rooted Binary DAGs is formally defined as follows.

Definition 1 Decision DAGs (DDAGs). Given a space X and a set of boolean functions F = {f : X → {0, 1}}, the class DDAG(F) of Decision DAGs on N classes over F are functions which can be implemented using a rooted binary DAG with N leaves labeled by the classes, where each of the K = N(N-1)/2 internal nodes is labeled with an element of F. The nodes are arranged in a triangle with the single root node at the top, two nodes in the second layer, and so on until the final layer of N leaves. The i-th node in layer j < N is connected to the i-th and (i+1)-st node in the (j+1)-st layer.

To evaluate a particular DDAG G on input x ∈ X, starting at the root node, the binary function at a node is evaluated. The node is then exited via the left edge, if the binary function is zero; or the right edge, if the binary function is one. The next node's binary function is then evaluated. The value of the decision function D(x) is the value associated with the final leaf node (see Figure 1(a)). The path taken through the DDAG is known as the evaluation path. The input x reaches a node of the graph if that node is on the evaluation path for x. We refer to the decision node distinguishing classes i and j as the ij-node. Assuming that the number of a leaf is its class, this node is the i-th node in the (N - j + i)-th layer, provided i < j. Similarly the j-nodes are those nodes involving class j, that is, the internal nodes on the two diagonals containing the leaf labeled by j.

Figure 1: (a) The decision DAG for finding the best class out of four classes. The equivalent list state for each node is shown next to that node. (b) A diagram of the input space of a four-class problem. A 1-v-1 SVM can only exclude one class from consideration.

The DDAG is equivalent to operating on a list, where each node eliminates one class from the list. The list is initialized with a list of all classes. A test point is evaluated against the decision node that corresponds to the first and last elements of the list. If the node prefers one of the two classes, the other class is eliminated from the list, and the DDAG proceeds to test the first and last elements of the new list. The DDAG terminates when only one class remains in the list. Thus, for a problem with N classes, N-1 decision nodes will be evaluated in order to derive an answer.
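This list-based procedure is straightforward to implement. The sketch below is an illustration under assumptions, not code from the paper: `node` is assumed to map the pair formed by the first and last classes of the current list to a trained two-class decision function returning its preferred class.

```python
def ddag_classify(x, classes, node):
    """Evaluate a DDAG on input x by operating on a list of classes.

    Starting from the full class list, the decision node for the first and
    last elements is evaluated and the losing class is removed; exactly
    N-1 node evaluations are performed before a single class remains."""
    remaining = list(classes)
    while len(remaining) > 1:
        i, j = remaining[0], remaining[-1]
        preferred = node[(i, j)](x)    # two-class decision for classes i, j
        if preferred == i:
            remaining.pop()            # eliminate class j
        else:
            remaining.pop(0)           # eliminate class i
    return remaining[0]
```

Only N-1 of the K = N(N-1)/2 node classifiers are consulted for any given input, in contrast to Max Wins, which consults all of them.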
The current state of the list is the total state of the system. Therefore, since a list state is reachable in more than one possible path through the system, the decision graph the algorithm traverses is a DAG, not simply a tree. Decision DAGs naturally generalize the class of Decision Trees, allowing for a more efficient representation of redundancies and repetitions that can occur in different branches of the tree, by allowing the merging of different decision paths. The class of functions implemented is the same as that of Generalized Decision Trees [1], but this particular representation presents both computational and learning-theoretical advantages.

3 Analysis of Generalization

In this paper we study DDAGs where the node classifiers are hyperplanes. We define a Perceptron DDAG to be a DDAG with a perceptron at every node. Let w be the (unit) weight vector correctly splitting the i and j classes at the ij-node with threshold $\theta$. We define the margin of the ij-node to be $\gamma = \min_{c(x)=i,j} |\langle w, x\rangle - \theta|$, where c(x) is the class associated to training example x. Note that, in this definition, we only take into account examples with class labels equal to i or j.

Theorem 1 Suppose we are able to classify a random m sample of labeled examples using a Perceptron DDAG on N classes containing K decision nodes with margins $\gamma_i$ at node i; then we can bound the generalization error with probability greater than $1 - \delta$ to be less than
$$\frac{130R^2}{m}\left(D'\log(4em)\log(4m) + \log\frac{2(2m)^K}{\delta}\right),$$
where $D' = \sum_{i=1}^{K} 1/\gamma_i^2$, and R is the radius of a ball containing the distribution's support.

Proof: see Appendix. □

Theorem 1 implies that we can control the capacity of DDAGs by enlarging their margin. Note that, in some situations, this bound may be pessimistic: the DDAG partitions the input space into polytopic regions, each of which is mapped to a leaf node and assigned to a specific class. Intuitively, the only margins that should matter are the ones relative to the boundaries of the cell where a given training point is assigned, whereas the bound in Theorem 1 depends on all the margins in the graph. By the above observations, we would expect that a DDAG whose j-node margins are large would be accurate at identifying class j, even when other nodes do not have large margins. Theorem 2 substantiates this by showing that the appropriate bound depends only on the j-node margins, but first we introduce the notation
$$\epsilon_j(G) = P\{x : (x \text{ in class } j \text{ and } x \text{ misclassified by } G) \text{ or } x \text{ misclassified as class } j \text{ by } G\}.$$

Theorem 2 Suppose we are able to correctly distinguish class j from the other classes in a random m-sample with a DDAG G over N classes containing K decision nodes with margins $\gamma_i$ at node i; then with probability $1 - \delta$,
$$\epsilon_j(G) \le \frac{130R^2}{m}\left(D'\log(4em)\log(4m) + \log\frac{2(2m)^{N-1}}{\delta}\right),$$
where $D' = \sum_{i \in j\text{-nodes}} 1/\gamma_i^2$, and R is the radius of a ball containing the support of the distribution.
Proof: follows exactly Lemma 4 and Theorem 1, but is omitted. □

4 The DAGSVM algorithm

Based on the previous analysis, we propose a new algorithm, called the Directed Acyclic Graph SVM (DAGSVM) algorithm, which combines the results of 1-v-1 SVMs. We will show that this combination method is efficient to train and evaluate. The analysis of Section 3 indicates that maximizing the margin of all of the nodes in a DDAG will minimize a bound on the generalization error. This bound is also independent of input dimensionality. Therefore, we will create a DDAG whose nodes are maximum margin classifiers over a kernel-induced feature space. Such a DDAG is obtained by training each ij-node only on the subset of training points labeled by i or j. The final class decision is derived by using the DDAG architecture, described in Section 2. The DAGSVM separates the individual classes with large margin. It is safe to discard the losing class at each 1-v-1 decision because, for the hard margin case, all of the examples of the losing class are far away from the decision surface (see Figure 1(b)).

For the DAGSVM, the choice of the class order in the list (or DDAG) is arbitrary. The experiments in Section 5 simply use a list of classes in the natural numerical (or alphabetical) order. Limited experimentation with re-ordering the list did not yield significant changes in accuracy performance.

The DAGSVM algorithm is superior to other multiclass SVM algorithms in both training and evaluation time. Empirically, SVM training is observed to scale super-linearly with the training set size m [7], according to a power law $T = cm^\gamma$, where $\gamma \approx 2$ for algorithms based on the decomposition method, with some proportionality constant c. For the standard 1-v-r multiclass SVM training algorithm, the entire training set is used to create all N classifiers. Hence the training time for 1-v-r is
$$T_{1\text{-v-r}} = cNm^\gamma. \qquad (1)$$
Assuming that the classes have the same number of examples, training each 1-v-1 SVM only requires 2m/N training examples. Thus, training K 1-v-1 SVMs would require
$$T_{1\text{-v-1}} = c\,\frac{N(N-1)}{2}\left(\frac{2m}{N}\right)^\gamma \approx 2^{\gamma-1} c N^{2-\gamma} m^\gamma. \qquad (2)$$
For a typical case, where $\gamma = 2$, the amount of time required to train all of the 1-v-1 SVMs is independent of N, and is only twice that of training a single 1-v-r SVM. Using 1-v-1 SVMs with a combination algorithm is thus preferred for training time.

5 Empirical Comparisons and Conclusions

The DAGSVM algorithm was evaluated on three different test sets: the USPS handwritten digit data set [10], the UCI Letter data set [2], and the UCI Covertype data set [2]. The USPS digit data consists of 10 classes (0-9), whose inputs are pixels of a scaled input image. There are 7291 training examples and 2007 test examples. The UCI Letter data consists of 26 classes (A-Z), whose inputs are measured statistics of printed font glyphs. We used the first 16000 examples for training, and the last 4000 for testing. All inputs of the UCI Letter data set were scaled to lie in [-1, 1]. The UCI Covertype data consists of 7 classes of trees, where the inputs are terrain features. There are 11340 training examples and 565893 test examples. All of the continuous inputs for Covertype were scaled to have zero mean and unit variance. Discrete inputs were represented as a 1-of-n code.

On each data set, we trained N 1-v-r SVMs and K 1-v-1 SVMs, using SMO [7], with soft margins. We combined the 1-v-1 SVMs both with the Max Wins algorithm and with DAGSVM.
The choice of kernel and of the regularizing parameter C was determined via performance on a validation set. The validation performance was measured by training on 70% of the training set and testing the combination algorithm on 30% of the training set (except for Covertype, where the UCI validation set was used). The best kernel was selected from a set of polynomial kernels (from degree 1 through 6), both homogeneous and inhomogeneous, and Gaussian kernels, with various σ. The Gaussian kernel was always found to be best.

Table 1: Experimental Results

USPS              σ      C    Error Rate (%)  Kernel Evaluations  Training CPU Time (sec)  Classifier Size (Kparameters)
1-v-r             3.58   100  4.7             2936                3532                     760
Max Wins          5.06   100  4.5             1877                307                      487
DAGSVM            5.06   100  4.4             819                 307                      487
Neural Net [10]               5.9

UCI Letter
1-v-r             0.447  100  2.2             8183                1764                     148
Max Wins          0.632  100  2.4             7357                441                      160
DAGSVM            0.447  10   2.2             3834                792                      223
Neural Net                    4.3

UCI Covertype
1-v-r             1      10   30.2            7366                4210                     105
Max Wins          1      10   29.0            7238                1305                     107
DAGSVM            1      10   29.2            4390                1305                     107
Neural Net [2]                30

Table 1 shows the results of the experiments. The optimal parameters for all three multiclass SVM algorithms are very similar for both data sets. Also, the error rates are similar for all three algorithms for both data sets. Neither 1-v-r nor Max Wins is statistically significantly better than DAGSVM using McNemar's test [3] at a 0.05 significance level for USPS or UCI Letter. For UCI Covertype, Max Wins is slightly better than either of the other SVM-based algorithms. The results for a neural network trained on the same data sets are shown for a baseline accuracy comparison.

The three algorithms distinguish themselves in training time, evaluation time, and classifier size. The number of kernel evaluations is a good indication of evaluation time. For 1-v-r and Max Wins, the number of kernel evaluations is the total number of unique support vectors for all SVMs. For the DAGSVM, the number of kernel evaluations is the number of unique support vectors averaged over the evaluation paths through the DDAG taken by the test set. As can be seen in Table 1, Max Wins is faster than 1-v-r SVMs, due to shared support vectors between the 1-v-1 classifiers. The DAGSVM has the fastest evaluation. The DAGSVM is between a factor of 1.6 and 2.3 times faster to evaluate than Max Wins. The DAGSVM algorithm is also substantially faster to train than the standard 1-v-r SVM algorithm: a factor of 2.2 and 11.5 times faster for these two data sets. The Max Wins algorithm shares a similar training speed advantage. Because the SVM basis functions are drawn from a limited set, they can be shared across classifiers for a great savings in classifier size. The number of parameters for DAGSVM (and Max Wins) is comparable to the number of parameters for 1-v-r SVM, even though there are N(N-1)/2 classifiers, rather than N.

In summary, we have created a Decision DAG architecture, which is amenable to a VC-style bound of generalization error. Using this bound, we created the DAGSVM algorithm, which places a two-class SVM at every node of the DDAG. The DAGSVM algorithm was tested versus the standard 1-v-r multiclass SVM algorithm, and Friedman's Max Wins combination algorithm. The DAGSVM algorithm yields comparable accuracy and memory usage to the other two algorithms, but yields substantial improvements in both training and evaluation time.

6 Appendix: Proof of Main Theorem

Definition 2 Let F be a set of real valued functions.
We say that a set of points X is $\gamma$-shattered by F relative to $r = (r_x)_{x \in X}$ if there are real numbers $r_x$, indexed by $x \in X$, such that for all binary vectors b indexed by X, there is a function $f_b \in F$ satisfying $(2b_x - 1)f_b(x) \ge (2b_x - 1)r_x + \gamma$. The fat shattering dimension, $\mathrm{fat}_F$, of the set F is a function from the positive real numbers to the integers which maps a value $\gamma$ to the size of the largest $\gamma$-shattered set, if the set is finite, or maps to infinity otherwise.

As a relevant example, consider the class $F_{\mathrm{lin}} = \{x \to \langle w, x\rangle - \theta : \|w\| = 1\}$. We quote the following result from [1].

Theorem 3 Let $F_{\mathrm{lin}}$ be restricted to points in a ball of n dimensions of radius R about the origin. Then $\mathrm{fat}_{F_{\mathrm{lin}}}(\gamma) \le \min\{R^2/\gamma^2,\, n+1\}$.

We will bound generalization with a technique that closely resembles the technique used in [1] to study Perceptron Decision Trees. We will now give a lemma and a theorem: the lemma bounds the probability over a double sample that the first half has zero error and the second error greater than an appropriate $\epsilon$. We assume that the DDAG on N classes has $K = N(N-1)/2$ nodes and we denote $\mathrm{fat}_{F_{\mathrm{lin}}}(\gamma)$ by $\mathrm{fat}(\gamma)$.

Lemma 4 Let G be a DDAG on N classes with $K = N(N-1)/2$ decision nodes with margins $\gamma_1, \gamma_2, \ldots, \gamma_K$ at the decision nodes satisfying $k_i = \mathrm{fat}(\gamma_i/8)$, where fat is continuous from the right. Then the following bound holds:
$$P^{2m}\{\mathbf{xy} : \exists \text{ a graph } G \text{ which separates classes } i \text{ and } j \text{ at the } ij\text{-node for all } \mathbf{x} \text{ in } \mathbf{x}, \text{ and a fraction of points misclassified in } \mathbf{y} > \epsilon(m, K, \delta)\} < \delta,$$
where $\epsilon(m, K, \delta) = \frac{1}{m}\left(D\log(8m) + \log\frac{2^K}{\delta}\right)$ and $D = \sum_{i=1}^{K} k_i \log(4em/k_i)$.

Proof: The proof of Lemma 4 is omitted for space reasons, but is formally analogous to the proof of Lemma 4.4 in [8], and can easily be reconstructed from it. □
[2] C. Blake, E. Keogh, and C. Merz. UCI repository of machine leaming databases. Dept. of information and computer sciences, University of Califomia, Irvine, 1998. http://www.ics.uci.edul,,,mleamIMLRepository.html. [3] T. G. Dietterich. Approximate statistical tests for comparing supervised classification leaming algorithms. Neural Computation, 10: 1895-1924, 1998. [4] J. H. Friedman. Another approach to polychotomous classification. Technical report, Stanford Department of Statistics, 1996. htlp:llwww-stat.stanford.edulreports/friedmanlpoly.ps.Z. [5] S. Knerr, L. Personnaz, and G. Dreyfus. Single-layer leaming revisited: A stepwise procedure for building and training a neural network. In Fogelman-Soulie and Herault, editors, Neurocomputing: Algorithms, Architectures and Applications, NATO ASI. Springer, 1990. [6] U. KreGel. Pairwise classification and support vector machines. In B. SchOlkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 255-268. MIT Press, Cambridge, MA, 1999. [7] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods Support Vector Learning, pages 185-208. MIT Press, Cambridge, MA, 1999. [8] J. Shawe-Taylor and N. Cristianini. Data dependent structural risk minimization for perceptron decision trees. In M. Jordan, M. Keams, and S. SoJla, editors, Advances in Neural Information Processing Systems, volume 10, pages 336-342. MIT Press, 1999. [9] V. Vapnik. Estimation of Dependences Based on Empirical Data [in Russian). Nauka, Moscow, 1979. (English translation: Springer Verlag, New York, 1982). [10] V. Vapnik . Statistical Learning Theory. Wiley, New York, 1998.
Effective Learning Requires Neuronal Remodeling of Hebbian Synapses

Gal Chechik, Isaac Meilijson, Eytan Ruppin
School of Mathematical Sciences, Tel-Aviv University, Tel Aviv, Israel
ggal@math.tau.ac.il  isaco@math.tau.ac.il  ruppin@math.tau.ac.il

Abstract

This paper revisits the classical neuroscience paradigm of Hebbian learning. We find that a necessary requirement for effective associative memory learning is that the efficacies of the incoming synapses should be uncorrelated. This requirement is difficult to achieve in a robust manner by Hebbian synaptic learning, since it depends on network-level information. Effective learning can yet be obtained by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This normalization drastically improves the memory capacity of associative networks, from an essentially bounded capacity to one that linearly scales with the network's size. It also enables the effective storage of patterns with heterogeneous coding levels in a single network. Such neuronal normalization can be successfully carried out by activity-dependent homeostasis of the neuron's synaptic efficacies, which was recently observed in cortical tissue. Thus, our findings strongly suggest that effective associative learning with Hebbian synapses alone is biologically implausible and that Hebbian synapses must be continuously remodeled by neuronally-driven regulatory processes in the brain.

1 Introduction

Synapse-specific changes in synaptic efficacies, carried out by long-term potentiation (LTP) and depression (LTD), are thought to underlie cortical self-organization and learning in the brain. In accordance with the Hebbian paradigm, LTP and LTD modify synaptic efficacies as a function of the firing of pre- and post-synaptic neurons. This paper revisits the Hebbian paradigm, showing that synaptic learning alone cannot provide effective associative learning in a biologically plausible manner, and must be complemented with neuronally-driven synaptic remodeling.

The importance of neuronally driven normalization processes has already been demonstrated in the context of self-organization of cortical maps [1, 2] and in continuous unsupervised learning as in principal-component-analysis networks [3]. In these scenarios normalization is necessary to prevent the excessive growth of synaptic efficacies.
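As a toy illustration of the neuronal normalization advocated above, the sketch below applies a Hebbian update and then shifts each neuron's incoming efficacies so that they sum to zero. This is a minimal assumed reading of the zero-sum constraint, written for illustration only; it is not the authors' model, and the learning rule details and learning rate are arbitrary.

```python
import numpy as np

def hebbian_step_with_zero_sum(W, pre, post, eta=0.1):
    """One Hebbian update followed by neuronally-driven normalization.

    W[i, j] is the efficacy of the synapse from input neuron j onto output
    neuron i; pre and post are the activity vectors of the two layers.
    The Hebbian term strengthens synapses whose pre- and post-synaptic
    neurons are coactive; subtracting each row's mean then keeps the sum
    of every neuron's incoming efficacies at zero."""
    W = W + eta * np.outer(post, pre)        # synapse-specific Hebbian change
    W = W - W.mean(axis=1, keepdims=True)    # neuronal zero-sum remodeling
    return W
```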
Reinforcement Learning for Spoken Dialogue Systems

Satinder Singh, Michael Kearns, Diane Litman, Marilyn Walker
AT&T Labs
{baveja,mkearns,diane,walker}@research.att.com

Abstract

Recently, a number of authors have proposed treating dialogue systems as Markov decision processes (MDPs). However, the practical application of MDP algorithms to dialogue systems faces a number of severe technical challenges. We have built a general software tool (RLDS, for Reinforcement Learning for Dialogue Systems) based on the MDP framework, and have applied it to dialogue corpora gathered from two dialogue systems built at AT&T Labs. Our experiments demonstrate that RLDS holds promise as a tool for "browsing" and understanding correlations in complex, temporally dependent dialogue corpora.

1 Introduction

Systems in which human users speak to a computer in order to achieve a goal are called spoken dialogue systems. Such systems are some of the few realized examples of open-ended, real-time, goal-oriented interaction between humans and computers, and are therefore an important and exciting testbed for AI and machine learning research. Spoken dialogue systems typically integrate many components, such as a speech recognizer, a database backend (since often the goal of the user is to retrieve information), and a dialogue strategy. In this paper we are interested in the challenging problem of automatically inferring a good dialogue strategy from dialogue corpora.

Research in dialogue strategy has been perhaps necessarily ad-hoc due to the open-ended nature of dialogue system design. For example, a common and critical design choice is between a system that always prompts the user to select an utterance from fixed menus (system initiative), and one that attempts to determine user intentions from unrestricted utterances (mixed initiative). Typically a system is built that explores a few alternative strategies, this system is tested, and conclusions are drawn regarding which of the tested strategies is best for that domain [4, 7, 2]. This is a time-consuming process, and it is difficult to rigorously compare and evaluate alternative systems in this fashion, much less design improved ones.

Recently, a number of authors have proposed treating dialogue design in the formalism of Markov decision processes (MDPs) [1, 3, 7]. In this view, the population of users defines the stochastic environment, a dialogue system's actions are its (speech-synthesized) utterances and database queries, and the state is represented by the entire dialogue so far. The goal is to design a dialogue system that takes actions so as to maximize some measure of reward. Viewed in this manner, it becomes possible, at least in principle, to apply the framework and algorithms of reinforcement learning (RL) to find a good dialogue strategy.

However, the practical application of RL algorithms to dialogue systems faces a number of severe technical challenges.
First, representing the dialogue state by the entire dialogue so far is often neither feasible nor conceptually useful, and the so-called belief state approach is not possible, since we do not even know what features are required to represent the belief state. Second, there are many different choices for the reward function, even among systems providing very similar services to users. Previous work [7] has largely dealt with these issues by imposing a priori limitations on the features used to represent approximate state, and then exploring just one of the potential reward measures. In this paper, we further develop the MDP formalism for dialogue systems, in a way that does not solve the difficulties above (indeed, there is no simple "solution" to them), but allows us to attenuate and quantify them by permitting the investigation of different notions of approximate state and reward.

Using our expanded formalism, we give one of the first applications of RL algorithms to real data collected from multiple dialogue systems. We have built a general software tool (RLDS, for Reinforcement Learning for Dialogue Systems) based on our framework, and applied it to dialogue corpora gathered from two dialogue systems built at AT&T Labs, the TOOT system for voice retrieval of train schedule information [4] and the ELVIS system for voice retrieval of electronic mail [7]. Our experiments demonstrate that RLDS holds promise not just as a tool for the end-to-end automated synthesis of complicated dialogue systems from passive corpora - a "holy grail" that we fall far short of here¹ - but more immediately, as a tool for "browsing" and understanding correlations in complex, temporally dependent dialogue corpora. Such correlations may lead to incremental but important improvements in existing systems.

¹However, in recent work we have applied the methodology described here to significantly improve the performance of a new dialogue system [5].

2 The TOOT and ELVIS Spoken Dialogue Systems

The TOOT and ELVIS systems were implemented using a general-purpose platform developed at AT&T, combining a speaker-independent hidden Markov model speech recognizer, a text-to-speech synthesizer, a telephone interface, and modules for specifying data-access functions and dialogue strategies. In TOOT, the data source is the Amtrak train schedule web site, while in ELVIS, it is the electronic mail spool of the user. In a series of controlled experiments with human users, dialogue data was collected from both systems, resulting in 146 dialogues from TOOT and 227 dialogues from ELVIS. The TOOT experiments varied strategies for information presentation, confirmation (whether and how to confirm user utterances) and initiative (system vs. mixed), while the ELVIS experiments varied strategies for information presentation, for summarizing email folders, and initiative.

Each resulting dialogue consists of a series of system and user utterances augmented by observations derived from the user utterances and the internal state of the system. The system's utterances (actions) give requested information, ask for clarification, provide greetings or instructions, and so on. The observations derived from the user's utterance include the speech-recognizer output, the corresponding log-likelihood score, the semantic labels assigned to the recognized utterances (such as the desired train departure and arrival cities in TOOT, or whether the user prefers to hear their email ordered by date or sender in ELVIS); indications of user barge-ins on system prompts; and many more. The observations derived from the internal state include the grammar used by the speech recognizer during the turn, and the results obtained from a query to the data source. In addition, each dialogue has an associated survey completed by the user that asks a variety of questions relating to the user's experience. See [4, 7] for details.

3 Spoken Dialogue Systems and MDPs

Given the preceding discussion, it is natural to formally view a dialogue as a sequence
$$d = (a_1, o_1, r_1), (a_2, o_2, r_2), \ldots, (a_t, o_t, r_t).$$
, (at, Ot, rt). -------------1 However, in recent work we have applied the methodology described here to significantly improve the perfonnance of a new dialogue system [5]. S. Singh, M Kearns, D. Litman and M Walker 958 Here ai is the action taken by the system (typically a speech-synthesized utterance, and less frequently, a database query) to start the ith exchange (or tum, as we shall call it), OJ consists of all the observations logged by the system on this turn, as discussed in the last section, and r j is the reward received on this turn. As an example, in roOT a typical turn might indicate that the action aj was a system utterance requesting the departure city, and the 0; might indicate several observations: that the recognized utterance was "New York", that the log-likelihood of this recognition was -2.7, that there was another unrecognized utterance as well, and so on. We will use d[ i] to denote the prefix of d that ends following the ith turn, and d? (a, 0, r) to denote the one-turn extension of dialogue d by the turn (a, 0, r). The scope of the actions aj and observations 0; is determined by the implementation of the systems (e.g. if some quantity was not logged by the system, we will not have access to it in the 0; in the data). Our experimental results will use rewards derived from the user satisfaction surveys gathered for the roOT and ELVIS data We may view any dialogue d as a trajectory in a well-defined true MOP M. The states 2 of M are all possible dialogues, and the actions are all the possible actions available to the spoken dialogue system (utterances and database queries). Now from any state (dialogue) d and action a, the only possible next states (dialogues) are the one-turn extensions d? (a, 0, r). The probability of transition from d to d?(a, 0, r) is exactly the probability, over the stochastic ensemble of users, that 0 and r would be generated following action a in dialogue d. It is in general impractical to work directly on M due to the unlimited size of the state (dialogue) space. Furthermore, M is not known in advance and would have to be estimated from dialogue corpora. We would thus like to permit a flexible notion of approximate states. We define state estimator SE to be a mapping from any dialogue d into some space S. For example, a simple state estimator for roOT might represent the dialogue state with boolean variables indicating whether certain pieces of information had yet been obtained from the user (departure and arrival cities, and so on), and a continuous variable tracking the average log-likelihood of the recognized utterances so far. Then sE(d) would be a vector representing these quantities for the dialogue d. Once we have chosen a state estimator SE, we can transform the dialogue d into an S-trajectory, starting from the initial empty state So E S: So -tal SE(d[l]) -ta2 sE(d[2]) -ta3 . .. -tat SE(d[t]) where the notation -tao SE(d[i]) indicates a transition to SE(d[i]) E S following action aj. Given a set of dialogues d 1 , .. . , d n , we can construct the empirical MOP MSE ? The state space of MSE is S, the actions are the same as in M, and the probability oftransition from s to s' under action a is exactly the empirical probability of such a transition in the S-trajectories obtained from d1 , .?? ,dn . Note that we can build MSE from dialogue corpora, solve for its optimal policy, and analyze the resulting value function. The point is that by choosing SE carefully, we hope that the empirical MOP MSE will be a good approximation of M. 
By this we mean that MSE renders dialogues (approximately) Markovian: the probability in M of transition from any dialogue d to anyone-turn extension d ? (a, 0, r) is (approximately) the probability of transition from sE(d) to sE(d ? (a, 0, r)) in MSE ? We hope to find state estimators SE which render dialogues approximately Markovian, but for which the amount of data and computation required to find good policies in MSE will be greatly reduced compared to working directly in dialogue space. While conceptually appealing, this approach is subject to at least three important caveats: First, the approach is theoretically justified only to the extent that the chosen state estimator renders dialogues Markovian. In practice, we hope that the approach is robust, in that "small" violations of the Markov property will still produce useful results. Second, while 2These are not to be confused with the internal states of the spoken dialogue system(s) during the dialogue, which in our view merely contribute observations. Reinforcement Learningfor Spoken Dialogue Systems 959 state estimators violating the Markov property may lead to meaningful insights, they cannot be directly compared. For instance, if the optimal value function derived from one state estimator is larger than the optimal value function for another state estimator, we cannot necessarily conclude that the first is better than the second. (This can be demonstrated formally.) Third, even with a Markovian state estimator SE, data that is sparse with respect to SE limits the conclusions we can draw; in a large space S, certain states may be so infrequently visited in the dialogue corpora that we can say nothing about the optimal policy or value function there. 4 The RLDS System We have implemented a software tool (written in C) called RLOS that realizes the above formalism. RLOS users specify an input file of sample dialogues; the dialogues include the rewards received at each turn. Users also specify input files defining S and a state estimator SEe The system has command-line options that specify the discount factor to be used, and a lower bound on the number of times a state s E S must be visited in order for it to be included in the empirical MOP USE (to control overfitting to sparse data). Given these inputs and options, RLOS converts the dialogues into S -trajectories, as discussed above. It then uses these trajectories to compute the empirical MOP USE specified by the data - that is, the data is used to compute next-state distributions and average reward in the obvious way. States with too few visits are pruned from USE' RLOS then uses the standard value iteration algorithm to compute the optimal policy and value function [6] for USE, all using the chosen discount factor. 5 Experimental Results The goal of the experiments reported below is twofold: first, to confirm that our RLOS methodology and software produce intuitively sensible policies; and second, to use the value functions computed by the RLOS software to discover and understand correlations between dialogue properties and performance. We have space to present only a few of our many experiments on TOOT and ELVIS data. Each experiment reported below involves choosing a state estimator, running RLOS using either the TOOT or ELVIS data, and then analyzing the resulting policy and value function. 
For the TOOT experiments, the reward function was obtained from a question in the user satisfaction survey: the last turn in a dialogue receives a reward of +1 if the user indicated that they would use the system again, a reward of 0 if the user answered "maybe", and a reward of -1 if the user indicated that they would not use the system again. All turns other than the last receive reward 0 (i.e., a reward is received only at the end of a dialogue). For the ELVIS experiments, we used a summed (over several questions) user-satisfaction score to reward the last turn in each dialogue (this score ranges between 8 and 40).

Experiment 1 (A Sensible Policy): In this initial "sanity check" experiment, we created a state estimator for TOOT whose boolean state variables track whether the system knows the value for the following five informational attributes: arrival city (denoted AC), departure city (DC), departure date (DD), departure hour (DH), and whether the hour is AM or PM (AP).³ Thus, if the dialogue so far includes a turn in which TOOT prompts the user for their departure city, and the speech recognizer matches the user utterance with "New York", the boolean state variable GotDC? would be assigned a value of 1. Note that this ignores the actual values of the attributes. In addition, there is another boolean variable called ConfirmedAll? that is set to 1 if and only if the system took action ConfirmAll (which prompts the user to explicitly verify the attribute values perceived by TOOT) and perceived a "yes" utterance in response.

³Remember that TOOT can only track its perceptions of these attributes, since errors may have occurred in speech recognition.
Examples include confirming the values of specific informational attributes such as DC (since we do not represent whether such confirmations were successful, this action would lead to infinite loops of confirmation), and requesting values for informational attributes for which we already have values (such actions appear in the empirical MOP due to speech recognition errors). The mere fact that RLOS was driven to a sensible policy that avoided these available pitfalls indicates a correlation between the chosen reward measure (whether the user would use the system again) and the intuitive system goal of obtaining a completely specified train trip. It is interesting to note that RLOS finds it better to confirm values for all 5 attributes when it has them, as opposed to simply closing the dialogue without confirmation. In a similar experiment on ELVIS, RLOS again found a sensible policy that summarizes the user's inbox at the beginning of the dialogue, goes on to read the relevant e-mail messages until done, and then closes. (a) (b) 0.35,---.-----,..-----,..-----,----, 0..2 < 1 , - - - . . - - - - - , . . - - - - - , - - - - , - - - - , 0 .24 I = Number of Information Attributes D = Number of Distress Features 0..3 0..22 0.25 0..2 0.18 !l !l > 01 0..'6 1=2 0 .14 0.2 01 > 0..'5 r---~ D=I ---~-_/ 0.12 0 .1 0.., , - - -_ _- ' II' 0.05 D= 0.08 " C? 08CL-----''------'2-----'3--~------' Number of Attributes Confirmed ~t=~~c===~2~====3==~==.--~ Number of Information Attributes Figure I: a) Role of Confirmation. b) Role of Distress Features (indicators that the dialogue is in trouble). See description of Experiments 2 and 3 respectively in the text for details. Experiment 2 (Role of Confirmation): Here we explore the effect of confirming with the user the values that TOOT perceives for the informational attributes - that is, whether the 961 Reinforcement Learningfor Spoken Dialogue Systems trade-off between the increased confidence in the utterance and the potential annoyance to the user balances out in favor of confirmation or not (for the particular reward function we are using). To do so, we created a simple state estimator with just two state variables. The first variable counts the number of the informational attributes (DC, AC, etc.) that roar believes it has obtained, while the second variable counts the number of these that have been confirmed with the user. Figure 1(a) presents the optimal value as a function of the number of attributes confirmed. Each curve in the plot corresponds to a different setting of the first state variable. For instance, the curve labeled with "1=3" corresponds to the states where the system has obtained 3 informational attributes. We can make two interesting observations from this figure. First, the value function grows roughly linearly with the number of confirmed attributes. Second, and perhaps more startlingly, the value function has only a weak dependence on the first feature - the value for states when some number of attributes have been confirmed seems independent of how many attributes (the system believes) have been obtained. This is evident from the lack of separation between the plots for varying values of the state variable I. In other words, our simple (and preliminary) analysis suggests that for our reward measure, confirmed information influences the value function much more strongly than unconfirmed information. 
We also repeated this experiment replacing attribute confirmation with thresholded speech recognition log-likelihood scores, and obtained qualitatively similar results.

Experiment 3 (Role of Distress Features): Dialogues often contain timeouts (user silence when the system expected a response), resets (the user asks for the current context of the dialogue to be abandoned and the system is reinitialized), user requests for help, and other indicators that the dialogue is potentially in trouble. Do such events correlate with low value? We created a state estimator for TOOT that, in addition to our variable I counting informational attributes, counted the number of such distress events in the dialogue. Figure 1(b) presents the optimal value as a function of the number of attributes obtained. Each curve corresponds to a different number of distress features. This figure confirms that the value of the dialogue is lower for states with a higher number of distress features.

Figure 2: a) Role of Dialogue Length in TOOT. b) Role of Dialogue Length in ELVIS. See description of Experiment 4 in the text for details. [Plots omitted: panel (a) shows optimal value versus the number of information attributes, with curves labeled by turn ranges T < 4, 4 <= T < 8, 8 <= T < 12, and 12 <= T < 16; panel (b) shows value versus the number of turns divided by 4, with curves labeled by task progress P.]

Experiment 4 (Role of the Dialogue Length): All other things being equal (e.g., extent of task completion), do users prefer shorter dialogues? To examine this question, we created a state estimator for TOOT that counts the number of informational attributes obtained (variable I as in Experiment 2), and a state estimator for ELVIS that measures "task progress" (a measure analogous to the variable I for TOOT; details omitted). In both cases, a second variable tracks the length of the dialogue.

Figure 2(a) presents the results for TOOT. It plots the optimal value as a function of the number I of informational values; each curve corresponds to a different range of dialogue lengths. It is immediately apparent that the longer the dialogue, the lower the value, and that within the same length of dialogue it is better to have obtained more attributes.⁴ Of course, the effect of obtaining more attributes is weak for the longest dialogue length; these are dialogues in which the user is struggling with the system, usually due to multiple speech recognition errors. Figure 2(b) presents the results for ELVIS from a different perspective. The dialogue length is now the x-axis, while each curve corresponds to a different value of P (task progress). It is immediately apparent that the value increases with task progress. More interestingly, unlike TOOT, there seems to be an "optimal" or appropriate dialogue length for each level of task progress, as seen in the inverse U-shaped curves.

Experiment 5 (Role of Initiative): One of the important questions in dialogue theory is how to choose between system and mixed initiative strategies (cf. Section 1). Using our approach on both TOOT and ELVIS data, we were able to confirm previous results [4, 7] showing that system initiative has a higher value than mixed initiative.

Experiment 6 (Role of Reward Functions): To test the robustness of our framework, we repeated Experiments 1-4 for TOOT using a new reward function based on the user's perceived task completion. We found that except for a weaker correlation between the number of turns and the value function, the results were basically the same across the two reward functions.
6 Conclusion

This paper presents a new RL-based framework for spoken dialogue systems. Using our framework, we developed RLDS, a general-purpose software tool, and used it for empirical studies on two sets of real dialogues gathered from the TOOT and ELVIS systems. Our results showed that RLDS was able to find sensible policies, that in ELVIS there was an "optimal" length of dialogue, that in TOOT confirmation of attributes was highly correlated with value, that system initiative led to greater user satisfaction than mixed initiative, and that the results were robust to changes in the reward function.

Acknowledgements: We give warm thanks to Esther Levin, David McAllester, Roberto Pieraccini, and Rich Sutton for their many contributions to this work.

References

[1] A. W. Biermann and P. M. Long. The composition of messages in speech-graphics interactive systems. In Proceedings of the 1996 International Symposium on Spoken Dialogue, 97-100, 1996.
[2] A. L. Gorin, B. A. Parker, R. M. Sachs and J. G. Wilpon. How May I Help You. In Proceedings of the International Symposium on Spoken Dialogue, 57-60, 1996.
[3] E. Levin, R. Pieraccini and W. Eckert. Learning dialogue strategies within the Markov decision process framework. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, 1997.
[4] D. J. Litman and S. Pan. Empirically Evaluating an Adaptable Spoken Dialogue System. In Proceedings of the 7th International Conference on User Modeling, 1999.
[5] S. Singh, M. Kearns, D. Litman, and M. Walker. In preparation.
[6] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[7] M. A. Walker, J. C. Fromer and S. Narayanan. Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, COLING/ACL 98, 1345-1352, 1998.

⁴ There is no contradiction with Experiment 2 in this statement, since here we are not separating confirmed and unconfirmed attributes.
LTD Facilitates Learning In a Noisy Environment

Paul Munro
School of Information Sciences
University of Pittsburgh
Pittsburgh PA 15260
pwm+@pitt.edu

Gerardina Hernandez
Intelligent Systems Program
University of Pittsburgh
Pittsburgh PA 15260
gehst5+@pitt.edu

Abstract

Long-term potentiation (LTP) has long been held as a biological substrate for associative learning. Recently, evidence has emerged that long-term depression (LTD) results when the presynaptic cell fires after the postsynaptic cell. The computational utility of LTD is explored here. Synaptic modification kernels for both LTP and LTD have been proposed by other laboratories based on studies of one postsynaptic unit. Here, the interaction between time-dependent LTP and LTD is studied in small networks.

1 Introduction

Long-term potentiation (LTP) is a neurophysiological phenomenon observed under laboratory conditions in which two neurons or neural populations are stimulated at a high frequency, with a resulting measurable increase in synaptic efficacy between them that lasts for several hours or days [1]-[2]. LTP thus provides direct evidence supporting the neurophysiological hypothesis articulated by Hebb [3]. This increase in synaptic strength must be countered by a mechanism for weakening the synapse [4]. The biological correlate, long-term depression (LTD), has also been observed in the laboratory; that is, synapses are observed to weaken when low presynaptic activity coincides with high postsynaptic activity [5]-[6].

Mathematical formulations of Hebbian learning produce weights, w_ij (where i is the presynaptic unit and j is the postsynaptic unit), that capture the covariance [Eq. 1] between the instantaneous activities of pairs of units, a_i and a_j [7]:

$\dot{w}_{ij}(t) = (a_i(t) - \bar{a}_i)(a_j(t) - \bar{a}_j) \quad [1]$

This idea has been generalized to capture covariance between activities that are shifted in time [8]-[9], resulting in a framework that can model systems with temporal delays and dependencies [Eq. 2]:

$\dot{w}_{ij}(t) = \iint K(t'' - t')\, a_i(t'')\, a_j(t')\, dt''\, dt' \quad [2]$

As will be shown in the following sections, depending on the choice of the function K(Δt), this formulation encompasses a broad range of learning rules [10]-[12] and can support a comparably broad range of biological evidence.

Figure 1. Synaptic change as a function of the time difference between spikes from the presynaptic neuron and the postsynaptic neuron. Note that for t_pre < t_post, LTP results (Δw > 0), and for t_pre > t_post, the result is LTD.

Recent biological data from [13]-[15] indicate an increase in synaptic strength (LTP) when presynaptic activity precedes postsynaptic activity, and LTD in the reverse case (postsynaptic precedes presynaptic). These ideas have started to appear in some theoretical models of neural computation [10]-[12], [16]-[18]. Thus, Figure 1 shows the form of the dependence of synaptic change, Δw, on the difference in spike arrival times.

2 A General Framework

Given specific assumptions, the integral in Eq. 2 can be separated into two integrals, one representing LTP and one representing LTD [Eq. 3]:

$\dot{w}_{ij}(t) = \underbrace{\int_{t'=-\infty}^{t} K_P(t - t')\, a_i(t')\, a_j(t)\, dt'}_{\text{LTP}} + \underbrace{\int_{t'=-\infty}^{t} K_D(t - t')\, a_i(t)\, a_j(t')\, dt'}_{\text{LTD}} \quad [3]$

The activities that do not depend on t' can be factored out of the integrals, giving two Hebb-like products between the instantaneous activity in one cell and a weighted time average of the activity in the other [Eq. 4]:
$\dot{w}_{ij}(t) = \langle a_i(t) \rangle_P \, a_j(t) - a_i(t) \, \langle a_j(t) \rangle_D \quad [4]$

where $\langle f(t) \rangle_X \equiv \int_{t'=-\infty}^{t} K_X(t - t')\, f(t')\, dt'$ for $X \in \{P, D\}$.

The kernel functions K_P and K_D can be chosen to select precise times out of the convolved function f(t), or to average across the function for an arbitrary range. The alpha function is useful here [Eq. 5]. A high value of α selects an immediate time, while a small value approximates a longer time-average:

$K_X(\tau) = \beta_X \, e^{-\alpha_X \tau} \ \text{for } X \in \{P, D\}, \ \text{with } \alpha_P > 0, \ \alpha_D > 0, \ \beta_P > 0, \ \beta_D < 0 \quad [5]$

For high values of α_P and α_D, only pre- and post-synaptic activities that are very close temporally will interact to modify the synapse. In a simulation with discrete step sizes, this can be reasonably approximated by considering just a single time step [Eq. 6]:

$\Delta w_{ij}(t) = a_i(t-1)\, a_j(t) - a_i(t)\, a_j(t-1) \quad [6]$

Summing Δw_ij(t) and Δw_ij(t+1) gives a net change in the weights, Δ⁽²⁾w_ij = w_ij(t+1) − w_ij(t−1), over the two time steps:

$\Delta^{(2)} w_{ij} = a_i(t)\,[a_j(t+1) - a_j(t-1)] + a_j(t)\,[a_i(t-1) - a_i(t+1)] \quad [7]$

The first term is predictive in that it has the form of the delta rule, where a_j(t+1) acts as a training signal for a_j(t−1), as in a temporal Hopfield network [9].

3 Temporal Contrast Enhancement

The computational role of the LTP term in Eq. 3 is well established, but how does the second term contribute? A possibility is that the term is analogous to lateral inhibition in the temporal domain; that is, by suppressing associations in the "wrong" temporal direction, the system may be more robust against noise in the input. The resulting system may be able to detect the onset and offset of a signal more reliably than a system not using an anti-Hebbian LTD term. The extent to which the LTD term is able to enhance temporal contrast is likely to depend idiosyncratically on the statistical qualities of a particular system. If so, the parameters of the system might only be valid for signals with specific statistical properties, or the parameters might be adaptive. Either of these possibilities lies beyond the scope of analysis for this paper.

4 Simulations

Two preliminary simulation studies illustrate the use of the learning rule for predictive behavior and for temporal contrast enhancement. For every simulation, kernel functions were specified by the parameters α and β, and by the number of time steps, n_P and n_D, that were sampled for the approximation of each integral.

4.1 Task 1. A Sequential Shifter

The first task is a simple shifter over a set of 7 to 20 units. The system is trained on these stimuli and then tested to see if it can reconstruct the sequence given the initial input. The task is given with no noise and with temporal noise (see Figure 2). Task 1 is designed to examine the utility of LTD as an approach to learning a sequence with temporal noise. The ability of the network to reconstruct the noise-free sequence after training on the noisy sequence was tested for different LTD kernel functions. Note that the same patterns are presented (for each time slice, just one of the n units is active), but the shifts either skip or repeat in time. Experiments were run with k = 1, 2, or 3 of the units active.
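The rule of equations 5 and 6 is compact enough to state directly in code. The sketch below is ours, not the authors' simulation code; the function names and the discretization are assumptions.

```python
import numpy as np

def kernel(tau, alpha, beta):
    """Exponential kernel K_X(tau) = beta_X * exp(-alpha_X * tau), tau >= 0
    (equation 5); beta_D < 0 implements the depressing branch."""
    return beta * np.exp(-alpha * tau)

def delta_w(a_i_prev, a_i, a_j_prev, a_j):
    """One-step rule of equation 6: LTP when the presynaptic unit i leads
    the postsynaptic unit j, LTD when it lags."""
    return a_i_prev * a_j - a_i * a_j_prev
```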
4.2 Task 2. Time series reconstruction

In this task, a set of units was trained on external sinusoidal signals that varied according to frequency and phase. The purpose of this task is to examine the role of LTD in providing temporal context. The network was then tested under a condition in which the external signals were provided to all but one of the units. The activity of the deprived unit was then compared with its training signal.

Figure 2. Reconstruction of the clean shifter sequence using the noisy stimulus sequence as input. For each time slice, just one of the 7 units is active. In the clean sequence, activity shifts cyclically around the 7 units. The noisy sequence has a random jitter of ±1. [Raster plots omitted: panels show the clean sequence, the noisy sequence, the reconstruction with LTP alone, and the reconstruction with LTP & LTD.]

5 Results

Sequential Shifter Results. All networks trained on the clean sequence can learn the task with LTP alone, but no networks could learn the shifter task based on a noisy training sequence unless there was also an LTD term. Without an LTD term, most units would saturate to maximum values. For a range of LTD parameters, the network would converge without saturating. Reconstruction performance was found to be sensitive to the LTD parameters. The parameters α and β shown in Table 1 needed to be chosen very specifically to get perfect reconstruction (this was done by trial and error). For a narrow range of parameters near the optimal values, the reconstructed sequence was close to the noise-free target. However, the parameters α and β shown in Table 2 are estimated from the experimental results of Zhang et al. [15].

Table 1. Results of the sequential shifter task. The task was to shift a pattern 1 unit with each time step. A block of k of n units was active. The table lists, for each run, the kernel parameters (α_P, β_P, α_D, β_D), the number of values sampled from each kernel (the number of time slices used to estimate each integral), n_P and n_D, and the number of steps used to begin the reconstruction, n_r (usually n_r = 1). The last column (Time) reports the number of iterations required for perfect reconstruction. [The individual rows were garbled in extraction and are not reproduced; runs spanned k = 1 to 3 and n = 7 to 20, with times ranging from 40 to 4000 iterations.]

Table 2. Results of the sequential shifter task using the parameters n_r = 1; n_P = 1; α_D = 0.125; α_P = 0.5; β_D = −α_D · e · 0.35; β_P = α_P · e · 0.8.

k   n   n_D   Time
1   7    6    288
2   7    5     96
3   7    4     64

For the above results, the k active units were always adjacent with respect to the shifting direction. For cases with noncontiguous active units, reconstruction was never exact. Networks trained with LTP alone would saturate, but would converge to a sequence "close" to the target (Fig. 3) if an LTD term was added.

Figure 3. This base pattern (k = 2, n = 7) with noncontiguous active units was presented as a shifted sequence with noise. The target sequence is partially reconstructed only when LTP and LTD are used together. [Raster plots omitted: panels show the clean and noisy sequences and the reconstructions with LTP alone and with LTP & LTD.]
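To make the shifter experiment concrete, the following sketch generates the noisy stimuli of Task 1 and accumulates the rule of equation 6 over a training sequence. The learning-rate handling and pattern coding are our assumptions, not the authors' settings.

```python
import numpy as np

def noisy_shifter(n=7, steps=200, seed=0):
    """One active unit out of n, shifting cyclically with a jitter of +/-1
    (so the shift sometimes skips or repeats, as in the text)."""
    rng = np.random.default_rng(seed)
    pos, patterns = 0, []
    for _ in range(steps):
        pos = (pos + 1 + rng.integers(-1, 2)) % n
        a = np.zeros(n)
        a[pos] = 1.0
        patterns.append(a)
    return np.array(patterns)

def train(patterns, lr=0.1):
    """Accumulate equation 6 over successive time slices; W[i, j] is the
    weight from presynaptic unit i to postsynaptic unit j."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for a_prev, a_now in zip(patterns[:-1], patterns[1:]):
        W += lr * (np.outer(a_prev, a_now) - np.outer(a_now, a_prev))
    return W
```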
5.1 Time Series Reconstruction Results

A network of just four units was trained for hundreds of iterations; the units were each externally driven by a sinusoidally varying input. Networks trained with LTP alone fail to reconstruct the time series on units deprived of external input during testing. In these simulations, there is no noise in the patterns, but LTD is shown to be necessary for reconstruction of the patterns (Fig. 4).

Figure 4. Reconstruction of sinusoids. Target signals from training (dashed) plotted with reconstructed signals (solid). Left: the best reconstruction using LTP alone. Right: a typical result with LTP and LTD together. [Traces omitted.]

For high values of α_P and α_D, the reconstruction of sinusoids is very sensitive to the values of β_D and β_P. Figure 5 shows the results when |β_D| and β_P are close. In the first case (top), when |β_D| is slightly smaller than β_P, the first two neurons (from left to right) saturate. In the contrary case (bottom), the first two neurons show almost null activation. However, the third and fourth neurons (from left to right) in both cases (top and bottom) show predictive behavior.

Figure 5. Reconstruction of sinusoids. Examples of target signals from training (dashed) plotted with reconstructed signals (solid). Top: when |β_D| < β_P. Bottom: when |β_D| > β_P. [Traces omitted.]
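Since the paper leaves the read-out dynamics for the deprived unit unspecified, the harness below is purely hypothetical: it drives the deprived unit with the weighted activity of the other units through the learned weights and records the resulting trace for a Figure 4-style comparison.

```python
import numpy as np

def reconstruct_deprived(W, signals, deprived=3):
    """signals: array (T, n) of the training sinusoids. Returns a trace for
    the deprived unit driven only by the other units through W (hypothetical
    readout; the reconstruction dynamics are our assumption)."""
    T, n = signals.shape
    trace = np.zeros(T)
    for t in range(1, T):
        a = signals[t - 1].copy()
        a[deprived] = trace[t - 1]       # substitute its own past estimate
        trace[t] = a @ W[:, deprived]    # summed weighted input
    return trace
```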
Hebb's hypothesis clearly transcends its original motivation to give a neurophysiologically based account of associative memory. The phenomenon of LTP provides direct biological support for Hebb's postulate, and hence has clear cognitive implications. Initially after its discovery in the laboratory, the computational role of LTD was thought to be the flip side of LTP. This interpretation would have synapses strengthen when activities are correlated and have them weaken when they are anti-correlated. Such a theory is appealing for its elegance, and has formed the basis many network models [19]-[20]. However, the dependence of synaptic change on the relative timing of pre- and post- synaptic activity that has recently been shown in the laboratory is inconsistent with this story and calls for a computational interpretation. A network trained with such a learning rule cannot converge to a state where the weights are symmetric, for example, since LiWij 7:- LiWji. While the simulations reported here are simple and preliminary, they illustrate two tasks that benefit from the inclusion of time-dependent LTD. In the case of the sequential shifter, an examination of more complex predictive tasks is planned in the near future. It is expected that this will require architectures with unclamped (hidden) units. The role of LTD here is to temporally enhance contrast, in a way analogous to the role of lateral inhibition for computing spatial contrast enhancement in the retina. The time-series example illustrates the possible role of LTD for providing temporal context. P. W. Munro and G. Hernandez J56 7 References [1] Bliss TVP & Lcllmo T (1973) Long-lasting potentiation of synaptic in the dentate area of the un anaesthetized rabbit following stimulation of the perforant path.J Physiol 232:331-356 [2] Malenka RC (1995) LTP and LTD: dynamic and interactive processes of synaptic plasticity. The Neuroscientist 1:35-42. [3] Hebb DO (1949) The Organization of Behavior. Wiley: NY. [4] Stent G (1973) A physiological, mechanism for Hebb's postulate of learning. Proc. Natl. Acad. Sci. USA 70: 997-1001 [5] Barrionuevo G, Schottler F & Lynch G (1980) The effects of repetitive low frequency stimulation on control and "pontentiated" synaptic responses in the hippocampus. Life Sci 27:2385-2391. [6] Thiels E, Xie X, Yeckel MF, Barrionuevo G & Berger TW (1996) NMDA Receptordependent LTD in different subfields of hippocampus in vivo and in vitro. Hippocampus 6:43-51. [7] Sejnowski T J (1977) Storing covariance '!Vith nonlinearly interacting neurons. 1. Math. BioL 4:303-321. [8] Sutton RS (1988) Learning to predict by the methods of temporal difference. Machine Learning. 3:9-44 [9] Sompolinsky H and Kanter I (1986) Temporal association in asymmetric neural networks. Phys.Rev.Letter. 57:2861-2864. [10] Gerstner W, Kempter R, van Hemmen JL & Wagner H (1996) A neuronal learning rule for sub-millisecond temporal coding. Nature 383:76-78 . [11] Kempter R, Gerstner W & van Hemmen JL (1999) Spike-based compared to rate-based hebbian learnin g. Kearns , Ms. , Solla, S.A and Cohn, D.A. Eds. Advances in Neural Information Processing Systems J J. MIT Press, Cambridge MA. [12] Kempter R, Gerstner W, van Hemmen JL & Wagner H (1996) Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway. Touretzky, D.S, Mozer, M.C, Hasselmo, M.E, Eds. Advances in Neural Information Processing Systems 8. MIT Press, Cambridge MA pp.124-130. 
[13] Markram H, Lübke J, Frotscher M & Sakmann B (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275:213-215.
[14] Markram H & Tsodyks MV (1996) Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382:807-810.
[15] Zhang L, Tao HW, Holt CE & Poo M (1998) A critical window for cooperation and competition among developing retinotectal synapses. Nature 395:37-44.
[16] Abbott LF & Blum KI (1996) Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex 6:406-416.
[17] Abbott LF & Song S (1999) Temporally asymmetric Hebbian learning, spike timing and neuronal response variability. In Kearns, M.S., Solla, S.A. and Cohn, D.A., Eds., Advances in Neural Information Processing Systems 11. MIT Press, Cambridge, MA.
[18] Goldman MS, Nelson SB & Abbott LF (1998) Decorrelation of spike trains by synaptic depression. Neurocomputing (in press).
[19] Hopfield J (1982) Neural networks and physical systems with emergent collective computational properties. Proc. Natl. Acad. Sci. USA 79:2554-2558.
[20] Ackley DH, Hinton GE, Sejnowski TJ (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9:147-169.
Acquisition in Autoshaping

Sham Kakade
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR.
sham@gatsby.ucl.ac.uk

Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR.
dayan@gatsby.ucl.ac.uk

Abstract

Quantitative data on the speed with which animals acquire behavioral responses during classical conditioning experiments should provide strong constraints on models of learning. However, most models have simply ignored these data; the few that have attempted to address them have failed by at least an order of magnitude. We discuss key data on the speed of acquisition, and show how to account for them using a statistically sound model of learning, in which differential reliabilities of stimuli play a crucial role.

1 Introduction

Conditioning experiments probe the ways that animals make predictions about rewards and punishments and how those predictions are used to their advantage. Substantial quantitative data are available as to how pigeons and rats acquire conditioned responses during autoshaping, which is one of the simplest paradigms of classical conditioning.⁴ These data are revealing about the statistical, and ultimately also the neural, substrate underlying the ways that animals learn about the causal texture of their environments.

In autoshaping experiments on pigeons, the birds acquire a peck response to a lighted key associated (irrespective of their actions) with the delivery of food. One attractive feature of autoshaping is that there is no need for separate 'probe trials' to assess the degree of association formed between the light and the food by the animal - rather, the rate of key pecking during the light (and before the food) can be used as a direct measure of this association. In particular, acquisition speeds are often measured by the number of trials until a certain behavioral criterion is met, such as pecking during the light on three out of four successive trials.⁴,⁸,¹⁰

As stressed persuasively by Gallistel & Gibbon⁴ (GG; forthcoming), the critical feature of autoshaping is that there is substantial experimental evidence on how acquisition speed depends on the three critical variables shown in figure 1A. The first is I, the inter-trial interval; the second is T, the time during the trial for which the light is presented; the third is the training schedule, 1/S, which is the fractional number of deliveries per light - some birds were only partially reinforced. Figure 1 makes three key points. First, figure 1B shows that the median number of trials to the acquisition criterion depends on the ratio I/T, and not on I and T separately - experiments reported for the same I/T are actually performed with I and T differing by more than an order of magnitude.⁴,⁸ Second, figure 1B shows convincingly that the number of reinforcements is approximately inversely proportional to I/T - the relatively shorter the presentation of light, the faster the learning.

Figure 1: Autoshaping. A) Experimental paradigm. Top: the light is presented for T seconds every C seconds and is always followed by the delivery of food (filled circle). Bottom: the food is delivered with probability 1/S = 1/2 per trial. In some cases I is stochastic, with the appropriate mean. B) Log-log plot⁴ of the number of reinforcements to a given acquisition criterion versus the I/T ratio for S = 1.
The data are median acquisition times from 12 different laboratories. C) Log-log acquisition curves for various I/T ratios and S values. The main graph shows trials versus S; the inset shows reinforcements versus S. [Plots omitted.]

Third, figure 1C shows that partial reinforcement has almost no effect when measured as a function of the number of reinforcements (rather than the number of trials),⁴,¹⁰ since although it takes S times as many trials to acquire, there are reinforcements on only 1/S of the trials. Changing S does not change the effective I/T when measured as a function of reinforcements, so this result might actually be expected on the basis of figure 1B, and we only consider S = 1 in this paper. Altogether, the data show that:

$n \approx 300 \cdot \frac{T}{I} \quad (1)$

where n is the number of rewards to the acquisition criterion. Remarkably, these effects seem to hold over an order of magnitude in both I/T and S.

These quantitative data should be a most seductive target for statistically sound models of learning. However, few models have even attempted to capture the strong constraints they provide, and those that have attempted all fail in critical aspects. The best of them, rate estimation theory⁴ (RET), is closely related to the Rescorla-Wagner¹³ (RW) model, and actually captures the proportionality in equation 1. However, as shown below, RET grossly overestimates the observed speed of acquisition (underestimating the proportionality constant). Further, RET is designed to account for the time at which a particular, standard, acquisition criterion is met. Figure 2A shows that this is revealing only about the very early stages of learning - RET is silent about the remainder of the learning curve. We look at additional quantitative data on learning, which collectively suggest that stimuli compete to predict the delivery of reward. Dayan & Long³ (DL) discussed various statistically inspired competitive models of classical conditioning, concluding with one in which stimuli are differently reliable as predictors of reward. However, DL ignored the data shown in figures 1 and 2, basing their analysis on conditioning paradigms in which I/T was not a factor. Figures 1 and 2 demand a more sophisticated statistical model - building such a model is the focus of this paper.

2 Rate Estimation Theory

Gallistel & Gibbon⁴ (GG; forthcoming) are amongst the strongest proponents of the quantitative relationships in figure 1. To account for them, GG suggest that animals are estimating the rates of rewards - one, λ_l, for the rate associated with the light and another, λ_b, for the rate associated with the background context. The context is the ever-present environment, which can itself gain associative value.

Figure 2: Additional Autoshaping Data. A) Acquisition of keypecking. The figure shows response rate versus reinforcements.⁶ The acquisition criterion is satisfied at a relatively early time, when the response curve crosses the acquisition criterion line. B) The effects of prior context reinforcements on subsequent acquisition speed. The data are taken from two experiments,¹,² with I/T = 6. [Plots omitted.]
+ .Ab, and the rate without the light The additive form of the model makes it similar to Rescorla-Wagner's13 (RW) standard delta-rule model, for which the net prediction of the expected reward in a trial is the sum of the associative values of each active predictor (in this case, the context and light). If the rewards are modeled as being just present or absent, the expected value for a reward is just its probability of occurrence. Instead, RET uses rates, which are just probabilities per unit time. GG 4 formulated their model from a frequentist viewpoint. However, it is easier to discuss a closely related Bayesian model which suffers from the same underlying problem. Instead of using RW's delta-rule for learning the rates, GG assume that reinforcements come from a constant rate Poisson process, and make sound statistical inferences about the rates given the data on the rewards. Using an improper flat prior over the rates, we can write the joint distribution as: P(.AI.Ab I data) ex P(n I .A1.Abtltb) ex (.AI + .Ab)ne-(AI+Ab)tle-Abtb (2) since all n rewards occur with the light, at rate .AI + .Ab. Here, tl = nT is the total time the light is on, and tb = nI is the total time the light is off. GG take the further important step of relating the inferred rates .AI and .Ab to the decision of the animals to start responding (ie to satisfy the acquisition criterion). GG suggest that acquisition should occur when the animals have strong evidence that the fractional increase in the reward rate, whilst the light is on, is greater than some threshold. More formally, acquisition should occur when: P ?.AI + .Ab) l.Ab > J3 I n) = 1- Q (3) where Q is the uncertainty threshold and J3 is slightly greater than one, reflecting the fractional increase. The n that first satisfies equation 3 can be found by integrating the joint probability in equation 2. It turns out that n ex tlltb, which has the approximate, linear dependence on the ratio I IT (as in figure IB), since tt/tb = nTInI = T I I. It also has no dependence on partial reinforcement, as observed in figure 1C. However, even with a very low uncertainty, Q = 0.001, and a reasonable fractional increase, J3 = 1.5, this model predicts that learning should be more than ten times as fast as observed, since we get n ~ 20 * T I I as opposed to the 300 *T I I observed. Equation 1 can only be satisfied by setting Q between 10- 20 and 10- 50 (depending on the precise values of I IT and J3)! This spells problems for GG as a normative, ideal detector model of learning - it cannot, for instance, be repaired with any reasonable prior for the rates, as Q drops drastically with n . In other circumstances, Acquisition in Autoshaping 27 though, Gallistel, Mark & KingS (forthcoming) have shown that animals can be ideal detectors of changes in rates. One hint of the flaw with GG is that simple manipulations to the context before starting auto shaping (in particular extinction) can produce very rapid learning.2 More generally, the data show that acquisition speed is strongly- controlled by prior rewards being given only in the context (without the light present).2 Figure 2B shows a parametric study of subsequent acquisition speeds during autoshaping as a function of the number of rewards given only with the context. 
This effect cannot simply be modeled by assuming a different prior distribution for the rates (which does not fix the problem of the speed of acquisition in any case), since the rate at which these prior context rewards were given has little effect on subsequent acquisition speed for a given number of prior reinforcements.9 Note that the data in figure 2B (ie equation 1) suggest that there were about thirty prior rewards in the context - this is consistent with the experimental procedures used,8--10 although prior experience was not a carefully controlled factor. 3 The Competitive Model Five sets of constraints govern our new model. First, since animals can be ideal detectors of rates in some circumstances,s we only consider accounts under which their acquisition of responding has a rational statistical basis. Second, the number of reinforcements to acquisition must be n ~ 300 * T / I, as in equation 1. This requires that the constant of proportionality should come from rational, not absurd, uncertainties. Third, pecking rates after the acquisition criterion is satisfied should also follow the form of figure 2A (in the end, we are preventing from a normative account of this by a dearth of data). Fourth, the overallieaming speed should be strongly affected by the number of prior context rewards (figure 2B), but not by the rate at which they were presented. That is, the context, as an established predictor, regardless of the rate it predicts, should be able to substantially block learning to a less established predictor. Finally, the asymptotic accuracy of rate estimates. should satisfy the substantial experimental data on the intrinsic uncertainty in the predictions in the form of a quantitative account called scalar expectancy theory7 (SET). In our model, as in DL, an independent prediction of the rate of reward delivery is made on the basis of each stimulus that is present (wc, for the context; WI for the light). These separate predictions are combined based on estimated reliabilities of the predictions. Here, we present a heuristic version of a more rigorously specified model. 12 3.1 Rate Predictions SET7 was originally developed to capture the nature of uncertainty in the way that animals estimate time intervals. Its most important result is that the standard deviation of an estimate is consistently proportional to the mean, even after an asymptotic number of presentations of the interval. Since the estimated time to a reward is just the inverse rate, asymptotic rate estimates might also be expected to have constant coefficients of variation. Therefore, we constrain the standard deviations of rate estimates not to drop below a multiple of their means. Evidence suggests that this multiple is about 0.2.7 RET clearly does not satisfy this constraint as the joint distribution (equation 2) becomes arbitrarily accurate over time. Inspired by Sutton,14 we consider Kalman filter models for independent logpredictions, logwc(m) and logwl(m), on trial m. The output models for the filters s. 28 Kakade and P. Dayan specify the relationship between the predicted and observed rates. We use a simple log-normal, CN, approximation (to an underlying truly Poisson model): P(oc(m) I wc(m? ,... CN(wc(m) , v;) P(ol(m) I wl(m)) ,... CN(wl(m), vt) (4) where o.(m) is the observed average reward whilst predictor * is present, so if a reward occurs with the light in trial m, then ol(m) = l/T and oc(m) = l/C (where C = T + J). The values of can be determined, from the Poisson model, to be V c2 -- v I2 -1 ? 
The other part of the Kalman filter is a model of change in the world for the w's: v; logwc(m) = logwc(m - 1) + ?c(m) logwl(m) = log WI (m - 1) + ?l(m) + 1?-1) ,... N(O, (1](1] + 1?-1) ?c(m) ,... N(O, ?l(m) (1](1] (5) (6) We use log(rates) so that there is no inherent scale to change in the world. Here, is a constant chosen to satisfy the SET constraint, imposed as u. = w./..,fii at asymptote. Notice that 1] acts as the effective number of rewards remembered, which will be less than 30, to get the observed coefficient of variation above 0.2. After observing the data from m trials, the posterior distributions for the predictions will become approximately: 1] P(wc(m) I data) '" N(1/C,u;(m? P(wl(m) I data) ,... N(1/T, ut(m? (7) and, in about m = 1] trials, uc(m) -+ (1/C)/..,fii and ul(m) -+ (l/T)/..,fii. This captures the fastest acquisition in figure 2, and also extinction. 3.2 Cooperative Mixture of Experts The two predictions (equation 7) are combined using the factorial experts model of Jacobs et a[11 that was also used by DL. For this, during the presentation of the light (and the context, of course), we consider that, independently, the relationships between the actual reward rate rem) and the outputs wl(m) and wc(m) of 'experts' associated with each stimulus are: P(wl(m)lr(m? '" N(r(m), pJm) , P(wc(m)lr(m?,... N(r(m), p)m) (8) where PI(m)-1 and pc(m)-1 are inverse variances, or reliabilities for the stimuli. These reliabilities reflect the belief as to how close wl(m) and wc(m) are to rem). The estimates are combined, giving P(r(m) I wl(m),wc(m? '" N(T(m) , (Pl(m) rem) = 7f1(m)wl(m) + (1- 7f1(m))wc(m) 7f1(m) + pc(m?-I) = Pl(m)/(Pl(m) + pc(m? The prediction of the reward rate without the light r c(m) is determined just by the context value wc(m). In this formulation, the context can block the light's prediction if it is more reliable (Pc? PI), since 7f1 ~ 0, making the mean rem) ~ wc(m), and this blocking occurs regardless of the context's rate,wc(m). If PI slowly increases, then rem) -+ WI slowly as 7f1 (m) -+ 1. We expect this to model the post-acquisition part of the learning shown in figure 2A. A fully normative model of acquisition would come from a statistically correct account of how the reliabilities should change over time, which, in turn, would come from a statistical model of the expectations the animal has of how predictabilities change in the world. Unfortunately, the slow phase of learning in figure 2A, which should provide the most useful data on these expectations, is almost ubiquitously 29 Acquisition in Autoshaping A B 1.5 ...__-= =---1 lIT C 500. .: .. ...I . 0.6 ? . ; 0.3 acquiS110n Criterion 100 200 300 reinforcements 400 100 200 300 reinforcements I .. , '. 10 400 5 1fT 10 20 50 Figure 3: Satisfaction of the Constraints. A) The fit to the behavioral response curve (figure 2B), using equation 9 and 7r0 = 0.004. B) Possible acquisition curves showing r{m) versus m . The +--7 on the criterion line denotes the range of 15 to 120 reinforcements that are indicated by figure 2B. The -curve is the same as in Fig 3A. The parameters displayed are values for 7r0 in multiples of 7r0 for the center curve. C) A theoretical fit to the data using equation 11. Here,o: = 5% and 7ro..jPo = 0.004. ignored in experiments. We therefore make two assumptions about this, which are chosen to fit the acquisition data, but whose normative underpinnings are unclear. 
The first assumption, chosen to obtain the slow learning curve, is that: 1ft (m) = tanh 1fom (9) Assuming that the strength of the behavioral response is approximately proportional to r(m) - r c(m), which we will estimate by 1fl(m)(i~h(m) - wc(m)), figure 3A compares the rate of key pecking in the model with the data from figure 2A. Figure 3B shows the effect on the behavioral response of varying 1fo. Within just a half an order magnitude of variation of 1fo, the acquisition speeds (judged at the criterion line shown) due to between 1200 and 0 prior context rewards (figure 2B) can be obtained. Note the slightly counter-intuitive explanation - the actual reward rate associated with the light is established very quickly - slow learning comes from slow changes in the importance paid to these rates. We make a second assumption that the coefficient of variation of the context's prediction, from equation 8, does not change Significantly for the early trials before the acquisition criterion is met (it could change thereafter). This gives: pc(m) ~ po/wc(m) 2 for early m (10) It is plausible that the context is not becoming a relatively worse 'expert' for early m, since no other predictor has yet proven more reliable. Following GG's suggestion, we model acquisition as occurring on trial m if P(r(m) > rc(m)ldata) ~ 1 - 0:, ie if the animal has sound reasons to expect a higher reward rate with the light. Integrating over the Kalman filter distributions in equation 7 gives the distribution of r(m) - rc(m) for early mas P(r(m) - rc(m) I data) '" N?(tanh 1fom)(1/T - l/C), (pOC 2)-1) where O".(m) has dropped out due to 1ft(m) being small at early m. Finding the number of rewards, n, that satisfies the acquisition criterion gives: n ~ 0: T 1foVPO I where the factor of 0: depends on the uncertainty, theoretical fit to the data. (11) 0:, used. Figure 3C shows the 4 Discussion Although a noble attempt, RET fails to satisfy the strong body of constraints under which any acquisition model must labor. Under RET, the acquisition of responding cannot have a rational statistical basis, as the animal's modeled uncertainty in 30 S. Kakade and P. Dayan the association between light and reward at the time of acquisition is below 10- 20 . Further, RET ignores constraints set forth by the data establishing SET and also data on prior context manipulations. These latter data show that the context, regardless of the rate it predicts, will substantially block learning to a less established predictor. Additive models, such as RET, are unable to capture this effect. We have suggested a model in which each stimulus is like an 'expert' that learns independently about the world. Expert predictions can adapt quickly to changes in contingencies, as they are based on a Kalman filter model, with variances chosen to satisfy the constraint suggested by SET, and they can be combined based on their reliabilities. We have demonstrated the model's close fit to substantial experimental data. In particular, the new model captures the I IT dependence of the number of rewards to acquisition, with a constant of proportionality that reflects rational statistical beliefs. The slow learning that occurs in some circumstances, is due to a slow change in the reliabilities of predictors, not due to the rates being unable to adapt quickly. Although we have not shown it here, the model is also able to account for quantitative data as to the speed of extinction of the association between the light and the reward. 
The model leaves many directions for future study. In particular, we have not specified a sound statistical basis for the changes in reliabilities given in equations 9 and 10. Such a basis is key to understanding the slow phase of learning. Second, we have not addressed data from more sophisticated conditioning paradigms. For instance, overshadowing, in which multiple conditioned stimuli are similarly predictive of the reward, should be able to be incorporated into the model in a natural way. Acknowledgements We are most grateful to Randy Gallistel and John Gibbon for freely sharing, prior to publication, their many ideas about timing and conditioning. We thank Sam Roweis for comments on an earlier version of the manuscript. Funding is from a NSF Graduate Research Fellowship (SK) and the Gatsby Charitable Foundation. References [1] Balsam, PD, & Gibbon, J (1988). Journal of Experimental Psychology: Animal Behavior Processes, 14: 401-412. [2] Balsam, PD, & Schwartz, AL (1981). Journal of Experimental Psychology: Animal Behavior Processes, 7: 382-393. [3] Dayan, P, & Long, T, (1997) Neural Information Processing Systems, 10:117-124. [4] Gallistel, CR, & Gibbon, J (1999). Time, Rate, and Conditioning. Forthcoming. [5] Gallistel, CR, Mark, TS & King, A (1999). Is the Rat an Ideal Detector of Changes in Rates of Reward? Forthcoming. [6] Gamzu, ER, & Williams, DR (1973). Journal of the Experimental Analysis of Behavior, 19:225-232. [7] Gibbon, J (1977). Psychological Review 84:279-325. [8] Gibbon, J, Baldock, MD, Locurto, C, Gold, L & Terrace, HS (1977). Journal of Experimental Psychology: Animal Behavior Processes, 3: 264-284. [9] Gibbon, J & Balsam, P (1981). In CM Locurto, HS Terrace, & J Gibbon, editors, Autoshaping and Conditioning Theory. 219-253. New York, NY: Academic Press. [10] Gibbon, J, Farrell, L, Locurto, CM, Duncan, JH & Terrace, HS (1980). Animal Learning and Behavior, 8:45-59. [11] Jacobs, RA, Jordan, MI, & Barto, AG (1991). Cognitive Science 15:219-250. [12] Kakade, S & Dayan, P (2000). In preparation. [13] Rescorla, RA & Wagner, AR (1972). In AH Black & WF Prokasy, editors, Classical Conditioning II: Current Research and Theory, 64-69. New York, NY: Appleton-Century-Crofts. [14] Sutton, R (1992). In Proceedings of the 7th Yale Workshop on Adaptive and Learning Systems.
Better Generative Models for Sequential Data Problems: Bidirectional Recurrent Mixture Density Networks

Mike Schuster
ATR Interpreting Telecommunications Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN
gustl@itl.atr.co.jp

Abstract

This paper describes bidirectional recurrent mixture density networks, which can model multi-modal distributions of the type P(x_t | y_1^T) and P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) without any explicit assumptions about the use of context. These expressions occur frequently in pattern recognition problems with sequential data, for example in speech recognition. Experiments show that the proposed generative models give a higher likelihood on test data compared to a traditional modeling approach, indicating that they can summarize the statistical properties of the data better.

1 Introduction

Many problems of engineering interest can be formulated as sequential data problems in an abstract sense as supervised learning from sequential data, where an input vector (dimensionality D) sequence X = x_1^T = {x_1, x_2, ..., x_{T-1}, x_T} living in space X has to be mapped to an output vector (dimensionality K) target sequence T = t_1^T = {t_1, t_2, ..., t_{T-1}, t_T} in space Y, a mapping that often embodies correlations between neighboring vectors x_t, x_{t+1} and t_t, t_{t+1}. (A sample sequence of the training target data is denoted as T, while an output sequence in general is denoted as Y; both live in the output space Y.) In general there are a number of training data sequence pairs (input and target), which are used to estimate the parameters of a given model structure, whose performance can then be evaluated on another set of test data pairs.

For many applications the problem becomes to predict the best sequence Y* given an arbitrary input sequence X, with 'best' meaning the sequence that minimizes an error under a suitable metric that is yet to be defined. Making use of the theory of pattern recognition [2], this problem is often simplified by treating any sequence as one pattern. This makes it possible to express the objective of sequence prediction with the well known expression Y* = argmax_Y P(Y|X), with X being the input sequence, Y being any valid output sequence, and Y* being the predicted sequence with the highest probability among all possible sequences. (To simplify notation, random variables and their values are not denoted by different symbols; that is, P(x) = P(X = x).)

Training of a sequence prediction system corresponds to estimating the distribution P(Y|X) from a number of samples, which includes (a) defining an appropriate model representing this distribution and (b) estimating its parameters such that P(Y|X) for the training data is maximized. In practice the model consists of several modules, with each of them being responsible for a different part of P(Y|X). Testing (usage) of the trained system, or recognition, for a given input sequence X corresponds principally to the evaluation of P(Y|X) for all possible output sequences to find the best one, Y*. This procedure is called the search, and its efficient implementation is important for many applications.

In order to build a model to predict sequences it is necessary to decompose the sequences such that modules responsible for smaller parts can be built. An often used approach is the decomposition into a generative and a prior model part, using P(B|A) = P(A|B)P(B)/P(A) and P(A,B) = P(A)P(B|A), as:
Y* = argmax_Y P(Y|X) = argmax_Y P(X|Y)P(Y)
   = argmax_Y [ prod_{t=1}^T P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) ] [ prod_{t=1}^T P(y_t | y_1, y_2, ..., y_{t-1}) ]   (1)

where the first bracket is the generative part and the second the prior part. For many applications (1) is approximated by simpler expressions, for example as a first-order Markov model

Y* ~ argmax_Y [ prod_{t=1}^T P(x_t | y_t) ] [ prod_{t=1}^T P(y_t | y_{t-1}) ]   (2)

making some simplifying approximations. These are, for this example:

- Every output y_t depends only on the previous output y_{t-1} and not on all previous outputs:
  P(y_t | y_1, y_2, ..., y_{t-1}) => P(y_t | y_{t-1})   (3)

- The inputs are assumed to be statistically independent in time:
  P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) => P(x_t | y_1^T)   (4)

- The likelihood of an input vector x_t given the complete output sequence y_1^T is assumed to depend only on the output found at t and not on any other ones:
  P(x_t | y_1^T) => P(x_t | y_t)   (5)

Assuming that the output sequences are categorical sequences (consisting of symbols), approximation (2) and derived expressions are the basis for many applications. For example, using Gaussian mixture distributions to model P(x_t | y_t) = p_k(x) for all K occurring symbols, approach (2) is used in a more sophisticated form in most state-of-the-art speech recognition systems.

The focus of this paper is to present some models for the generative part of (1) which need fewer assumptions. Ideally this means modeling directly expressions of the form P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T), the possibly multi-modal distribution of a vector conditioned on the previous vectors x_1, ..., x_{t-1} and a complete sequence y_1^T, as shown in the next section. (No distinction is made here between probability mass and density, usually denoted as P and p, respectively: if the quantity to model is categorical, a probability mass is assumed; if it is continuous, a probability density.)

2 Mixture density recurrent neural networks

Assume we want to model a continuous vector sequence conditioned on a sequence of categorical variables, as shown in Figure 1. One approach is to assume that the vector sequence can be modeled by a uni-modal Gaussian distribution with a constant variance, making it a uni-modal regression problem. There are many practical examples where this assumption doesn't hold, requiring a more complex output distribution to model multi-modal data. One example is the attempt to model the sounds of phonemes based on data from multiple speakers: a certain phoneme will sound completely different depending on its phonetic environment or on the speaker, and using a single Gaussian with a constant variance would lead to a crude averaging of all examples.

The traditional approach is to build generative models for each symbol separately, as suggested by (2). If conventional Gaussian mixtures are used to model the observed input vectors, then the parameters of the distribution (means, covariances, mixture weights) in general do not change with the temporal position of the vector to be modeled within a given state segment of that symbol. This can be a bad representation of the data in some areas (shown here are the means of a very bi-modal looking distribution), as indicated by the two variances drawn for the state 'E'. When used to model speech, a procedure often used to cope with this problem is to increase the number of symbols by grouping frequently occurring symbol sub-strings into new symbols and by subdividing each original symbol into a number of states.
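As a concrete reading of approximation (2), here is a minimal sketch of scoring one hypothesized symbol sequence, assuming the per-symbol emission model and the bigram prior are given as tables; the function and argument names are our own hypothetical choices, not the paper's.

```python
import numpy as np

def log_score(x_seq, y_seq, log_emit, log_trans, log_init):
    """log P(X|Y) + log P(Y) under approximations (3)-(5):
    log_emit(x, y) ~ log P(x_t | y_t),
    log_trans[i, j] ~ log P(y_t = j | y_{t-1} = i),
    log_init[j] ~ log P(y_1 = j)."""
    total = log_init[y_seq[0]] + log_emit(x_seq[0], y_seq[0])
    for t in range(1, len(y_seq)):
        total += log_trans[y_seq[t - 1], y_seq[t]]   # prior part of (2)
        total += log_emit(x_seq[t], y_seq[t])        # generative part of (2)
    return total
```

The search Y* = argmax_Y of this score over all valid sequences is what dynamic-programming decoders implement efficiently.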
Figure 1: Conventional Gaussian mixtures (left) and mixture density BRNNs (right) for multi-modal regression.

Another alternative is explored here, where all parameters of a Gaussian mixture distribution modeling the continuous targets are predicted by one bidirectional recurrent neural network, extended to model mixture densities conditioned on a complete vector sequence, as shown on the right side of Figure 1. Another extension (section 2.1) to the architecture allows the estimation of time-varying mixture densities conditioned on a hypothesized output sequence and a continuous vector sequence, to model exactly the generative term in (1) without any explicit approximations about the use of context.

Basics of non-recurrent mixture density networks (MLP type) can be found in [1][2]. The extension from uni-modal to multi-modal regression is somewhat involved but straightforward for the two interesting cases of having a radial covariance matrix or a diagonal covariance matrix per mixture component. They are trained with gradient-descent procedures like regular uni-modal regression NNs. Suitable equations to calculate the error that is back-propagated can be found in [6] for the two cases mentioned, and a derivation for the simple case in [1][2].

Conventional recurrent neural networks (RNNs) can model expressions of the form P(x_t | y_1, y_2, ..., y_t), the distribution of a vector given an input vector plus its past input vectors. Bidirectional recurrent neural networks (BRNNs) [5][6] are a simple
extension of conventional RNNs. The extension allows one to model expressions of the form P(x_t | y_1^T), the distribution of a vector given an input vector plus its past and following input vectors.

2.1 Mixture density extension for BRNNs

Here two types of extensions of BRNNs to mixture density networks are considered:

I) An extension to model expressions of the type P(x_t | y_1^T), a multi-modal distribution of a continuous vector conditioned on a vector sequence y_1^T, here labeled as mixture density BRNN of Type I.

II) An extension to model expressions of the type P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T), a probability distribution of a continuous vector conditioned on a vector sequence y_1^T and on its previous context in time x_1, x_2, ..., x_{t-1}. This architecture is labeled as mixture density BRNN of Type II.

The first extension of conventional uni-modal regression BRNNs to mixture density networks is not particularly difficult compared to the non-recurrent implementation, because the changes to model multi-modal distributions are completely independent of the structural changes that have to be made to form a BRNN. The second extension involves a structural change to the basic BRNN structure to incorporate x_1, x_2, ..., x_{t-1} as additional inputs, as shown in Figure 2. For any t the neighboring x_{t-1}, x_{t-2}, ... are incorporated by adding an additional set of weights to feed the hidden forward states with the extended inputs (the targets for the outputs) from the time step before. This includes x_{t-1} directly and x_{t-2}, x_{t-3}, ..., x_1 indirectly through the hidden forward neurons. This architecture allows one to estimate the generative term in (1) without making the explicit assumptions (4) and (5), since all the information x_t is conditioned on is theoretically available.

Figure 2: BRNN mixture density extension (Type II) (inputs: striped, outputs: black, hidden neurons: grey, additional inputs: dark grey). Note that without the backward states and the additional inputs this structure is a conventional RNN, unfolded in time.

Different from non-recurrent mixture density networks, the extended BRNNs can predict the parameters of a Gaussian mixture distribution conditioned on a vector sequence rather than a single vector; that is, at each time position t one parameter set (means, variances (actually standard deviations), mixture weights) conditioned on y_1^T for the BRNN of type I, and on x_1, x_2, ..., x_{t-1}, y_1^T for the BRNN of type II.
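To make the output distribution explicit, here is a hedged sketch (with hypothetical names) of how the M(D+2) network outputs at one time step can be mapped to mixture parameters with a radial covariance and scored by the negative log-likelihood. This shows the output layer only; the recurrent part is omitted.

```python
import numpy as np

def mdn_nll(raw, x, M):
    """raw: the M*(D+2) network outputs at one time step; x: D-dim target."""
    D = x.shape[-1]
    logits = raw[:M]
    w = np.exp(logits - logits.max()); w /= w.sum()  # mixture weights (softmax)
    s = np.exp(raw[M:2 * M])                         # radial std devs, kept > 0
    mu = raw[2 * M:].reshape(M, D)                   # component means
    sq = ((x - mu) ** 2).sum(axis=-1)
    log_comp = (np.log(w) - D * np.log(s)
                - 0.5 * D * np.log(2 * np.pi) - sq / (2 * s ** 2))
    m = log_comp.max()                               # log-sum-exp for stability
    return -(m + np.log(np.exp(log_comp - m).sum()))

# Example: M = 2 components, D = 3 dimensional target vector.
rng = np.random.default_rng(0)
print(mdn_nll(rng.normal(size=2 * (3 + 2)), rng.normal(size=3), M=2))
```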
3 Experiments and Results

The goal of the experiments is to show that the proposed models are more suitable for modeling speech data than traditional approaches, because they rely on fewer assumptions. The speech data used here has observation vector sequences representing the original waveform in a compressed form, where each vector is mapped to exactly one out of K phonemes. Three approaches are compared, which allow the estimation of the likelihood P(X|Y) with various degrees of approximation:

Conventional Gaussian mixture model, P(X|Y) ~ prod_{t=1}^T P(x_t|y_t): According to (2) the likelihood of a phoneme class vector is approximated by a conventional Gaussian mixture distribution; that is, a separate mixture model is built to estimate P(x|y) = p_k(x) for each of the K possible categorical states in y. In this case the two assumptions (4) and (5) are necessary. For the variance a radial covariance matrix (a single diagonal variance for all vector components) is chosen to match the conditions for the BRNN cases below. The number of parameters for the complete model is K·M(D+2) for M > 1. Several models of different complexity were trained (Table 1).

Mixture density BRNN I, P(X|Y) ~ prod_{t=1}^T P(x_t|y_1^T): One mixture density BRNN of type I, with the same number of mixture components and a radial covariance matrix for its output distribution as in the approach above, is trained by presenting complete sample sequences to it. Note that for type I all possible context dependencies (assumption (5)) are automatically taken care of, because the probability is conditioned on complete sequences y_1^T. The sequence y_1^T contains, for any t, not only the information about neighboring phonemes but also the position of a frame within a phoneme. In conventional systems this can only be modeled crudely by introducing a certain number of states per phoneme. The number of outputs for the network depends on the number of mixture components and is M(D+2). The total number of parameters can be adjusted by changing the number of hidden forward and backward state neurons, which was set here to 64 each.

Mixture density BRNN II, P(X|Y) = prod_{t=1}^T P(x_t|x_1, x_2, ..., x_{t-1}, y_1^T): One mixture density BRNN of type II, again with the same number of mixture components and a radial covariance matrix, is trained under the same conditions as above. Note that in this case both assumptions (4) and (5) are taken care of, because exactly expressions of the required form can be modeled by a mixture density BRNN of type II.

3.1 Experiments

The recommended training and test data of the TIMIT speech database [3] were used for the experiments. The TIMIT database comes with hand-aligned phonetic transcriptions for all utterances, which were transformed into sequences of categorical class numbers (training = 702438, test = 256617 vectors). The number of possible categorical classes is the number of phonemes, K = 61. The categorical data (input data for the BRNNs) is represented as K-dimensional vectors with the kth component being one and all others zero. The feature extraction for the waveforms, which resulted in the vector sequences x_1^T to model, was done as in most speech recognition systems [7]. The variances were normalized with respect to all training data, such that a radial variance for each mixture component in the model is a reasonable choice.

All three model types were trained with M = 1, 2, 3, 4 mixture components, the conventional Gaussian mixture model also with M = 8, 16. The number of resulting parameters, used as a rough complexity measure for the models, is shown in Table 1. The states of the triphone models were not clustered.

Table 1: Number of parameters for different types of models

    mixture components    1       2       3       4       8       16
    mono61 1-state        1952    3904    5856    7808    15616   31232
    mono61 3-state        5856    11712   17568   23424   46848   93696
    tri571 3-state        54816   109632  164448  219264  438528  877056
    BRNN I                20256   24384   28512   32640   -       -
    BRNN II               22176   26304   30432   34560   -       -

Training for the conventional approach using M mixtures of Gaussians was done using the EM algorithm. For some classes with only a few samples, M had to be reduced to reach a stationary point of the likelihood. Training of the BRNNs of both types must be done using a gradient descent algorithm; here a modified version of RPROP [4] was used, which is described in more detail in [6].

The measure used in comparing the tested approaches is the log-likelihood of training and test data given the models built on the training data. In the absence of a search algorithm to perform recognition this is a valid measure to evaluate the models, since maximizing log-likelihood on the training data is the objective for all model types. Note that the given alignment of vectors to phoneme classes is used in calculating the log-likelihood on the test data; there is no search for the best alignment.
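The evaluation measure can be written down directly. A minimal sketch, where model_log_p stands in for whichever of the three models is being evaluated (the name and interface are assumptions, not the paper's code):

```python
def avg_log_likelihood(model_log_p, frames, labels):
    """Average per-frame log-likelihood over an aligned (x_t, y_t) sequence;
    model_log_p(t, x, y) is assumed to return log P(x_t | .) for the model."""
    total = sum(model_log_p(t, x, y)
                for t, (x, y) in enumerate(zip(frames, labels)))
    return total / len(frames)
```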
3.2 Results

Figure 3 shows the average log-likelihoods, depending on the number of mixture components, for all tested approaches on training data (upper lines) and test data (lower lines). The baseline 1-state monophones give the lowest likelihood. The 3-state monophones are slightly better, but have a larger gap between training and test data likelihood. For comparison on the training data, a system with 571 distinct triphones (3 states each) was also trained; note that this system has many more parameters than the BRNN systems it was compared to (see Table 1). The results for the traditional Gaussian mixture systems show how the models improve when more detailed models are built for different (phonetic) contexts, i.e., by using more states and more context classes.

The mixture density BRNN of type I gives a higher likelihood than the traditional Gaussian mixture models. This was expected, because the BRNN type I models, in contrast to the traditional Gaussian mixture models, are able to include all possible phonetic context effects by removing assumption (5): a frame of a certain phoneme may be surrounded by frames of any other phonemes, with theoretically no restriction on the range of the contextual influence.

The mixture density BRNN of type II, which in addition removes the independence assumption (4), gives a significantly higher likelihood than all other models. Note that the difference in likelihood between training and test data for this model is very small, indicating a useful model for the underlying distribution of the data.
Data Visualization and Feature Selection: New Algorithms for Nongaussian Data

Howard Hua Yang and John Moody
Oregon Graduate Institute of Science and Technology
20000 NW Walker Rd., Beaverton, OR 97006, USA
hyang@ece.ogi.edu, moody@cse.ogi.edu, FAX: 503 748 1406

Abstract

Data visualization and feature selection methods are proposed based on the joint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better at eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem, to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.

Keywords: feature selection, joint mutual information, ICA, visualization, classification.

1 INTRODUCTION

Visualization of input data and feature selection are intimately related. A good feature selection algorithm can identify meaningful coordinate projections for low dimensional data visualization. Conversely, a good visualization technique can suggest meaningful features to include in a model.

Input variable selection is the most important step in the model selection process. Given a target variable, a set of input variables can be selected as explanatory variables using some prior knowledge. However, many irrelevant input variables cannot be ruled out by prior knowledge, and too many input variables irrelevant to the target variable will not only severely complicate the model selection/estimation process but also damage the performance of the final model.

Selecting input variables after model specification is a model-dependent approach [6]. However, these methods can be very slow if the model space is large. To reduce the computational burden in the estimation and selection processes, we need model-independent approaches to select input variables before model specification. One such approach is the δ-test [7]. Other approaches are based on the mutual information (MI) [2, 3, 4], which is very effective in evaluating the relevance of each input variable but fails to eliminate redundant variables.

In this paper, we focus on the model-independent approach for input variable selection based on joint mutual information (JMI). The increment from MI to JMI is the conditional MI. Although the conditional MI was used in [4] to show the monotonic property of the MI, it was not used for input selection.

Data visualization is very important for humans to understand the structural relations among variables in a system. It is also a critical step in eliminating some unrealistic models. We give two methods for data visualization, one based on the JMI and another based on Independent Component Analysis (ICA). Both methods perform better than some existing methods, such as those based on PCA and canonical correlation analysis (CCA), for nongaussian data.

2 Joint mutual information for input/feature selection

Let Y be a target variable and let the X_i be inputs. The relevance of a single input is measured by the MI

I(X_i; Y) = K( p(x_i, y) || p(x_i)p(y) ),

where K(p||q) is the Kullback-Leibler divergence of two probability functions p and q, defined by K(p(x)||q(x)) = sum_x p(x) log( p(x)/q(x) ). The relevance of a set of inputs is defined by the joint mutual information

I(X_1, ..., X_k; Y) = K( p(x_1, ..., x_k, y) || p(x_1, ..., x_k)p(y) ).
Given two selected inputs X_j and X_k, the conditional MI is defined by

I(X_i; Y | X_j, X_k) = sum_{x_j, x_k} p(x_j, x_k) K( p(x_i, y | x_j, x_k) || p(x_i | x_j, x_k) p(y | x_j, x_k) ),

and I(X_i; Y | X_j, ..., X_k), conditioned on more than two variables, is defined similarly. The conditional MI is always non-negative, since it is a weighted average of Kullback-Leibler divergences. It has the following property:

I(X_1, ..., X_{n-1}, X_n; Y) − I(X_1, ..., X_{n-1}; Y) = I(X_n; Y | X_1, ..., X_{n-1}) ≥ 0.

Therefore, I(X_1, ..., X_{n-1}, X_n; Y) ≥ I(X_1, ..., X_{n-1}; Y); i.e., adding the variable X_n never decreases the mutual information. The information gained by adding a variable is measured by the conditional MI.

When X_n and Y are conditionally independent given X_1, ..., X_{n-1}, the conditional MI between X_n and Y is

I(X_n; Y | X_1, ..., X_{n-1}) = 0,   (1)

so X_n provides no extra information about Y when X_1, ..., X_{n-1} are known. In particular, when X_n is a function of X_1, ..., X_{n-1}, the equality (1) holds. This is the reason why the joint MI can be used to eliminate redundant inputs.

The conditional MI is useful when the input variables cannot be distinguished by the mutual information I(X_i; Y). For example, assume I(X_1; Y) = I(X_2; Y) = I(X_3; Y), and the problem is to select (X_1, X_2), (X_1, X_3) or (X_2, X_3). Since

I(X_1, X_2; Y) − I(X_1, X_3; Y) = I(X_2; Y | X_1) − I(X_3; Y | X_1),

we should choose (X_1, X_2) rather than (X_1, X_3) if I(X_2; Y | X_1) > I(X_3; Y | X_1); otherwise, we should choose (X_1, X_3). All possible comparisons are represented by a binary tree in Figure 1.

To estimate I(X_1, ..., X_k; Y), we need to estimate the joint probability p(x_1, ..., x_k, y). This suffers from the curse of dimensionality when k is large. Sometimes we may not be able to estimate high dimensional MI due to sample shortage, and further work is needed to estimate high dimensional joint MI based on parametric and non-parametric density estimation when the sample size is not large enough. In some real world problems, such as mining large data bases and radar pulse classification, the sample size is large. Since the parametric densities for the underlying distributions are unknown, it is better to use non-parametric methods such as histograms to estimate the joint probability and the joint MI, to avoid the risk of specifying a wrong or too complicated model for the true density function.

Figure 1: Input selection based on the conditional MI.

In this paper, we use the joint mutual information I(X_i, X_j; Y) instead of the mutual information I(X_i; Y) to select inputs for a neural network classifier. Another application is to select the two inputs most relevant to the target variable for data visualization.
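For discrete (or discretized) variables, the MI and conditional MI above can be estimated directly from histogram counts. A rough sketch, with our own minimal estimator choices rather than the authors' exact code:

```python
import numpy as np

def mi(x, y):
    """I(X;Y) in bits from paired samples of discrete values."""
    xs, ys = np.unique(x), np.unique(y)
    p_xy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0                                   # avoid log of zero cells
    ratio = p_xy[nz] / (p_x * p_y)[nz]
    return float((p_xy[nz] * np.log2(ratio)).sum())

def cond_mi(x, y, z):
    """I(X;Y|Z) as the p(z)-weighted average of I(X;Y | Z=z)."""
    return float(sum((z == c).mean() * mi(x[z == c], y[z == c])
                     for c in np.unique(z)))
```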
3 Data visualization methods

We present supervised data visualization methods based on the joint MI, and discuss unsupervised methods based on ICA. The most natural way to visualize high-dimensional input patterns is to display them using two of the existing coordinates, where each coordinate corresponds to one input variable. The inputs most relevant to the target variable give the best coordinates for data visualization. Let (i*, j*) = argmax_{(i,j)} I(X_i, X_j; Y). Then the coordinate axes (X_{i*}, X_{j*}) should be used for visualizing the input patterns, since the corresponding inputs achieve the maximum joint MI.

To find the maximum I(X_{i*}, X_{j*}; Y), we need to evaluate every joint MI I(X_i, X_j; Y) for i < j; the number of evaluations is O(n²). Noticing that I(X_i, X_j; Y) = I(X_i; Y) + I(X_j; Y | X_i), we can instead first maximize the MI I(X_i; Y), then maximize the conditional MI. This algorithm is suboptimal, but requires only n − 1 evaluations of the joint MIs. Sometimes this is equivalent to exhaustive search; one such example is given in the next section.

Some existing methods to visualize high-dimensional patterns are based on dimensionality reduction methods such as PCA and CCA, which find new coordinates to display the data. The new coordinates found by PCA and CCA are orthogonal in Euclidean space and in the space with the Mahalanobis inner product, respectively. However, these two methods are not suitable for visualizing nongaussian data, because the projections on the PCA or CCA coordinates are not statistically independent for nongaussian vectors. Since the JMI method is model-independent, it is better for analyzing nongaussian data.

Both CCA and the maximum joint MI are supervised methods, while the PCA method is unsupervised. An alternative to these methods is ICA for visualizing clusters [5]. ICA is a technique to transform a set of variables into a new set of variables so that the statistical dependency among the transformed variables is minimized. The version of ICA that we use here is based on the algorithms in [1, 8]. It discovers a non-orthogonal basis that minimizes mutual information between projections on the basis vectors. We shall compare these methods in a real world application.

4 Application to Signal Visualization and Classification

4.1 Joint mutual information and visualization of radar pulse patterns

Our goal is to design a classifier for radar pulse recognition. Each radar pulse pattern is a 15-dimensional vector. We first compute the joint MIs, then use them to select inputs for the visualization and classification of radar pulse patterns. A set of radar pulse patterns is denoted by D = {(z^i, y^i) : i = 1, ..., N}, which consists of patterns in three different classes. Here each z^i ∈ R^15 and each y^i ∈ {1, 2, 3}.

Figure 2: (a) MI vs conditional MI for the radar pulse data; maximizing the MI and then the conditional MI with O(n) evaluations gives I(X_{i1}, X_{j1}; Y) = 1.201 bits. (b) The joint MI for the radar pulse data; maximizing the joint MI gives I(X_{i*}, X_{j*}; Y) = 1.201 bits with O(n²) evaluations of the joint MI. (i1, j1) = (i*, j*) in this case.

Let i1 = argmax_i I(X_i; Y) and j1 = argmax_{j ≠ i1} I(X_j; Y | X_{i1}). From Figure 2(a), we obtain (i1, j1) = (2, 9) and I(X_{i1}, X_{j1}; Y) = I(X_{i1}; Y) + I(X_{j1}; Y | X_{i1}) = 1.201 bits. If the number of inputs is n in total, then the number of evaluations for computing the mutual information I(X_i; Y) and the conditional mutual information I(X_j; Y | X_{i1}) is O(n).

To find the maximum I(X_{i*}, X_{j*}; Y), we evaluate every I(X_i, X_j; Y) for i < j. These MIs are shown by the bars in Figure 2(b), where the i-th bundle displays the MIs I(X_i, X_j; Y) for j = i+1, ..., 15. In order to compute the joint MIs this way, the MI and the conditional MI are evaluated O(n) and O(n²) times, respectively. The maximum joint MI is I(X_{i*}, X_{j*}; Y) = 1.201 bits.
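The O(n) selection just described is then a few lines, reusing mi() and cond_mi() from the previous sketch:

```python
def select_pair(X, y):
    """X: (N, n) array of discretized inputs, y: (N,) targets.
    Greedy (i1, j1): maximize I(X_i;Y), then I(X_j;Y | X_{i1})."""
    n = X.shape[1]
    i1 = max(range(n), key=lambda i: mi(X[:, i], y))
    j1 = max((j for j in range(n) if j != i1),
             key=lambda j: cond_mi(X[:, j], y, X[:, i1]))
    return i1, j1
```

For the radar data, this greedy pair coincides with the exhaustive-search optimum, as noted above.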
Generally, we only know I(X_{i1}, X_{j1}; Y) ≤ I(X_{i*}, X_{j*}; Y), but in this particular application the equality holds. This suggests that sometimes we can use an efficient algorithm with only linear complexity to find the optimal coordinate axis view (X_{i*}, X_{j*}). The joint MI also gives other good sets of coordinate axis views with high joint MI values.

Figure 3: (a) Data visualization by two principal components; the spatial relation between patterns is not clear. (b) Using the optimal coordinate axis view (X_{i*}, X_{j*}) found via joint MI to project the radar pulse data; the patterns are well spread, giving a better view of the spatial relation between patterns and the boundary between classes. (c) The CCA method. (d) The ICA method.

Each bar in Figure 2(b) is associated with a pair of inputs; those pairs with high joint MI give good coordinate axis views for data visualization. Figure 3 shows that the data visualizations by the maximum JMI and the ICA are better than those by the PCA and the CCA, because the data is nongaussian.

4.2 Radar pulse classification

Now we train a two-layer feed-forward network to classify the radar pulse patterns. Figure 3 shows that it is very difficult to separate the patterns using just two inputs, so we shall use either all inputs or four selected inputs. The data set D is divided into a training set D1 and a test set D2, with D2 consisting of 20 percent of the patterns in D. The network trained on the data set D1 using all input variables is denoted by

Y = f(X_1, ..., X_n; W_1, W_2, θ),

where W_1 and W_2 are weight matrices and θ is a vector of thresholds for the hidden layer. From the data set D, we estimate the mutual information I(X_i; Y) and select i1 = argmax_i I(X_i; Y). Given X_{i1}, we estimate the conditional mutual information I(X_j; Y | X_{i1}) for j ≠ i1, and choose the three inputs X_{i2}, X_{i3} and X_{i4} with the largest conditional MI. We found the quartet (i1, i2, i3, i4) = (1, 2, 3, 9). The two-layer feed-forward network trained on D1 with the four selected inputs is denoted by

Y = g(X_1, X_2, X_3, X_9; W'_1, W'_2, θ').

There are 1365 choices for selecting 4 input variables out of 15. To set a reference performance for networks with four inputs, we choose 20 quartets from the set Q = {(j1, j2, j3, j4) : 1 ≤ j1 < j2 < j3 < j4 ≤ 15}. For each quartet (j1, j2, j3, j4), a two-layer feed-forward network is trained using the inputs (X_{j1}, X_{j2}, X_{j3}, X_{j4}). These networks are denoted by

Y = h_i(X_{j1}, X_{j2}, X_{j3}, X_{j4}; W''_1, W''_2, θ''),  i = 1, 2, ..., 20.
Figure 4: (a) The error rates of the network with four inputs (X_1, X_2, X_3, X_9) selected by the joint MI are well below the average error rates (with error bars attached) of the 20 networks with randomly selected input quartets; this shows that the input quartet (X_1, X_2, X_3, X_9) is rare but informative. (b) The network with the inputs (X_1, X_2, X_3, X_9) converges faster than the network with all inputs. The former uses 65% fewer parameters (weights and thresholds) and 73% fewer inputs than the latter. The classifier with the four best inputs is less expensive to construct and use, in terms of data acquisition costs, training time, and computing costs for real-time application.

The mean and the variance of the error rates of the 20 networks are then computed. All networks have seven hidden units. The training and testing error rates of the networks at each epoch are shown in Figure 4, where we see that the network with the four inputs selected by the joint MI performs better than the networks with randomly selected input quartets, and converges faster than the network with all inputs. The network with fewer inputs is not only faster in computing but also less expensive in data collection.

5 CONCLUSIONS

We have proposed data visualization and feature selection methods based on the joint mutual information and ICA. The maximum JMI method can find many good 2-D projections for visualizing high dimensional data which cannot be easily found by the other existing methods. Both the maximum JMI method and the ICA method are very effective for visualizing nongaussian data. The variable selection method based on the JMI is found to be better at eliminating redundancy in the inputs than other methods based on simple mutual information.

Input selection methods based on mutual information (MI) have been useful in many applications, but they have two disadvantages. First, they cannot distinguish inputs when all of them have the same MI. Second, they cannot eliminate the redundancy in the inputs when one input is a function of other inputs. In contrast, our new input selection method based on the joint MI offers significant advantages in these two aspects.

We have successfully applied these methods to visualize radar patterns and to select inputs for a neural network classifier to recognize radar pulses. We found a smaller yet more robust neural network for radar signal analysis using the JMI.

Acknowledgement: This research was supported by grant ONR N00014-96-10476.

References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In Advances in Neural Information Processing Systems 8, eds. David S. Touretzky, Michael C. Mozer and Michael E. Hasselmo, MIT Press: Cambridge, MA, pages 757-763, 1996.
[2] G. Barrows and J. Sciortino. A mutual information measure for feature selection with application to pulse classification. In IEEE Intern. Symposium on Time-Frequency and Time-Scale Analysis, pages 249-253, 1996.
[3] R. Battiti. Using mutual information for selecting features in supervised neural net learning. IEEE Trans. on Neural Networks, 5(4):537-550, July 1994.
[4] B. Bonnlander. Nonparametric selection of input variables for connectionist learning. PhD thesis, University of Colorado, 1996.
[5] C. Jutten and J. Herault. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24:1-10, 1991.
[6] J. Moody.
Prediction risk and architecture selection for neural networks. In V. Cherkassky, J. H. Friedman, and H. Wechsler, editors, From Statistics to Neural Networks: Theory and Pattern Recognition Applications. NATO ASI Series F, Springer-Verlag, 1994.
[7] H. Pi and C. Peterson. Finding the embedding dimension and variable dependencies in time series. Neural Computation, 6:509-520, 1994.
[8] H. H. Yang and S. Amari. Adaptive on-line learning algorithms for blind separation: Maximum entropy and minimum mutual information. Neural Computation, 9(7):1457-1482, 1997.
A MODEL FOR RESOLUTION ENHANCEMENT (HYPERACUITY) IN SENSORY REPRESENTATION

Jun Zhang and John P. Miller
Neurobiology Group, University of California, Berkeley, California 94720, U.S.A.

ABSTRACT

Heiligenberg (1987) recently proposed a model to explain how sensory maps could enhance resolution through orderly arrangement of broadly tuned receptors. We have extended this model to the general case of polynomial weighting schemes and proved that the response function is also a polynomial of the same order. We further demonstrated that the Hermitian polynomials are eigenfunctions of the system. Finally, we suggested a biologically plausible mechanism for sensory representation of external stimuli with resolution far exceeding the inter-receptor separation.

1 INTRODUCTION

In sensory systems, the stimulus continuum is sampled at discrete points by receptors of finite tuning width d and inter-receptor spacing a. In order to code both stimulus locus and stimulus intensity with a single output, the sampling of individual receptors must be overlapping (i.e., a < d). This discrete and overlapped sampling of the stimulus continuum poses the question of how the system could then reconstruct the sensory stimuli with a resolution exceeding that specified by the inter-receptor spacing. This is known as the hyperacuity problem (Westheimer, 1975).

Heiligenberg (1987) proposed a model in which an array of receptors (with Gaussian-shaped tuning curves) is distributed uniformly along the entire range of the stimulus variable x. The receptors contribute excitation to a higher order interneuron, with the synaptic weight of each receptor's input set proportional to its rank index k in the receptor array. Numerical simulation and subsequent mathematical analysis (Baldi and Heiligenberg, 1988) demonstrated that, so long as a ≪ d, the response function f(x) of the higher order neuron is monotone increasing and surprisingly linear. The smoothness of this function offers a partial explanation of the general phenomenon of hyperacuity (see Baldi and Heiligenberg in this volume).

Here we consider various extensions of this model. Only the main results shall be stated below; their proofs are presented elsewhere (Zhang and Miller, in preparation).

2 POLYNOMIAL WEIGHTING FUNCTIONS

First, the model can be extended to incorporate other weighting schemes. The weighting function w(k) specifies the strength of the excitation from the k-th receptor onto the higher order interneuron and therefore determines the shape of its response f(x). In Heiligenberg's original model, the linear weighting scheme w(k) = k is used. A natural extension would then be polynomial weighting schemes. Indeed, we proved that, for sufficiently large d:

a) If w(k) = k^{2m}, then f(x) = a_0 + a_2 x^2 + ... + a_{2m} x^{2m}; if w(k) = k^{2m+1}, then f(x) = a_1 x + a_3 x^3 + ... + a_{2m+1} x^{2m+1}, where m = 0, 1, 2, ..., and the a_i are real constants. Note that for w(k) = k^p, f(x) has parity (−1)^p; that is, it is an odd function for odd integer p and an even function for even integer p. The case p = 1 reduces to the linear weighting scheme of Heiligenberg's original model.

b) If w(k) = c_0 + c_1 k + c_2 k^2 + ... + c_p k^p, then f(x) = a_0 + a_1 x + ... + a_p x^p. Note that this is a direct result of a), because f(x) depends linearly on w(k). The coefficients c_i and a_i are usually different for the two polynomials.

One would naturally ask: what kind of polynomial weighting function would yield an identical polynomial response function? This leads to the important conclusion:

c) If w(k) = H_p(k) is an Hermitian polynomial, then f(x) = H_p(x), the same Hermitian polynomial.

The Hermitian polynomial H_p(t) is a well-studied function in mathematics. It is defined as

H_p(t) = (−1)^p e^{t²} (d^p/dt^p) e^{−t²}.

For reference purposes, the first four polynomials are given here:

H_0(t) = 1,  H_1(t) = 2t,  H_2(t) = 4t² − 2,  H_3(t) = 8t³ − 12t.

The conclusion of c) tells us that Hermitian polynomials are unique in the sense that they serve as eigenfunctions of the system.
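A quick numeric illustration of the model and of result a), under illustrative (not anatomically meaningful) values of a and d:

```python
import numpy as np

a, d = 1.0, 10.0                       # spacing a << tuning width d
k = np.arange(-300, 301)               # receptor rank indices

def response(x, w):
    """f(x) = sum over receptors of w(k) times a Gaussian tuning curve."""
    return np.sum(w(k) * np.exp(-((x - k * a) ** 2) / (2 * d ** 2)))

xs = np.linspace(-5, 5, 21)
lin = np.array([response(x, lambda k: k) for x in xs])        # w(k) = k
quad = np.array([response(x, lambda k: k ** 2) for x in xs])  # w(k) = k**2
print(np.polyfit(xs, lin, 1))    # essentially a straight line through zero
print(np.polyfit(xs, quad, 2))   # an even quadratic: the x term is ~0
```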
3 REPRESENTATION OF SENSORY STIMULUS

Heiligenberg's model deals with the general problem of two-point resolution, i.e., how a sensory system can resolve two nearby point stimuli with a resolution exceeding the inter-receptor spacing. Here we go one step further and ask how a generalized sensory stimulus g(x) is encoded and represented beyond the receptor level with a resolution exceeding the inter-receptor spacing. We will show that if, instead of a single higher order interneuron, we have a group or layer of interneurons, each connected to the array of sensory receptors using different but appropriately chosen weighting schemes w_n(k), then the representation of the sensory stimulus by this interneuron group (in terms of f_n, each interneuron's response) is uniquely determined, with enhanced resolution (see the figure below).

[Figure: an interneuron group, each neuron connected to the full receptor array.]

Suppose that 1) each interneuron in this group receives input from the receptor array, with its weighting characterized by a Hermitian polynomial H_p(k); and that 2) the order p of the Hermitian polynomial is different for each interneuron. We know from mathematics that any stimulus function g(x) satisfying certain boundary conditions can be decomposed in the following way:

g(x) = sum_{n=0}^∞ c_n H_n(x) e^{−x²}

The decomposition is unique in the sense that the c_n completely determine g(x). Here we have proved that the response f_p of the p-th interneuron (adopting H_p(k) as its weighting scheme) is proportional to c_p:

f_p ∝ c_p.

This implies that g(x) can be uniquely represented by the responses of this set of interneurons {f_p}. Note that the precision of the representation at this higher stage is limited not by the receptor separation, but by the number of neurons available in the interneuron group.
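A numeric check of the claim f_p ∝ c_p, under our simplifying assumption (made for the check only, not stated in the paper) that each receptor effectively reports g at its own position; the result then follows from the Hermite orthogonality relation integral H_p(t) H_n(t) exp(-t^2) dt = sqrt(pi) * 2^n * n! * delta_pn.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

a = 0.01
x = np.arange(-8.0, 8.0, a)               # receptor positions
c = np.array([0.5, -1.0, 0.25, 2.0])      # chosen stimulus coefficients c_n
g = hermval(x, c) * np.exp(-x ** 2)       # g(x) = sum_n c_n H_n(x) exp(-x^2)

for p in range(4):
    w = hermval(x, np.eye(4)[p])          # weights H_p(k) at receptor sites
    f_p = np.sum(w * g) * a               # interneuron response (Riemann sum)
    norm = math.sqrt(math.pi) * 2 ** p * math.factorial(p)
    print(p, f_p / norm)                  # recovers c_p for each p
```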
4 EDGE EFFECTS

Since the array of receptors must actually be finite in extent, simple weighting schemes may result in edge effects which severely degrade stimulus resolution near the array boundaries. For instance, the linear model investigated by Heiligenberg and Baldi will have regions of degeneracy where two nearby point stimuli, if located near the boundary defined by the receptor array coverage, may yield the same response. We argue that this region of degeneracy can be eliminated or reduced in the following situations:

1) If w(k) approaches zero as k goes to infinity, then the receptor array can still be treated as having infinite extent, since the contributions by the large-index receptors are negligibly small. We proved, using Fourier analysis, that this kind of vanishing-at-infinity weighting scheme can also achieve resolution enhancement, provided that the tuning width of the receptors is sufficiently larger than the inter-receptor spacing and at the same time sufficiently smaller than the effective width of the entire weighting function.

2) If the receptor array "wraps around" into a circular configuration, then it can again be treated as infinite (but periodic) along the angular dimension. This is exactly the case in the wind-sensitive cricket cercal sensory system (Jacobs et al., 1986; Jacobs and Miller, 1988), where the population of directionally selective mechano-receptors covers the entire range of 360 degrees.

5 CONCLUSION

Heiligenberg's model, which employs an array of orderly arranged and broadly tuned receptors to enhance two-point resolution, can be extended in a number of ways. We first proved the general result that the model works for any polynomial weighting scheme. We further demonstrated that the Hermitian polynomial is the eigenfunction of this system. This leads to the new concept of stimulus representation, i.e., a group of higher-order interneurons can encode any generalized sensory stimulus with enhanced resolution if they adopt appropriately chosen weighting schemes. Finally, we discussed possible ways of eliminating or reducing the "edge effects".

ACKNOWLEDGMENTS

This work was supported by NIH grant # RO1-NS26117.

REFERENCES

Baldi, P. and W. Heiligenberg (1988) How sensory maps could enhance resolution through ordered arrangements of broadly tuned receivers. Biol. Cybern. 59: 314-318.
Heiligenberg, W. (1987) Central processing of sensory information in electric fish. J. Comp. Physiol. A 161: 621-631.
Jacobs, G.A. and J.P. Miller (1988) Analysis of synaptic integration using the laser photo-inactivation technique. Experientia 44: 361-462.
Jacobs, G.A., Miller, J.P. and R.K. Murphey (1986) Cellular mechanisms underlying directional sensitivity of an identified sensory interneuron. J. Neurosci. 6: 2298-2311.
Westheimer, G. (1975) Visual acuity and hyperacuity. Invest. Ophthalmol. Vis. 14: 570-572.
Learning Statistically Neutral Tasks without Expert Guidance

Ton Weijters, Information Technology, Eindhoven University, The Netherlands
Antal van den Bosch, ILK, Tilburg University, The Netherlands
Eric Postma, Computer Science, Universiteit Maastricht, The Netherlands

Abstract

In this paper, we question the necessity of levels of expert-guided abstraction in learning hard, statistically neutral classification tasks. We focus on two tasks, date calculation and parity-12, that are claimed to require intermediate levels of abstraction that must be defined by a human expert. We challenge this claim by demonstrating empirically that a single hidden-layer BP-SOM network can learn both tasks without guidance. Moreover, we analyze the network's solution for the parity-12 task and show that its solution makes use of an elegant intermediary checksum computation.

1 Introduction

Breaking up a complex task into many smaller and simpler subtasks facilitates its solution. Such task decomposition has proved to be a successful technique in developing algorithms and in building theories of cognition. In their study and modeling of the human problem-solving process, Newell and Simon [1] employed protocol analysis to determine the subtasks human subjects employ in solving a complex task. Even nowadays, many cognitive scientists take task decomposition, i.e., the necessity of explicit levels of abstraction, as a fundamental property of human problem solving. Dennis Norris' [2] modeling study on the problem-solving capacity of autistic savants is a case in point. In the study, Norris focuses on the date-calculation task (i.e., to calculate the day of the week a given date fell on), which some autistic savants have been reported to perform flawlessly [3]. In an attempt to train a multi-layer neural network on the task, Norris failed to get a satisfactory level of generalization performance. Only by decomposing the task into three subtasks, and training a separate network on each of them, could the date-calculation task be learned. Norris concluded that the date-calculation task is solvable (learnable) only when it is decomposed into intermediary steps with human assistance [2].

The date-calculation task is a very hard task for inductive learning algorithms, because it is a statistically neutral task: all conditional output probabilities on any input feature have chance values. Solving the task implies decomposing it, if possible, into subtasks that are not statistically neutral. The only suggested decomposition of the date-calculation task known to date involves explicit assistance from a human supervisor [2]. This paper challenges the decomposition assumption by showing that the date-calculation task can be learned in a single step with an appropriately constrained single hidden-layer neural network. In addition, another statistically neutral task, the parity-n task (given an n-length bit string of 1's and 0's, calculate whether the number of 1's is even or odd), is investigated. In an experimental study by Dehaene, Bossini, and Giraux [4], it is claimed that humans decompose the parity-n task by first counting over the input string and then performing the even/odd decision. In our study, parity-12 is shown to be learnable by a network with a single hidden layer.

[Figure 1: An example BP-SOM network, consisting of an MFN and an associated SOM; the SOM legend distinguishes class A elements, class B elements, and unlabelled elements.]

2 BP-SOM

Below we give a brief characterization of the functioning of BP-SOM.
For details we refer to [5]. The aim of the BP-SOM learning algorithm is to establish a cooperation between BP learning and SOM learning in order to find adequately constrained hidden-layer representations for learning classification tasks. To achieve this aim, the traditional MFN architecture [6] is combined with SOMs [7]: each hidden layer of the MFN is associated with one SOM (see Figure 1). During training of the weights in the MFN, the corresponding SOM is trained on the hidden-unit activation patterns. After a number of training cycles of BP-SOM learning, each SOM develops a two-dimensional representation that is translated into classification information; i.e., each SOM element is provided with a class label (one of the output classes of the task). For example, let the BP-SOM network displayed in Figure 1 be trained on a classification task which maps instances to either output class A or B. Three types of elements can then be distinguished in the SOM: elements labelled with class A, elements labelled with class B, and unlabelled elements (for which no winning class could be found).

The two-dimensional representation of the SOM is used as an addition to the standard BP learning rule [6]. Classification and reliability information from the SOMs is included when updating the connection weights of the MFN. The error of a hidden-layer vector is an accumulation of the error computed by the BP learning rule and a SOM error. The SOM error is the difference between the hidden-unit activation vector and the vector of its best-matching element associated with the same class on the SOM. An important effect of including SOM information in the error signals is that clusters of hidden-unit activation vectors of instances associated with the same class tend to become increasingly similar to each other. On top of this effect, individual hidden-unit activations tend to become more streamlined, and often end up having activations near one of a limited number of discrete values.

3 The date-calculation task

The first statistically neutral calculation task we consider is the date-calculation task: determining the day of the week on which a given date fell. (For instance, October 24, 1997 fell on a Friday.) Solving the task requires an algorithmic approach that is typically hard for human calculators and requires one or more intermediate steps. It is generally assumed that the identity of these intermediate steps follows from the algorithmic solution, although variations exist in the steps as reportedly used by human experts [2]. We will show that such explicit abstraction is not needed, after reviewing the case for the necessity of "human assistance" in learning the task.

3.1 Date calculation with expert-based abstraction

Norris [2] attempted to model autistic savant date calculators using a multi-layer feedforward network (MFN) and the back-propagation learning rule [6]. He intended to build a model mimicking the behavior of the autistic savant without the need either to develop arithmetical skills or to encode explicit knowledge about regularities in the structure of dates. A standard multilayer network trained with backpropagation [6] was not able to solve the date-calculation task. Although the network was able to learn the examples used for training, it did not manage to generalize to novel date-day combinations. In a second attempt, Norris split up the date-calculation task into three simpler subtasks and networks.
Using the three-stage learning strategy, Norris obtained a nearly perfect performance on the training material and a performance of over 90% on the test material (errors are almost exclusively made on dates falling in January or February of leap years). He concludes with the observation that "The only reason that the network was able to learn so well was because it had some human assistance." [2, p. 285]. In addition, Norris claims that "even if the [backpropagation] net did have the right number of layers there would be no way for the net to distribute its learning throughout the net such that each layer learned the appropriate step in computation." [2, p. 290].

3.2 Date calculation without expert-based abstraction

We demonstrate that with the BP-SOM learning rule, a single hidden-layer feedforward network can become a successful date calculator. Our experiment compares three types of learning: standard backpropagation learning (BP, [6]), backpropagation learning with weight decay (BPWD, [8]), and BP-SOM learning. Norris used BP learning in his experiment, which leads to overfitting [2] (a considerably lower generalization accuracy on new material as compared to reproduction accuracy on training material); BPWD learning was included to avoid overfitting. The parameter values for BP (including the number of hidden units for each task) were optimized by performing pilot experiments with BP. The optimal learning-rate and momentum values were 0.15 and 0.4, respectively. BP, BPWD, and BP-SOM were trained for a fixed number of cycles m = 2000. Early stopping, a common method to prevent overfitting, was used in all experiments with BP, BPWD, and BP-SOM [9].

In our experiments with BP-SOM, we used the same interval of dates as used by Norris, i.e., training and test dates ranged from January 1, 1950 to December 31, 1999. We generated two training sets, each consisting of 3,653 randomly selected instances, i.e., one-fifth of all dates. We also generated two corresponding test sets and two validation sets (with 1,000 instances each) of new dates within the same 50-year period. In all our experiments, the training set, test set, and validation set had empty intersections. We partitioned the input into three fields, representing the day of the month (31 units), the month (12 units) and the year (50 units); this encoding is sketched below. The output is represented by 7 units, one for each day of the week. The MFN contained one hidden layer with 12 hidden units for BP, and 25 hidden units for BPWD and BP-SOM. The SOM of the BP-SOM network contained 12 x 12 elements.

[Table 1: Average generalization performances (plus standard deviation, after '±'; averaged over ten experiments) in terms of incorrectly-processed training and test instances, of BP, BPWD, and BP-SOM, trained on the date-calculation task and the parity-12 task.]

Each of the three learning types was tested on two different data sets. Five runs with different random weight initializations were performed on each set, yielding ten runs per learning type. The averaged classification errors on the test material are reported in Table 1. From Table 1 it follows that the average classification error of BP is high: on test instances BP yields a classification error of 28.8%, while the classification error of BP on training instances is 20.8%. Compared to the classification error of BP, the classification errors on both training and test material of BPWD and BP-SOM are much lower.
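As a concrete illustration of the input representation just described (our sketch, not the authors' code; the encoding layout follows the text above, and the weekday target uses Python's calendar arithmetic rather than a learned mapping):

```python
from datetime import date
import numpy as np

def encode_date(day, month, year):
    """One-hot input vector: 31 units for the day of the month, 12 for the
    month, and 50 for the year (1950-1999), i.e. 93 input units in total."""
    assert 1950 <= year <= 1999
    x = np.zeros(31 + 12 + 50)
    x[day - 1] = 1.0                 # day-of-month field
    x[31 + month - 1] = 1.0          # month field
    x[31 + 12 + year - 1950] = 1.0   # year field
    return x

def encode_target(day, month, year):
    """One-hot target over the 7 days of the week (here 0 = Monday)."""
    t = np.zeros(7)
    t[date(year, month, day).weekday()] = 1.0
    return t

x, t = encode_date(24, 10, 1997), encode_target(24, 10, 1997)
print(x.shape, int(t.argmax()))  # (93,) 4 -> Friday, as in the example above
```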
However, BPWD's generalization performance on the test material is considerably worse than its performance on the training material: a clear indication of overfitting. We note in passing that the results of BPWD contrast with Norris' [2] claim that BP is unable to learn the date-calculation task when it is not decomposed into subtasks. The inclusion of weight decay in BP is sufficient for a good approximation of the performance results of Norris' decomposed network. The results in Table 1 also show that the performance of BP-SOM on test material is significantly better than that of BPWD (t(19) = 7.39, p < 0.001); BP-SOM has learned the date-calculation task at a level well beyond the average of human date calculators as reported by Norris [2]. In contrast with Norris' pre-structured network, BP-SOM does not rely on expert-based levels of abstraction for learning the date-calculation task.

4 The parity-12 task

The parity-n problem, starting from the XOR problem (parity-2), continues to be a relevant topic on the agenda of many neural network and machine learning researchers. Its definition is simple (determine whether there is an odd or even number of 1's in an n-length bit string of 1's and 0's), but established state-of-the-art algorithms such as C4.5 [10] and backpropagation [6] cannot learn it even with small n; backpropagation fails with n ≥ 4 [11]. That is, these algorithms are unable to generalize from learning instances of a parity-n task to unseen new instances of the same task. As with date calculation, this is due to the statistical neutrality of the task. The solution to the problem must lie in having some comprehensive overview of all input values at an intermediary step before the odd/even decision is made. Indeed, humans appear to follow this strategy [4].

[Figure 2: Graphic representation of a 7 x 7 SOM associated with a BP-trained MFN (left) and a BPWD-trained MFN (middle), and a 7 x 7 SOM associated with a BP-SOM network (right), all trained on the parity-12 task.]

Analogous to our study of the date-calculation task presented in Section 3, we apply BP, BPWD, and BP-SOM to the parity-n task. We have selected n to be 12. The training set contained 1,000 different instances selected at random out of the set of 4,096 possible bit strings. The test set and the validation set contained 100 new instances each. The hidden layer of the MFN in all three algorithms contained 20 hidden units, and the SOM in BP-SOM contained 7 x 7 elements. The algorithms were run with 10 different random weight initializations. Table 1 displays the classification errors on training instances and test instances. Analysis of the results shows that BP-SOM performs significantly better than BP and BPWD on test material (t(19) = 3.42, p < 0.01 and t(19) = 2.42, p < 0.05, respectively). (The average error of 6.2% made by BP-SOM stems from a single experiment out of the ten performing at chance level, with the remaining nine yielding about 1% error.) BP-SOM is able to learn the parity-12 task quite accurately; BP and BPWD fail, relatively, which is consistent with other findings [11]. As an additional analysis, we have investigated the differences in hidden-unit activations after training with the three learning algorithms.
To visualize the differences between the representations developed at the hidden layers of the MFNs trained with BP, BPWD, and BP-SOM, we also trained SOMs on the hidden-layer activities of the trained BP and BPWD networks. The left part of Figure 2 visualizes the class labelling of the SOM attached to the BP-trained MFN after training; the middle part visualizes the SOM of the BPWD-trained MFN, and the right part displays the SOM of the BP-SOM network after training on the same material. The SOM of the BP-SOM network is much more organized and clustered than the SOMs corresponding to the BP-trained and BPWD-trained MFNs. The reliability values of the elements of all three SOMs are represented by the width of the black and white squares. It can be seen that the overall reliability and the degree of clusteredness of the SOM of the BP-SOM network is considerably higher than that of the SOMs of the BP-trained and BPWD-trained MFNs.

5 How parity-12 is learned

Given the hardness of the task and the supposed necessity of expert guidance, and given BP-SOM's success in learning parity-12 in contrast, it is relevant to analyze what solution was found in the BP-SOM learning process. In this section we provide such an analysis, and show that the trained network performs an elegant checksum calculation at the hidden layer as the intermediary step. All elements of the SOMs of BP-SOM networks trained on the parity-12 task are either the prototype for training instances that are all labeled with the same class, or the prototype of no instances at all.

[Table 2: List of some training instances of the parity-12 task associated with elements (1,1), (2,4), and (3,3) of a trained BP-SOM network. Each row lists the 12 input bits of an instance together with its checksum. Element (1,1) is class-even with reliability 1.0, element (2,4) is class-odd with reliability 1.0, and element (3,3) is class-even with reliability 1.0. The sign markers at the bottom of the table read - - - + + + - - - + + + for input units 1 through 12.]

Non-empty elements (the black and white squares in the right part of Figure 2) can thus be seen as containers of homogeneously-labeled subsets of the training set (i.e., fully reliable elements). The first step of our analysis consists of collecting, after training, for each non-empty SOM element all training instances clustered at that element. As an illustration, Table 2 lists some training instances clustered at the SOM elements at coordinates (1,1), (2,4), and (3,3). At first sight the only common property of instances associated with the same SOM element is the class to which they belong; e.g., all instances of SOM element (1,1) are even, all instances of SOM element (2,4) are odd, and all instances of SOM element (3,3) are again even. The second step of our analysis focuses on the sign of the weights of the connections between input and hidden units. Surprisingly, we find that the connections of each individual input unit to all hidden units have the same sign; each input unit can therefore be labeled with a sign marker (as displayed at the bottom of Table 2). This allows the clustering on the SOM to become interpretable.
All weights from input units 1, 2, 3, 7, 8, and 9 to all units of the hidden layer are negative; all weights from input units 4, 5, 6, 10, 11, and 12 to all units of the hidden layer are positive. At the hidden layer, this information is gathered as if a checksum is computed: each SOM element contains instances that add up to an identical checksum. This can already be seen using only the sign information rather than the specific weights. For instance, all instances clustered at SOM element (1,1) lead to a checksum of -2 when a sum is taken of the product of all input values with all weight signs. Analogously, all instances of cluster (2,4) count up to -1, and the instances of cluster (3,3) to zero. The same regularity is present in the instances of the other SOM elements. In sum, the BP-SOM solution to the parity-12 task can be interpreted as transforming it at the hidden layer into the mapping of different, approximately discrete, checksums to either class 'even' or 'odd'. (A numerical sketch of this checksum computation is given after the references.)

6 Conclusions

We have performed two learning experiments in which the BP-SOM learning algorithm was trained on the date-calculation task and on the parity-12 task. Both tasks are hard to learn because they are statistically neutral, but they can be learned adequately and without expert guidance by the BP-SOM learning algorithm. The effect of the SOM part in BP-SOM (adequately constrained hidden-layer vectors, reliable clustering of vectors on the SOM, and streamlined hidden-unit activations) clearly contributes to this success.

From the results of the experiments on the date-calculation task, we conclude that Norris' claim that, without human assistance, a backpropagation net would never learn the date-calculation task is inaccurate. While BP with weight decay performs at Norris' target level of accuracy, BP-SOM performs even better. Apparently BP-SOM is able to distribute its learning throughout the net such that the two parts of the network (from input layer to hidden layer, and from hidden layer to output layer) perform the mapping with an appropriate intermediary step. The parity-12 experiment exemplified that such a discovered intermediary step can be quite elegant; it consists of the computation of a checksum via the connection weights between the input and hidden layers. Unfortunately, a similarly elegant simplicity was not found in the connection weights and SOM clustering of the date-calculation task; future research will be aimed at developing more generic analyses for trained BP-SOM networks, so that automatically-discovered intermediary steps may be made understandably explicit.

References

[1] Newell, A. and Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
[2] Norris, D. (1989). How to build a connectionist idiot (savant). Cognition, 35, 277-291.
[3] Hill, A. L. (1975). An investigation of calendar calculating by an idiot savant. American Journal of Psychiatry, 132, 557-560.
[4] Dehaene, S., Bossini, S., and Giraux, P. (1993). The mental representation of parity and numerical magnitude. Journal of Experimental Psychology: General, 122, 371-396.
[5] Weijters, A., Van den Bosch, A., and Van den Herik, H. J. (1997). Behavioural aspects of combining backpropagation learning and self-organizing maps. Connection Science, 9, 235-252.
[6] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart and J. L.
McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations (pp. 318-362). Cambridge, MA: The MIT Press.
[7] Kohonen, T. (1989). Self-Organization and Associative Memory. Berlin: Springer-Verlag.
[8] Hinton, G. E. (1986). Learning distributed representations of concepts. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society, 1-12. Hillsdale, NJ: Erlbaum.
[9] Prechelt, L. (1994). Proben1: A set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Fakultät für Informatik, Universität Karlsruhe, Germany.
[10] Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.
[11] Thornton, C. (1996). Parity: the problem that won't go away. In G. McCalla (Ed.), Proceedings of AI-96, Toronto, Canada (pp. 362-374). Berlin: Springer-Verlag.
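The checksum solution analyzed in Section 5 above can be verified directly. The sketch below (ours; only the sign pattern is taken from the paper) applies the fixed sign mask to all 4,096 bit strings and confirms that the signed checksum determines parity: since s_i * b_i is congruent to b_i (mod 2) for s_i in {-1, +1}, the checksum is even exactly when the number of 1's is even. A readout that maps each discrete checksum value to 'even' or 'odd' therefore implements parity-12 exactly, which is what the SOM clustering suggests the network discovered.

```python
import itertools
import numpy as np

# Sign markers recovered from the trained network (bottom row of Table 2):
# input units 1-3 and 7-9 project with negative weights; units 4-6 and
# 10-12 project with positive weights.
SIGNS = np.array([-1, -1, -1, +1, +1, +1, -1, -1, -1, +1, +1, +1])

def checksum(bits):
    """Signed sum of the inputs, as gathered at the hidden layer."""
    return int(np.dot(SIGNS, bits))

for bits in itertools.product([0, 1], repeat=12):
    b = np.array(bits)
    # Python's % returns a non-negative remainder, also for negative sums.
    assert checksum(b) % 2 == b.sum() % 2
print("checksum parity equals bit-count parity for all 4096 strings")
```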
An Analog VLSI Model of Periodicity Extraction

André van Schaik, Computer Engineering Laboratory J03, University of Sydney, NSW 2006, Sydney, Australia. andre@ee.usyd.edu.au

Abstract

This paper presents an electronic system that extracts the periodicity of a sound. It uses three analogue VLSI building blocks: a silicon cochlea, two inner-hair-cell circuits and two spiking neuron chips. The silicon cochlea consists of a cascade of filters. Because of the delay between two outputs from the silicon cochlea, spike trains created at these outputs are synchronous only for a narrow range of periodicities. In contrast to traditional bandpass filters, where an increase in selectivity has to be traded off against a decrease in response time, the proposed system responds quickly, independent of selectivity.

1 Introduction

The human ear transduces airborne sounds into a neural signal using three stages in the inner ear's cochlea: (i) the mechanical filtering of the Basilar Membrane (BM), (ii) the transduction of membrane vibration into neurotransmitter release by the Inner Hair Cells (IHCs), and (iii) spike generation by the Spiral Ganglion Cells (SGCs), whose axons form the auditory nerve. The properties of the BM are such that close to the entrance of the cochlea (the base) the BM is most sensitive to high frequencies, and at the apex the BM responds best to low frequencies. Along the BM the best frequency decreases in an exponential manner with distance along the membrane. For frequencies below a given point's best frequency the response drops off gradually, but for frequencies above the best frequency the response drops off rapidly (see Fig. 1b for examples of such frequency-gain functions).

An Inner Hair Cell senses the local vibration of a section of the Basilar Membrane. The intracellular voltage of an IHC resembles a half-wave-rectified version of the local BM vibration, low-pass filtered at 1 kHz. The IHC voltage has therefore lost its AC component almost completely for frequencies above about 4 kHz. Well below this frequency, however, the IHC voltage has a clear temporal structure, which will be reflected in the spike trains on the auditory nerve. These spike trains are generated by the spiral ganglion cells. The SGCs spike with a probability roughly proportional to the instantaneous inner-hair-cell voltage. Therefore, for the lower sound frequencies, the spectrum of the input waveform is not only encoded in the form of an average spiking rate of different fibers along the cochlea (place coding), but also in the periodicity of spiking of the individual auditory nerve fibers. It has been shown that this periodicity information is a much more robust cue than the spatial distribution of average firing rates [1]. Some periodicity information can already be detected at intensities 20 dB below the intensity needed to obtain a change in average rate. Periodicity information is retained at intensities in the range of 60-90 dB SPL, for which the average rate of the majority of the auditory nerve fibers is saturated. Moreover, the positions of the fibers responding best to a given frequency move with changing sound intensity, whereas the periodicity information remains constant. Furthermore, the frequency selectivity of a given fiber's spiking rate is drastically reduced at medium and high sound intensities. The robustness of periodicity information makes it likely that the brain actually uses this information.
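As a rough functional sketch of stages (ii) and (iii) (our simplification for illustration; the parameter values and the one-pole filter are assumptions, not the circuits used later in this paper): the IHC is approximated by half-wave rectification followed by a first-order low-pass at about 1 kHz, and an SGC by spiking with a per-sample probability proportional to the instantaneous IHC voltage.

```python
import numpy as np

def ihc(bm_signal, fs, fc=1000.0):
    """Toy inner hair cell: half-wave rectify the basilar-membrane signal,
    then smooth it with a first-order low-pass filter at fc."""
    rectified = np.maximum(bm_signal, 0.0)
    alpha = 1.0 - np.exp(-2.0 * np.pi * fc / fs)  # one-pole coefficient
    v, out = 0.0, np.empty_like(rectified)
    for i, r in enumerate(rectified):
        v += alpha * (r - v)
        out[i] = v
    return out

def sgc_spikes(v, fs, gain=400.0, rng=np.random.default_rng(0)):
    """Toy spiral ganglion cell: per-sample spike probability proportional
    to the instantaneous IHC voltage (gain in nominal spikes/second)."""
    p = np.clip(gain * v / fs, 0.0, 1.0)
    return rng.random(v.shape) < p

fs = 32000
t = np.arange(0, 0.05, 1.0 / fs)
bm = np.sin(2 * np.pi * 500 * t)             # a 500 Hz tone on the membrane
spikes = sgc_spikes(ihc(bm, fs), fs)
print(int(spikes.sum()), "spikes in 50 ms")  # phase-locked to positive half-cycles
```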
2 Modelling periodicity extraction

Several models have been proposed that extract periodicity information using the phase encoding of fibers connected to the same inner hair cell, or that use the synchronicity of firing on auditory nerve fibers connected to different inner hair cells (see [2] for four examples of these models). The simplest of the phase-encoding schemes correlates the output of the cochlea at a given position with a delayed version of itself. It is easy to see that for pure tones, the equality sin(2πft) = sin(2πf(t - Δ)) holds only for frequencies that are a multiple of 1/Δ; i.e., only for these frequencies are the signals in perfect synchrony and thus perfectly correlated. We can adapt the delay Δ to each cochlear output, so that 1/Δ equals the best frequency of that cochlear output. In this case higher multiples of 1/Δ will be suppressed, due to the very steep cut-off of the cochlear filters for frequencies above the best frequency. Each synchronicity detector will then only be sensitive to the best frequency of the filter to which it is connected.

If we code the direct signal and the delayed signal as two spike trains, with one spike per period at a fixed phase each, it becomes a very simple operation to detect the synchronicity. A simple digital AND operator is enough to detect overlap between two spikes. These spikes will overlap perfectly when f = 1/Δ, but some overlap will still be present for frequencies close to 1/Δ, since the spikes have a finite width. The bandwidth of the AND output can thus be controlled by the spike width. It is possible to create a silicon implementation of this scheme using an artificial cochlea, an IHC circuit, and a spiking neuron circuit, together with additional circuits to create the delays. A chip along these lines has been developed by John Lazzaro [3] and functioned correctly. A disadvantage of this scheme, however, is that the delay associated with a cochlear output has to be matched to the inverse of the best frequency of that cochlear output. For a cochlea whose best frequency changes exponentially with filter number in the cascade, from 4 kHz (the upper range of phase locking on the auditory nerve) to 100 Hz, we would have to create delays that range from 0.25 ms to 10 ms. In the brain, such a large variation in delays is unlikely to be provided by an axonal delay circuit, because it would require an excessively large variation in axon length.

A possible solution comes from the observation that the phase of a pure tone of a given frequency on the basilar membrane increases from base to apex, and the phase changes rapidly around the best frequency. The silicon cochlea, which is implemented as a cascade of second-order low-pass filters (Fig. 1a), also functions as a delay line, and each filter adds a delay which corresponds to π/2 at the cut-off frequency of that filter. If we assume that filter i and filter i-4 have the same cut-off frequency (which is not the case), the delay between the outputs of both filters will correspond to a full period (2π) at the cut-off frequency.
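The essence of this scheme (one spike per period at a fixed phase on each channel, a relative delay Δ, and a digital AND) can be illustrated in a few lines (our idealization with rectangular spikes; the values of Δ, the spike width and the sampling rate are arbitrary choices, not those of the chip):

```python
import numpy as np

FS = 1_000_000          # 1 MHz sampling of the idealized spike trains
DUR = 0.05              # 50 ms of signal

def spike_train(f, delay, width):
    """One rectangular spike of 'width' seconds per period of frequency f,
    offset by 'delay' seconds."""
    t = np.arange(0, DUR, 1.0 / FS)
    phase = ((t - delay) * f) % 1.0   # position within each period
    return phase < width * f

def coincidences(f, delta=1 / 930.0, width=50e-6):
    """AND the direct train with a copy delayed by delta (here 1/930 s)."""
    return int(np.sum(spike_train(f, 0.0, width) & spike_train(f, delta, width)))

for f in (880, 905, 930, 955, 980):   # sweep around the 930 Hz target
    print(f, coincidences(f))         # peaks at f = 1/delta = 930 Hz
```

In the silicon implementation, Δ is not a separate delay element but the accumulated phase lag of the four filter sections between the two cochlear taps, as described next.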
In reality, the filters along the cochlea will have different cut-off frequencies, as shown in Fig. 1. Here we show the accumulated gain at the outputs i and i-4 (Fig. 1b), and the delay added by each individual filter between these two outputs (Fig. 1c) as a function of frequency (normalized to the cut-off frequency of filter i). The solid vertical line represents this cut-off frequency, and we can see that only filter i adds a delay of 7t/2, and the other filters add less. However, if we move the vertical line to the right (indicated by the dotted vertical line), the delay added by each filter will increase relatively quickly, and at some frequency slightly higher than the cutoff frequency of filter i, the sum of the delays will become 27t (dashed line). At this frequency neither filter i nor filter i-4 has maximum gain, but if the cut-off frequency of both filters is not too different, the gain will stiII be high enough for both filters at the correlator frequency to yield output signals with reasonable amplitudes. The improved model can be implemented using building blocks as shown in Fig. I d. Each of these building blocks have previously been presented (refer to [4] for additional details). The silicon cochlea is used to filter and delay the signal, and has been adjusted so that the cut-off frequency decreases by one octave every twenty stages, so that the cut-off frequencies of neighboring filters are almost equal. The IHC circuit half-wave rectifies the signal in the implementation of Figure Id. The low-pass filtering of the biological Inner Hair Cell can be ignored for frequencies below the approximately I kHz cut-off frequency of the cell. Since we limited our measurements to this range, the low-pass filtering has not been modeled by the circuit. Two chips containing electronic leaky-integrate-and-fire neurons have been used to create the two spike trains. In the first series of measurements, each chip generates exactly one spike per period of the input signal. A final test will set the 32 neurons on each chip to behave more like biological spiral ganglion cells and the effect on periodicity extraction will be shown. A digital AND gate is used to compare the output spikes of the two chips, and the spike rate at the output of the AND gate is the measure of activity used. 3 Test results The first experiment measures the number of spikes per second at the output of the AND gate as a function of input frequency, using different cochlear filter combinations. Twelve filter pairs have been measured, each combining a filter 741 An Analog VLSI Model of Periodicity Extraction output with the output of a filter four sections earlier in the cascade. The best frequency of the filter with the lowest best frequency of the pairs ranged from 200 Hz to 880 Hz. The results are shown in Fig. 2a. 1S00 ._ _. _ __ _ 1.2 a b ~ 0.8 7S0 j 0.4 Q. ?O~~~~LU~~4-~__~~ 400 200 600 800 frequency (Hz) 1000 1200 ./ o 200 frequency (log scale) 2000 Figure 2: a) measured output rate at different cochlear positions, and b) spike rate normalized to best input frequency, plotted on a log frequency scale. The maximum spike rate increases approximately linearly with frequency; this is to be expected, since we will have approximately one spike per signal period. Furthermore the best response frequencies of the filters sensitive to higher frequencies are further apart, due to the exponential scaling of the frequencies along the cochlea. 
Finally, a given time delay corresponds to a larger phase delay for the higher frequencies, so that the absolute bandwidth of the coincidence detectors, I.e., the range of input frequencies to which they respond, is larger. When we normalize the spike rate and plot the curves on a logarithmic frequency scale, as in Fig. 2b, we see that the best frequencies of the correlators follow the exponential scaling of the best frequencies of the cochlear filters, and that the relative bandwidth is fairly constant. 1200 , -..___._________ spike a _..~, ~I';"" ra te !, 600 'J\ I -,-_--.".-,~ 1---- 20mV _ _ 30mV 1'----- 40mV . I 1200 ._...._.. __.._. __._... _.__ .__....._...... _____._._. spike rate b 1----20mV ! _ _ 30mV ' 600 ! , , ,, ? I : o~ SS8 _____~J~'-~______~ 6S1 744 837 9301023111612091302 Input frequency (Hz) O~~~~~~_~~~~~ 558 651 744 837 930 1023111612091302 carrier frequency (Hz) Figure 3: Frequency selectivity for different input intensities. a) pure tones b) AM signals. Using the same settings as in the previous experiment, the output spike rate of the system for different input amplitudes has been measured, using the cochlear filter pair with best frequencies of 710Hz and 810Hz. In principle, the amplitude of the input signal should have no effect on the output of the system, since the system only uses phase information. However, this is only true if the spikes are always created at the same phase of the output signal of the cochlear filters, for instance at the peak, or the zero crossing. Fig. 3 shows however that the resulting filter selectivity shifts to lower frequencies for higher intensity input signals. This is a result of the way the spikes are created on the neuron chip. The neurons have been adjusted to spike once per period, but the phase at which they spike with respect to the half-wave-rectified waveform depends on the integration time of the neuron, which is the time needed with a given input current to reach the spike threshold voltage from the zero resting voltage. This time depends on the amplitude of the input current, which in tum is proportional to the amplitude of the input signal. Since the amplitude gain of the two cochlear filters used is not the same, the amplitude of the current input to the two neuron chips is different. Therefore, they do not spike at the same phase with respect to their respective input waveforms. This causes the frequency selectivity of the system to shift to lower frequencies with increasing intensity. However, this is an artifact of the spike generation used to 742 A. v. Schaik simplify the system. On the auditory nerve, spikes arrive with a probability roughly proportional to the half-wave rectified wavefonn. The most probable phase for a spike is therefore always at the maximum of the wavefonn, independent of intensity. In such a system, the frequency selectivity will therefore be independent of amplitude. A second advantage of coding (at least half of) the wavefonn in spike probability is that it does not assume that the input wavefonn is sinusoidal. Coding a wavefonn with just one spike per period can only code the frequency and phase of the wavefonn, but not its shape. A square wave and a sine wave would both yield the same spike train. We will discuss the "auditory-nerve-like" coding at the end of this section. To test the model with a more complex waveform, a 930 Hz sine wave 100% amplitude-modulated at 200 Hz generated on a computer has been used. 
The carrier frequency was varied by playing the whole wavefonn a certain percentage slower or faster. Therefore the actual modulation frequency changes with the same factor as the carrier frequency. The results of this test are shown in Fig. 3b for three different input amplitudes. Compared to the measurements in Fig. 3a, we see that the filter is less selective and centered at a higher input frequency. The shift towards a higher frequency can be explained by the fact that the average amplitude of a half-wave rectified amplitude modulated signal is lower than in for a half-wave rectified pure tone with the same maximum amplitude. Furthennore, the amplitude of the positive half-cycle of the output of the IHC circuit changes from cycle to cycle because of the amplitude modulation. We have seen that the amplitude of the input signal changes the frequency for which the two spike trains are synchronous, which means that the frequency which yields the best response changes from cycle to cycle with a periodicity equal to the modulation frequency. This introduces a sort of "roaming" of the frequencies in the input signa], effectively reducing the selectivity of the filters. Finally, because of the 100% depth of the amplitude modulation, the amplitude of the input will be too low during some cycles to create a spike, which therefore reduces the total number of spikes which can coincide. Fig. 3b shows that this model detects periodicity and not spectral content. The spectrum of a 930 Hz pure tone 100% amplitude modulated at 200 Hz contains, apart from a 930 Hz carrier component, components both at 730 Hz and 1130 Hz, with half the amplitude of the carrier component. When the speed of the wavefonn playback is varied so that the carrier frequency is either 765 Hz or 1185 Hz, one of these spectral side bands will be at 930 Hz, but the system does not respond at these carrier frequencies. This is explained by the fact that the periodicity of the zero crossings, and thus of the positive half cycles of the IHC output, is always equal to the carrier frequency. Traditional band-pass filters with a very high quality factor (Q) can also yield a narrow pass-band, but their step response takes about ] .5Q cycles at the center frequency to reach steady state. The periodicity selectivity of the synchronicity detector shown in Fig. 3a corresponds to a quality factor of ] 4; a traditional bandpass filter would take about 2] cycles of the 930Hz input signal to reach 95% of it's final output value. Fig. 4 shows the temporal aspect of the synchronicity detection in our system. The top trace in this figure shows the output of the cochlear filter with the highest best frequency (index i-4 in Fig. 1) and the spikes generated based on this output. The second trace shows the same for the output of the cochlear filter with the lower best frequency (index i in Fig. ]). The third trace shows the output of the AND gate with the above inputs, which are slightly above its best periodicity. Coincidences are detected at the onset of the tone, even when it is not of the correct periodicity, but only for the first one or two cycles. The bottom trace shows the output of the AND gate for an input at best frequency. The system thus responds to the presence of a pure tone of the correct periodicity after only a few cycles, independent of the filters selectivity. 743 An Analog VLSI Model of Periodicity Extraction I--~ ~ I b I J I timeI (ms)I Figure 4: Oscilloscope traces of the temporal aspect of synchronicity detection. 
The vertical scale is 20 mV per square for the cochlear outputs; the spikes are 5 V in amplitude.

To show this more dramatically, we have reduced the spike width to 10 µs, to obtain a high periodicity selectivity as shown in Fig. 5a. The bandwidth of this filter is only 20 Hz at 930 Hz, equivalent to a quality factor of 46.5. A traditional filter with such a quality factor would only settle 70 cycles after the onset of the signal, whereas the periodicity detector still settles after the first few cycles, as shown in Fig. 5b. We can compare this result with the response of a classic RLC band-pass filter with a 930 Hz center frequency and a quality factor of 46.5, as shown in Fig. 6. After 18 cycles of the input signal, the output of the band-pass filter has only reached 65% of its final value. Thresholding the RLC output could signal the presence of a periodicity faster, but it would then still respond very slowly to the offset of the tone, as the RLC filter will continue ringing after the offset. (A numerical check of this settling-time comparison is sketched after the references.)

Figure 5: a) Frequency selectivity with a 10 µs spike width. b) Cochlear output (top, 40 mV scale) and coincidences (bottom) for a signal at best frequency.

Figure 6: Simulated response of the RLC band-pass filter. a) Frequency selectivity; b) transient response (scale units are 40 mV).

In the previous experiments we simplified the model to use one spike per period, in order to understand the principle behind the periodicity detection. However, we have seen that this implementation leads to a shift in best periodicity with changing amplitude, because the phase at which the 'single neuron' spikes changes with intensity. Now we will change the settings to be more realistic, so that each of the 32 neurons cannot spike at every period, and we will reduce the output gain of the IHC circuit so that the neurons receive less signal current, and thus have a lower input SNR. The resulting spike distribution is a better simulation of the spike distribution on the auditory nerve. This is shown in Fig. 7 for a group of 32 neurons stimulated by an IHC circuit connected to a single cochlear output. The bottom trace shows the sum of spikes over the 32 neurons on an arbitrary scale. When we
4 Conclusions In this paper we have presented a neural system for periodicity detection implemented with three analogue VLSI building blocks. The system uses the delay between the outputs at two points along the cochlea and synchronicity of the spike trains created from these cochlear outputs to detect the periodicity of the input signal. An especially useful property of the cochlea is that the delay between two points a fixed distance apart corresponds to a full period at a frequency that scales in the same way as the best frequency along the cochlea, I.e., decreases exponentially. If we always create spikes at the same phase of the output signal at each filter, or simply have the highest spiking probability for the maximum instantaneous amplitude of the output signal, then both outputs will only have synchronous spikes for a certain periodicity, and we can easily detect this synchronicity with coincidence detectors. This system offers a way to obtain very selective filters using spikes. Even though they react to a very narrow range of periodicities, these filters are able to react after only a few periods. Furthermore, the range of periodicities it responds to can be made independent of input intensity, which is not the case with the cochlear output itself. This clearly demonstrates the advantages of using spikes in the detection of periodicity. Acknowledgements The author thanks Eric Fragniere, Eric Vittoz and the Swiss NSF for their support. References [1] Evans, "Functional anatomy of the auditory system," in Barlow and MoHon (editors), The Senses, Cambridge University Press, pp. 251-306, 1982. [2] Seneff, Shamma, Deng, & Ghitza, Journal of Phonetics, Vol. 16, pp. 55-123, 1988. [3] Lazzaro, "A silicon model of an auditory neural representation of spectral shape." IEEE Journal of Solid-State Circuits, Vol. 26, No.5, pp. 772-777, 1991. [4] van Schaik, "An Analogue VLSI Model of Periodicity Extraction in the Human Auditory System," to appear in Analog Integrated Circuits and Signal Processing, Kluwer, 2000. PART VI SPEECH, HANDWRITING AND SIGNAL PROCESSING
853
1,782
Evolving Learnable Languages

Bradley Tonkes
Dept of Comp. Sci. and Elec. Engineering
University of Queensland
Queensland, 4072 Australia
btonkes@csee.uq.edu.au

Alan Blair
Department of Computer Science
University of Melbourne
Parkville, Victoria, 3052 Australia
blair@cs.mu.oz.au

Janet Wiles
Dept of Comp. Sci. and Elec. Engineering, School of Psychology
University of Queensland
Queensland, 4072 Australia
janetw@csee.uq.edu.au

Abstract

Recent theories suggest that language acquisition is assisted by the evolution of languages towards forms that are easily learnable. In this paper, we evolve combinatorial languages which can be learned by a recurrent neural network quickly and from relatively few examples. Additionally, we evolve languages for generalization in different "worlds", and for generalization from specific examples. We find that languages can be evolved to facilitate different forms of impressive generalization for a minimally biased, general-purpose learner. The results provide empirical support for the theory that the language itself, as well as the language environment of a learner, plays a substantial role in learning: that there is far more to language acquisition than the language acquisition device.

1 Introduction: Factors in language learnability

In exploring issues of language learnability, the special abilities of humans to learn complex languages have been much emphasized, with one dominant theory based on innate, domain-specific learning mechanisms specifically tuned to learning human languages. It has been argued that without strong constraints on the learning mechanism, the complex syntax of language could not be learned from the sparse data that a child observes [1]. More recent theories challenge this claim and emphasize the interaction between learner and environment [2]. In addition to these two theories is the proposal that rather than "language-savvy infants", languages themselves adapt to human learners, and the ones that survive are "infant-friendly languages" [3-5]. To date, relatively few empirical studies have explored how such adaptation of language facilitates learning. Hare and Elman [6] demonstrated that classes of past tense forms could evolve over simulated generations in response to changes in the frequency of verbs, using neural networks. Kirby [7] showed, using a symbolic system, how compositional languages are more likely to emerge when learning is constrained to a limited set of examples. Batali [8] has evolved recurrent networks that communicate simple structured concepts. Our argument is not that humans are general-purpose learners. Rather, current research questions require exploring the nature and extent of biases that learners bring to language learning, and the ways in which languages exploit those biases [2]. Previous theories suggesting that many aspects of language were unlearnable without strong biases are gradually breaking down as new aspects of language are shown to be learnable with much weaker biases. Studies include the investigation of how languages may exploit biases as subtle as attention and memory limitations in children [9]. A complementary study has shown that general-purpose learners can evolve biases in the form of initial starting weights that facilitate the learning of a family of recursive languages [10]. In this paper we present an empirical paradigm for continuing the exploration of factors that contribute to language learnability.
The paradigm we propose necessitates the evolution of languages comprising recursive sentences over symbolic strings — languages whose sentences cannot be conveyed without combinatorial composition of symbols drawn from a finite alphabet. The paradigm is not based on any specific natural language; rather, it is the simplest task we could find to illustrate the point that languages with compositional structure can be evolved to be learnable from few sentences. The simplicity of the communication task allows us to analyze the language and its generalizability, and highlight the nature of the generalization properties. We start with the evolution of a recursive language that can be learned easily from five sentences by a minimally biased learner. We then address issues of robust learning of evolved languages, showing that different languages support generalization in different ways. We also address a factor to which scant regard has been paid, namely that languages may evolve not just to their learners, but also to be easily generalizable from a specific set of concepts. It seems almost axiomatic that learning paradigms should sample randomly from the training domain. It may be that human languages are not learnable from random sentences, but are easily generalizable from just those examples that a child is likely to be exposed to in its environment. In the third series of simulations, we test whether a language can adapt to be learnable from a core set of concepts.

2 A paradigm for exploring language learnability

We consider a simple language task in which two recurrent neural networks try to communicate a "concept" represented by a point in the unit interval, [0, 1], over a symbolic channel. An encoder network sends a sequence of symbols (thresholded outputs) for each concept, which a decoder network receives and processes back into a concept (the framework is described in greater detail in [11]). For communication to be successful, the decoder's output should approximate the encoder's input for all concepts. The architecture for the encoder is a recurrent network with one input unit and five output units, and with recurrent connections from both the output and hidden units back to the hidden units. The encoder produces a sequence of up to five symbols (states of the output units) taken from Σ = {A, ..., J}, followed by the $ symbol, for each concept taken from [0, 1].

Figure 1: Hierarchical decomposition of the language produced by an encoder, with the first symbols produced appearing near the root of the tree. The ordering of leaves in the tree represents the input space, smaller inputs being encoded by those sentences on the left. The examples used to train the best decoder found during evolution are highlighted. The decoder must generalize to all other branches.

In order to learn the task, the decoder must generalize systematically to novel states in the tree, including generalizing to symbols in different positions in the sequence. (Figure 2 shows the sequence of states of a successful decoder.) To encode a value x ∈ [0, 1], the network
is presented with a sequence of inputs (x, 0, 0, ...). At each step, the output units of the network assume one of eleven states: all zero if no output is greater than 0.5 (denoted by $); or the saturation of the two highest activations at 1.0 and the remainder at 0.0 (denoted by A = [1,1,0,0,0] through J = [0,0,0,1,1]). If the zero output is produced, propagation is halted. Otherwise propagation continues for up to five steps, after which the output units assume the zero ($) state. The decoder is a recurrent network with 5 input units and a single output, and a recurrent hidden layer. Former work [11] has shown that, due to conflicting constraints of the encoder and decoder, it is easier for the decoder to process strings which are in the reverse order to those produced by the encoder. Consequently, the input to the decoder is taken to be the reverse of the output from the encoder, except for $, which remains the last symbol. (For clarity, strings are written in the order produced by the encoder.) Each input pattern presented to the decoder matches the output of the encoder — either two units are active, or none are. The network is trained with backpropagation through time to produce the desired value, x, on presentation of the final symbol in the sequence ($).

A simple hill-climbing evolutionary strategy with a two-stage evaluation function is used to evolve an initially random encoder into one which produces a language which a random decoder can learn easily from few examples. The evaluation of an encoder, mutated from the current "champion" by the addition of Gaussian noise to the weights, is performed against two criteria: (1) the mutated network must produce a greater variety of sequences over the range of inputs; and (2) a decoder with initially small random weights, trained on the mutated encoder's output, must yield lower sum-squared error across the entire range of inputs than the champion. Each mutant encoder is paired with a single decoder with initially random weights. If the mutant encoder-decoder pair is more successful than the champion, the mutant becomes champion and the process is repeated. Since the encoder's input space is continuous and impossible to examine in its entirety, the input range is approximated with 100 uniformly distributed examples from 0.00 to 0.99. The final output from the hill-climber is the language generated by the best encoder found.
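A minimal sketch of this hill-climbing loop, under stated assumptions: `encode` and `train_decoder` are hypothetical stand-ins for the recurrent encoder and the backprop-through-time decoder training described above, the mutation scale is a guess, and the acceptance test is one plausible reading of the paper's two-stage criterion.

    import numpy as np

    # Hypothetical stand-ins (assumptions, not the authors' code):
    # encode(weights, x)      -> symbol string the encoder emits for concept x
    # train_decoder(language) -> sum-squared error of a fresh random decoder

    def mutate(weights, sigma=0.05):
        """Perturb every weight with Gaussian noise (sigma is a guess)."""
        return weights + np.random.normal(0.0, sigma, size=weights.shape)

    def evaluate(weights, inputs, encode, train_decoder):
        """Two-stage evaluation: sequence variety, then decoder error."""
        language = {x: encode(weights, x) for x in inputs}
        variety = len(set(language.values()))
        error = train_decoder(language)       # new random decoder each time
        return variety, error

    def hill_climb(init_weights, encode, train_decoder, generations=10000):
        inputs = np.linspace(0.0, 0.99, 100)  # 100 samples of [0, 1]
        champ = init_weights
        champ_variety, champ_error = evaluate(champ, inputs, encode, train_decoder)
        for _ in range(generations):
            mutant = mutate(champ)
            variety, error = evaluate(mutant, inputs, encode, train_decoder)
            # One plausible combination of the two criteria: at least as
            # varied a language, and a strictly easier one to learn.
            if variety >= champ_variety and error < champ_error:
                champ, champ_variety, champ_error = mutant, variety, error
        return champ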
2.1 Evolving an easily learnable language

Humans learn from sparse data. In the first series of simulations we test whether a compositional language can be evolved that learners can reliably and effectively learn from only five examples. From just five training examples, it seems unreasonable to expect that any decoder would learn the task. The task is intentionally hard in that a language is restricted to sequences of discrete symbols with which it must describe a continuous space. Note that simple linear interpolation is not possible due to the symbolic alphabet of the languages. Recursive solutions are possible but are unable to be learned by an unbiased learner. The decoder is a minimally-biased learner and, as the simulations showed, performed much better than arguments based on learnability theory would predict. Ten languages were evolved with the hill-climbing algorithm (outlined above) for 10000 generations (one generation comprising the creation of a mutated encoder and the subsequent training of a decoder). For each language, 100 new random decoders were trained under the same conditions as during evolution (five examples, 400 epochs).

All ten runs used encoders and decoders with five hidden units. All of the evolved languages were learnable by some decoders (minimum 20, maximum 72, mean 48). A learner is said to have effectively learned the language if its sum-squared error across the 100 points in the space is less than 1.0; a language is said to be reliably learnable when at least 50% of random decoders are able to effectively learn it within 400 epochs. Encoders employed on average 36 sentences (minimum 21, maximum 60) to communicate the 100 points. The 5 training examples for each decoder were sampled randomly from [0, 1] and hence some decoders faced very difficult generalization tasks. The difficulty of the task is demonstrated by the language analyzed in Figures 1 and 2. The evolved languages all contained similar compositional structure to that of the language described in Figures 1 and 2. The inherent biases of the decoder, although minimal, are clearly sufficient for learning the compositional structure.

3 Evolving languages for particular generalization

The first series of simulations demonstrate that we can find languages for which a minimally biased learner can generalize from few examples. In the next simulations we consider whether languages can be evolved to facilitate specific forms of generalization in their users. Section 2.1 considered the case where the decoder's required output was the same as the encoder's input. This setup yields the approximation to the line y = x in Figure 2. The compositional structure of the evolved languages allows the decoder to generalize to unseen regions of the space. In the following series of simulations we consider the relationship between the structure of a language and the way in which the decoder is required to generalize. This association is studied by altering the desired relationship between the encoder's input (x) and the decoder's output (y). Two sets of ten languages were evolved, one set requiring y = x (identity, as in section 2.1), the other using a function resembling a series of five steps at random heights: y = r(⌊5x⌋), where r = (0.3746, 0.5753, 0.8102, 0.7272, 0.4527) and ⌊5x⌋ indexes into the array r based on the magnitude of x (random step). All conditions were as for section 2, with the exception that 10 training examples were used and the hill-climber ran for 1000 generations. On completion of evolution, 100 decoders were trained on the 20 final languages under both conditions above as well as two others, a sine function and a cubic function.

Figure 2: Decoder output after seeing the first n symbols in the message, for n = 1 (a) to n = 6 (f) (from the language in Figure 1). The x-axis is the encoder's input; the y-axis is the decoder's output at that point in the sequence. The five points that the decoder was trained on are shown as crosses in each graph. After the first symbol (A, B, G, E or $), the decoder outputs one of five values (a); after the second symbol, more outputs are possible (b). Subsequent symbols in each string specify finer gradations in the output. Note that the output is not constructed monotonically, with each symbol providing a closer approximation to the target function, but rather recursively, only approximating the linear target at the final position in each sequence. Structure inherent in the sequences allows the system to generalize to parts of the space it has never seen. Note that the generalization is not based on interpolation between symbol values, but rather on their compositional structure.

The results show that languages can be evolved to enhance generalization preferentially for one "world" over another. On average, the languages performed far better when tested in the world in which they were evolved than in other worlds. Languages evolved for the identity mapping were on average learned by 64% of decoders trained on the identity task compared with just 5% in the random step case. Languages evolved for the random step task were learned by 60% of decoders trained on the random step task but only 24% when trained on the identity task. Decoders generally performed poorly on the cubic function, and no decoder learned the sine task from either set of evolved languages. The second series of simulations show that the manner in which the decoder generalizes is not restricted to the task of section 2.1. Rather, the languages evolve to facilitate generalization by the decoder in different ways, aided by its minimal biases.
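For reference, the two target "worlds" defined in this section can be written down directly; a minimal sketch (the clamping of the index at 4 is a defensive assumption for an input of exactly 1.0, which the 0.00-0.99 grid never produces):

    import numpy as np

    r = np.array([0.3746, 0.5753, 0.8102, 0.7272, 0.4527])

    def identity_world(x):
        """y = x, the mapping used in section 2.1."""
        return np.asarray(x, dtype=float)

    def random_step_world(x):
        """y = r(floor(5x)): five steps at random heights (section 3)."""
        idx = np.minimum((5 * np.asarray(x)).astype(int), len(r) - 1)
        return r[idx]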
4 Learning from core concepts

In the former simulations, randomly selected concepts were used to train decoders. In some cases a pathological distribution of points made learning extremely difficult. In contrast, it seems likely that human children learn language based on a common set of semantically-constrained core concepts ("Mom", "I want milk", "no", etc.). For the third series of simulations, we tested whether selecting a fortuitous set of training concepts could have a positive effect on the success of an evolved language. The simulations with alternative generalization functions (section 3) indicated that decoders had difficulty generalizing to the sine function. Even when encoders were evolved specifically on the sine task, in the best of 10 systems only 13 of 100 random decoders successfully learned. We evolved a new language on a specifically chosen set of 10 points for generalization to the sine function. One hundred decoders were then trained on the resulting language using either the same set of 10 points, or a random set. Of the networks trained on the fixed set, 92 learned the task, compared with 5 networks trained on the random sets. That a language evolves to communicate a restricted set of concepts is not particularly unusual. But what this simulation shows is the more surprising result that a language can evolve to generalize from specific core concepts to a whole recursive language in a particular way (in this case, a sine function).

5 Discussion

The first series of simulations show that a compositional language can be learned from five strings by a recurrent network. Generalization performance included correct decoding of novel branches and symbols in novel positions (Figure 1). The second series of simulations highlight how a language can be evolved to facilitate different forms of generalization in the decoder. The final simulation demonstrates that languages can also be tailored to generalize from a specific set of examples.
The three series of simulations modify the language environment of the decoder in three different ways: (1) the relationship between utterances and meaning; (2) the type of generalization required from the decoder; and (3) the particular utterances and meanings to which a learner is exposed. In each case, the language environment of the learner was sculpted to exploit the minimal biases present in the learner. While taking an approach similar to [10] of giving the learner an additional bias in the form of initial weights was also likely to have been effective, the purpose of the simulations was to investigate how strongly external factors could assist in simplifying learning.

6 Conclusions

"The key to understanding language learnability does not lie in the richly social context of language training, nor in the incredibly prescient guesses of young language learners; rather, it lies in a process that seems otherwise far remote from the microcosm of toddlers and caretakers - language change. Although the rate of social evolutionary change in learning structure appears unchanging compared to the time it takes a child to develop language abilities, this process is crucial to understanding how the child can learn a language that on the surface appears impossibly complex and poorly taught." [3, p115].

In this paper we studied ways in which languages can adapt to their learners. By running simulations of a language evolution process, we contribute additional components to the list of aspects of language that can be learned by minimally-biased, general-purpose learners, namely that recursive structure can be learned from few examples, that languages can evolve to facilitate generalization in a particular way, and that they can evolve to be easily learnable from common sentences. In all the simulations in this paper, enhancement of language learnability is achieved through changes to the learner's environment without resorting to adding biases in the language acquisition device.

This work was supported by an APA to Bradley Tonkes, a UQ Postdoctoral Fellowship to Alan Blair and an ARC grant to Janet Wiles.

References
[1] N. Chomsky. Language and Mind. Harcourt, Brace, New York, 1968.
[2] J. L. Elman, E. A. Bates, M. H. Johnson, A. Karmiloff-Smith, D. Parisi, and K. Plunkett. Rethinking Innateness: A Connectionist Perspective on Development. MIT Press, Boston, 1996.
[3] T. W. Deacon. The Symbolic Species: The Co-Evolution of Language and the Brain. W. W. Norton and Company, New York, 1997.
[4] S. Kirby. Fitness and the selective adaptation of language. In J. Hurford, C. Knight, and M. Studdert-Kennedy, editors, Approaches to the Evolution of Language. Cambridge University Press, Cambridge, 1998.
[5] M. H. Christiansen. Language as an organism - implications for the evolution and acquisition of language. Unpublished manuscript, February 1995.
[6] M. Hare and J. L. Elman. Learning and morphological change. Cognition, 56:61-98, 1995.
[7] S. Kirby. Syntax without natural selection: How compositionality emerges from vocabulary in a population of learners. In C. Knight, J. Hurford, and M. Studdert-Kennedy, editors, The Evolutionary Emergence of Language: Social function and the origins of linguistic form. Cambridge University Press, Cambridge, 1999.
[8] J. Batali. Computational simulations of the emergence of grammar. In J. Hurford, C. Knight, and M. Studdert-Kennedy, editors, Approaches to the Evolution of Language, pages 405-426.
Cambridge University Press, Cambridge, 1998.
[9] J. L. Elman. Learning and development in neural networks: The importance of starting small. Cognition, 48:71-99, 1993.
[10] J. Batali. Innate biases and critical periods: Combining evolution and learning in the acquisition of syntax. In R. Brooks and P. Maes, editors, Proceedings of the Fourth Artificial Life Workshop, pages 160-171. MIT Press, 1994.
[11] B. Tonkes, A. Blair, and J. Wiles. A paradox of neural encoders and decoders, or, why don't we talk backwards? In B. McKay, X. Yao, C. S. Newton, J.-H. Kim, and T. Furuhashi, editors, Simulated Evolution and Learning, volume 1585 of Lecture Notes in Artificial Intelligence. Springer, 1999.
Predictive Sequence Learning in Recurrent Neocortical Circuits*

R. P. N. Rao
Computational Neurobiology Lab and Sloan Center for Theoretical Neurobiology
The Salk Institute, La Jolla, CA 92037
rao@salk.edu

T. J. Sejnowski
Computational Neurobiology Lab and Howard Hughes Medical Institute
The Salk Institute, La Jolla, CA 92037
terry@salk.edu

* This research was supported by the Sloan Foundation and Howard Hughes Medical Institute.

Abstract

Neocortical circuits are dominated by massive excitatory feedback: more than eighty percent of the synapses made by excitatory cortical neurons are onto other excitatory cortical neurons. Why is there such massive recurrent excitation in the neocortex and what is its role in cortical computation? Recent neurophysiological experiments have shown that the plasticity of recurrent neocortical synapses is governed by a temporally asymmetric Hebbian learning rule. We describe how such a rule may allow the cortex to modify recurrent synapses for prediction of input sequences. The goal is to predict the next cortical input from the recent past based on previous experience of similar input sequences. We show that a temporal difference learning rule for prediction used in conjunction with dendritic back-propagating action potentials reproduces the temporally asymmetric Hebbian plasticity observed physiologically. Biophysical simulations demonstrate that a network of cortical neurons can learn to predict moving stimuli and develop direction selective responses as a consequence of learning. The space-time response properties of model neurons are shown to be similar to those of direction selective cells in alert monkey V1.

1 INTRODUCTION

The neocortex is characterized by an extensive system of recurrent excitatory connections between neurons in a given area. The precise computational function of this massive recurrent excitation remains unknown. Previous modeling studies have suggested a role for excitatory feedback in amplifying feedforward inputs [1]. Recently, however, it has been shown that recurrent excitatory connections between cortical neurons are modified according to a temporally asymmetric Hebbian learning rule: synapses that are activated slightly before the cell fires are strengthened whereas those that are activated slightly after are weakened [2, 3]. Information regarding the postsynaptic activity of the cell is conveyed back to the dendritic locations of synapses by back-propagating action potentials from the soma. In this paper, we explore the hypothesis that recurrent excitation subserves the function of prediction and generation of temporal sequences in neocortical circuits [4, 5, 6]. We show that a temporal difference based learning rule for prediction applied to back-propagating action potentials reproduces the experimentally observed phenomenon of asymmetric Hebbian plasticity. We then show that such a learning mechanism can be used to learn temporal sequences, and the property of direction selectivity emerges as a consequence of learning to predict moving stimuli. Space-time response plots of model neurons are shown to be similar to those of direction selective cells in alert macaque V1.

2 TEMPORALLY ASYMMETRIC HEBBIAN PLASTICITY AND TEMPORAL DIFFERENCE LEARNING

To accurately predict input sequences, the recurrent excitatory connections in a network need to be adjusted such that the appropriate set of neurons are activated at each time step.
This can be achieved by using a "temporal-difference" (TD) learning rule [5, 7]. In this paradigm of synaptic plasticity, an activated synapse is strengthened or weakened based on whether the difference between two temporally-separated predictions is positive or negative. This minimizes the errors in prediction by ensuring that the prediction generated by the neuron after synaptic modification is closer to the desired value than before (see [7] for more details).

In order to ascertain whether temporally-asymmetric Hebbian learning in cortical neurons can be interpreted as a form of temporal-difference learning, we used a two-compartment model of a cortical neuron consisting of a dendrite and a soma-axon compartment. The compartmental model was based on a previous study that demonstrated the ability of such a model to reproduce a range of cortical response properties [8]. The presence of voltage-activated sodium channels in the dendrite allowed back-propagation of action potentials from the soma into the dendrite. To study plasticity, excitatory postsynaptic potentials (EPSPs) were elicited at different time delays with respect to postsynaptic spiking by presynaptic activation of a single excitatory synapse located on the dendrite. Synaptic currents were calculated using a kinetic model of synaptic transmission with model parameters fitted to whole-cell recorded AMPA currents (see [9] for more details). Synaptic plasticity was simulated by incrementing or decrementing the value for maximal synaptic conductance by an amount proportional to the temporal difference in the postsynaptic membrane potential at times t + Δt and t - Δt for presynaptic activation at time t. The delay parameter Δt was set to 5 ms to yield results consistent with previous physiological experiments [2]. Presynaptic input to the model neuron was paired with postsynaptic spiking by injecting a depolarizing current pulse (10 ms, 200 pA) into the soma. Changes in synaptic efficacy were monitored by applying a test stimulus before and after pairing, and recording the EPSP evoked by the test stimulus.

Figure 1A shows the results of pairings in which the postsynaptic spike was triggered 5 ms after and 5 ms before the onset of the EPSP respectively. While the peak EPSP amplitude was increased 58.5% in the former case, it was decreased 49.4% in the latter case, qualitatively similar to experimental observations [2]. The critical window for synaptic modifications in the model depends on the parameter Δt as well as the shape of the back-propagating action potential. This window of plasticity was examined by varying the time interval between presynaptic stimulation and postsynaptic spiking (with Δt = 5 ms). As shown in Figure 1B, changes in synaptic efficacy exhibited a highly asymmetric dependence on spike timing similar to physiological data [2]. Potentiation was observed for EPSPs that occurred between 1 and 12 ms before the postsynaptic spike, with maximal potentiation at 6 ms. Maximal depression was observed for EPSPs occurring 6 ms after the peak of the postsynaptic spike and this depression gradually decreased, approaching zero for delays greater than 10 ms. As in rat neocortical neurons, Xenopus tectal neurons, and cultured hippocampal neurons (see [2]), a narrow transition zone (roughly 3 ms in the model) separated the potentiation and depression windows.
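A minimal sketch of the temporal-difference update just described; the membrane-potential trace, learning rate, and discrete-time handling below are illustrative assumptions, not the authors' compartmental simulation:

    import numpy as np

    def td_weight_change(v, t_pre, dt_ms, eta=1e-3, step_ms=0.1):
        """TD-style update for a synapse activated at time t_pre (ms).

        v       : array of postsynaptic membrane potential samples (mV)
        t_pre   : time of presynaptic activation (ms)
        dt_ms   : temporal-difference offset, 5 ms in the paper
        eta     : learning rate (assumed value)
        step_ms : sampling interval of the trace (assumed value)

        The weight change is proportional to v(t_pre + dt) - v(t_pre - dt):
        positive if the back-propagating spike arrives just after the EPSP
        (potentiation), negative if it arrived just before (depression).
        """
        i_plus = int(round((t_pre + dt_ms) / step_ms))
        i_minus = int(round((t_pre - dt_ms) / step_ms))
        return eta * (v[i_plus] - v[i_minus])

    # Toy usage: a spike-like bump in the potential peaking at 20 ms.
    t = np.arange(0, 40, 0.1)
    v = -70 + 100 * np.exp(-((t - 20) ** 2) / 2.0)
    print(td_weight_change(v, t_pre=15.0, dt_ms=5.0))  # EPSP before spike -> > 0
    print(td_weight_change(v, t_pre=25.0, dt_ms=5.0))  # EPSP after spike  -> < 0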
Figure 1: Synaptic Plasticity in a Model Neocortical Neuron. (A) (Left panel) EPSP in the model neuron evoked by a presynaptic spike (S1) at an excitatory synapse ("before"). Pairing this presynaptic spike with postsynaptic spiking after a 5 ms delay ("pairing") induces long-term potentiation ("after"). (Right panel) If presynaptic stimulation (S2) occurs 5 ms after postsynaptic firing, the synapse is weakened, resulting in a corresponding decrease in peak EPSP amplitude. (B) Critical window for synaptic plasticity obtained by varying the delay between pre- and postsynaptic spiking (negative delays refer to presynaptic before postsynaptic spiking).

3 RESULTS

3.1 Learning Sequences using Temporally Asymmetric Hebbian Plasticity

To see how a network of model neurons can learn sequences using the learning mechanism described above, consider the simplest case of two excitatory neurons N1 and N2 connected to each other, receiving inputs from two separate input neurons I1 and I2 (Figure 2A). Suppose input neuron I1 fires before input neuron I2, causing neuron N1 to fire (Figure 2B). The spike from N1 results in a sub-threshold EPSP in N2 due to the synapse S2. If input arrives from I2 any time between 1 and 12 ms after this EPSP and the temporal summation of these two EPSPs causes N2 to fire, the synapse S2 will be strengthened. The synapse S1, on the other hand, will be weakened because the EPSP due to N2 arrives a few milliseconds after N1 has fired. Thus, on a subsequent trial, when input I1 causes neuron N1 to fire, N1 in turn causes N2 to fire several milliseconds before input I2 occurs, due to the potentiation of the recurrent synapse S2 in previous trial(s) (Figure 2C). Input neuron I2 can thus be inhibited by the predictive feedback from N2 just before the occurrence of imminent input activity (marked by an asterisk in Figure 2C). This inhibition prevents input I2 from further exciting N2. Similarly, a positive feedback loop between neurons N1 and N2 is avoided because the synapse S1 was weakened in previous trial(s) (see arrows in Figures 2B and 2C). Figure 2D depicts the process of potentiation and depression of the two synapses as a function of the number of exposures to the I1-I2 input sequence. The decrease in latency of the predictive spike elicited in N2 with respect to the timing of input I2 is shown in Figure 2E. Notice that before learning, the spike occurs 3.2 ms after the occurrence of the input whereas after learning, it occurs 7.7 ms before the input.

Figure 2: Learning to Predict using Temporally Asymmetric Hebbian Learning. (A) Network of two model neurons N1 and N2 recurrently connected via excitatory synapses S1 and S2, with input neurons I1 and I2. N1 and N2 inhibit the input neurons via inhibitory interneurons (darkened circles). (B) Network activity elicited by the sequence I1 followed by I2. (C) Network activity for the same sequence after 40 trials of learning. Due to strengthening of recurrent synapse S2, recurrent excitation from N1 now causes N2 to fire several ms before the expected arrival of input I2 (dashed line), allowing it to inhibit I2 (asterisk). Synapse S1 has been weakened, preventing re-excitation of N1 (downward arrows show decrease in EPSP). (D) Potentiation and depression of synapses S1 and S2 respectively during the course of learning. Synaptic strength was defined as maximal synaptic conductance in the kinetic model of synaptic transmission [9]. (E) Latency of predictive spike in N2 during the course of learning, measured with respect to the time of input spike in I2 (dotted line).
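A caricature of this two-neuron circuit as a sketch: spike times and weights evolve over trials so that N2's spike migrates from after to before the arrival of input I2. All constants below (delays, the scaling of recurrent drive, learning rate) are illustrative assumptions and do not reproduce the paper's biophysics; only the sign of the latency shift is the point.

    import numpy as np

    eta, tau = 0.05, 10.0
    w_s2, w_s1 = 0.02, 0.3        # S2: N1 -> N2, S1: N2 -> N1 (assumed units)
    latencies = []
    for trial in range(40):
        t_i2 = 10.0               # I2 arrives 10 ms after I1 (assumed)
        t_n1 = 1.0                # N1 fires shortly after I1
        # A stronger S2 lets recurrent drive from N1 fire N2 earlier,
        # eventually before the input from I2 arrives.
        t_n2 = max(t_n1 + 1.0, t_i2 + 3.0 - 60.0 * w_s2)
        latencies.append(t_n2 - t_i2)   # negative once the spike is predictive
        dt = t_n2 - t_n1                # N1 (pre of S2) fires before N2 (post)
        w_s2 = min(w_s2 + eta * np.exp(-dt / tau), 1.0)  # pre-before-post: LTP
        # S1 sees the opposite order (its pre, N2, fires after N1), so it
        # depresses; it is only tracked here, not fed back into the dynamics.
        w_s1 = max(w_s1 - eta * np.exp(-dt / tau), 0.0)

    print(latencies[0], latencies[-1])  # starts positive, ends negative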
The network was initialized such that within a chain, a given Predictive Sequence Learning in Recurrent Neocortical Circuits A 167 D SI S2 Excitato ry Neuron N2 0 .03 6 Synapse S2 Input Neuron 11 r. Input Neuron 12 Input I B 66 Input 2 C Before Learning o After Learning ?0000000000 0 000000 o NI 10 20 30 40 Time (number of trials) E II 11 4 V) ? ??? ? 2 .. . . . .. ~ ~j~ 12 1 I~ N2 ~~ 15 illS 12 L ~ ~~< 15 illS .. 0? ? .:: .~ -2 "E ...d:: -4 0 ;>, -6 ... . ... u " ~ j -8 0 ~ .. 10 20 30 40 Time (number of trials) Figure 2: Learning to Predict using Temporally Asymmetric Hebbian Learning. (A) Network of two model neurons Nl and N2 recurrently connected via excitatory synapses SI and S2, with input neurons 11 and 12. Nl and N2 inhibit the input neurons via inhibitory interneurons (darkened circles). (B) Network activity elicited by the sequence 11 followed by 12. (C) Network activity for the same sequence after 40 trials of learning. Due to strengthening of recurrent synapse S2. recurrent excitation from Nl now causes N2 to fire several ms before the expected arrival of input 12 (dashed line). allowing it to inhibit 12 (asterisk). Synapse SI has been weakened. preventing re-excitation of Nl (downward arrows show decrease in EPSP). (D) Potentiation and depression of synapses S 1 and S2 respectively during the course of learning. Synaptic strength was defined as maximal synaptic conductance in the kinetic model of synaptic transmission [9]. (E) Latency of predictive spike in N2 during the course of learning measured with respect to the time of input spike in 12 (dotted line). excitatory neuron received both excitation and inhibition from its predecessors and successors (Figure 3B). Excitatory and inhibitory synaptic currents were calculated using kinetic models of synaptic transmission based on properties of AMPA and GABAA receptors as determined from whole-cell recordings [9]. Maximum conductances for all synapses were initialized to small positive values (dotted lines in Figure 3C) with a slight asymmetry in the recurrent excitatory connections for breaking symmetry between the two chains. The network was exposed alternately to leftward and rightward moving stimuli for a total of 100 trials. The excitatory connections (labeled 'EXC' in Figure 3B) were modified according to the asymmetric Hebbian learning rule in Figure IB while the excitatory connections onto the inhibitory interneuron (labeled 'INH') were modified according to an asymmetric anti-Hebbian learning rule that reversed the polarity of the rule in Figure lB . The synaptic conductances learned by two neurons (marked NI and N2 in Figure 3A) located at corresponding positions in the two chains after 100 trials of exposure to the moving stimuli are shown in Figure 3C (solid line). Initially, for rightward motion, the slight asymmetry in R. P N. Rao and T. J. Sejnowski 168 B A Recurrent Excitatory Connections (EXC) l -4 -3 -2 -I () 2 4 Recurrent Inhibitory Connections (INH) - - - Input Stimulus (Rightward)f---- c Neuron NI Neuron N2 EXC -4 -3 -2 -l () 2 4 D Neuron NI Neuron N2 (Right-Selective) (Left-Selective) ~~ ~~ Rightward I11II111111 Motion -.PJUL. I11I I111111 Synapse Number LLflward Motion Synapse Number Figure 3: Direction Selectivity in the Model. 
(A) A model network consisting of two chains of recurrently connected neurons receiving retinotopic inputs_ A given neuron receives recurrent excita-tiorrand recurrent inhibition (white-headed arrows) as well as inhibition (dark-headed arrows) from its counterpart in the other chain_ (B) Recurrent connections to a given neuron (labeled '0') arise from 4 preceding and 4 succeeding neurons in its chain. Inhibition at a given neuron is mediated via a GAB Aergic interneuron (darkened circle). (C) Synaptic strength of recurrent excitatory (EXC) and inhibitory (IN H) connections to neurons Nt and N2 before (dotted lines) and after learning (solid lines). Synapses were adapted during 100 trials of exposure to alternating leftward and rightward moving stimuli. (D) Responses of neurons Nt and N2 to rightward and leftward moving stimuli_ As a result of learning, neuron N 1 has become selective for rightward motion (as have other neurons in the same chain) while neuron N2 has become selective for leftward motion_ In the preferred direction, each neuron starts firing several milliseconds before the actual input arrives at its soma (marked by an asterisk) due to recurrent excitation from preceding neurons_ The dark triangle represents the start of input stimulation in the network. the initial excitatory connections of neuron Nl allows it to fire slightly earlier than neuron N2 thereby inhibiting neuron N2. Additionally, since the EPSPs from neurons lying on the left of Nt occur before Nl fires, the excitatory synapses from these neurons are strengthened while the excitatory synapses from these same neurons to the inhibitory interneuron are weakened according to the two learning rules mentioned above. On the other hand, the excitatory synapses from neurons lying on the right side ofNl are weakened while inhibitory connections are strengthened since the EPSPs due to these connections occur after Nl has fired. The synapses on neuron N2 and its associated interneuron remain unaltered since there is no postsynaptic firing (due to inhibition by Nl) and hence no back-propagating action potentials in the dendrite. As shown in Figure 3C, after lOO trials, the excitatory and inhibitory connections to neuron Nl exhibit a marked asymmetry, with excitation originating from neurons on the left and inhibition from neurons on the right. Neuron N2 exhibits the opposite pattern of connectivity. As expected, neuron Nl was found to be selective for rightward motion while neuron N2 was selective for leftward motion (Figure 3D). Moreover, when stimulus motion is in the preferred direction, each neuron starts firing several milliseconds before the time of arrival of the input stimulus at its soma (marked by an asterisk) due to recurrent excitation from preceding neurons. Conversely, motion in the nonpreferred direction triggers recurrent inhibition from preceding neurons as well as inhibition 169 Predictive Sequence Learning in Recurrent Neocortical Circuits Monkey Data 5~ __ . ~'~?~?u'~'~"_'~~ ...? . ~ .? ~3- .. ~ 'Cd . . . . - , . . _~_~_~~ rftr_-4 :dc' ............ tH1? ? .. ' --.... ?... - ... p.!i! ,.. '. . ~ Model ____~'= _____?_? n. sO' rtn h n ... + j, ~~.~ : ?jHT!:%??: ; Q) ? ? _. r ~.; "::n:; ;,::= :: : :: em 6ft. 1'b'1 ..... Q. en 1 ?? ? ~ ~ C\I ? . 6 Stimylus _. 0.. -",., ? ....... _?h.e r-' lime (rLonds) 1?~ row +.+' h, =f- d "'d n h 'ft. . . . . . . . . . 'h,tr ' , hzc+ no d ?? IL ? ? - g ~ . . - 'hz ? ? ~ ;::I:; ~I~ N 0 d + te- _ _ _ _ _ _ _ _~ ----- stimulus 50 time (ms) 100 ....... 
from the active neuron in the corresponding position in the other chain. Thus, the learned pattern of connectivity allows the direction selective neurons comprising the two chains in the network to conjointly code for and predict the moving input stimulus in each direction. The average firing rate of neurons in the network for the preferred direction was 75.7 Hz, which is in the range of cortical firing rates for moving bar stimuli. Assuming a 200 μm separation between excitatory model neurons in each chain and utilizing known values for the cortical magnification factor in monkey striate cortex, one can estimate the preferred stimulus velocity of model neurons to be 3.1°/s in the fovea and 27.9°/s in the periphery (at an eccentricity of 8°). Both of these values fall within the range of monkey striate cortical velocity preferences [11].

The model predicts that the neuroanatomical connections for a direction selective neuron should exhibit a pattern of asymmetrical excitation and inhibition similar to Figure 3C. A recent study of direction selective cells in awake monkey V1 found excitation on the preferred side of the receptive field and inhibition on the null side, consistent with the pattern of connections learned by the model [11]. For comparison with this experimental data, spontaneous background activity in the model was generated by incorporating Poisson-distributed random excitatory and inhibitory alpha synapses on the dendrite of each model neuron. Post-stimulus time histograms (PSTHs) and space-time response plots were obtained by flashing optimally oriented bar stimuli at random positions in the cell's activating region. As shown in Figure 4, there is good qualitative agreement between the response plot for a complex cell and that for the model. Both space-time plots show a progressive shortening of response onset time and an increase in response transiency going in the preferred direction: in the model, this is due to recurrent excitation from progressively closer cells on the preferred side. Firing is reduced to below background rates 40-60 ms after stimulus onset in the upper part of the plots: in the model, this is due to recurrent inhibition from cells on the null side. The response transiency and shortening of response time course appears as a slant in the space-time maps, which can be related to the neuron's velocity sensitivity [11].

Figure 4: Comparison of Monkey and Model Space-Time Response Plots. (Left) Sequence of PSTHs obtained by flashing optimally oriented bars at 20 positions across the 5°-wide receptive field (RF) of a complex cell in alert monkey V1 (from [11]). The cell's preferred direction is from the part of the RF represented at the bottom towards the top. Flash duration = 56 ms; inter-stimulus delay = 100 ms; 75 stimulus presentations. (Right) PSTHs obtained from a model neuron after stimulating the chain of neurons at 20 positions to the left and right side of the given neuron. Lower PSTHs represent stimulations on the preferred side while upper PSTHs represent stimulations on the null side.
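The two-chain simulation of section 3.2 can be caricatured with rate units rather than compartmental neurons; a minimal sketch, in which the units, time step, learning rate, and weight clipping are all illustrative assumptions and the asymmetric rule is applied to activity at successive time steps instead of to membrane potentials:

    import numpy as np

    n, steps, eta = 20, 200, 0.01
    w = 0.01 * np.ones((n, n))        # recurrent excitatory weights
    np.fill_diagonal(w, 0.0)

    for trial in range(100):
        r_prev = np.zeros(n)
        for t in range(steps):
            stim = np.zeros(n)
            pos = (t // 8) % n        # rightward-moving pulse, 8 steps per unit
            stim[pos] = 1.0
            r = np.tanh(stim + w @ r_prev)
            # Asymmetric rule: presynaptic activity just before postsynaptic
            # activity strengthens w[post, pre]; the reverse order weakens it.
            w += eta * (np.outer(r, r_prev) - np.outer(r_prev, r))
            np.clip(w, 0.0, 1.0, out=w)
            np.fill_diagonal(w, 0.0)
            r_prev = r

    # After training, weights onto a unit come mostly from its predecessors,
    # so activity at unit k-1 pre-activates unit k before its input arrives.
    print(w[10, 9], w[10, 11])        # expect w[10, 9] > w[10, 11]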
When exposed to moving stimuli, neurons in a simulated network learned to fire several milliseconds before the expected arrival of an input stimulus and developed direction selectivity as a consequence of learning. The model predicts that a direction selective neuron should start responding several milliseconds before the preferred stimulus enters its retinal input dendritic field (such predictive neural activity has recently been reported in retinal ganglion cells [10)). Temporally asymmetric Hebbian learning has previously been suggested as a possible mechanism for sequence learning in the hippocampus [4] and as an explanation for the asymmetric expansion of hippocampal place fields during route learning [12]. Some of these theories require relatively long temporal windows of synaptic plasticity (on the order of several hundreds of milliseconds) [4] while others have utilized temporal windows in the millisecond range for coincidence detection [3]. Sequence learning in our model is based on a window of plasticity in the 10 to 15 ms range which is roughly consistent with recent physiological observations [2] (see also [13)). The idea that prediction and sequence learning may constitute an important goal of the neocortex has previously been suggested in the context of statistical and information theoretic models of cortical processing [4, 5,6]. Our biophysical simulations suggest a possible implementation of such models in cortical circuitry. Given the universality ofthe problem of encoding and generating temporal sequences in both sensory and motor domains, the hypothesis of predictive sequence learning in recurrent neocortical circuits may help provide a unifying principle for studying cortical structure and function. References [1] R. 1. Douglas et al., Science 269, 981 (1995); H. Suarez et aI., 1. Neurosci. 15,6700 (1995); R. Maex and G. A. Orban, 1. Neurophysiol. 75, 1515 (1996); P. Mineiro and D. Zipser, Neural Comput. 10, 353 (1998); F. S. Chance et aI., Nature Neuroscience 2, 277 (1999). [2] H. Markram et al., Science 275, 213 (1997); W. B. Levy and O. Steward, Neuroscience 8, 791 (1983); D. Debanne et aI., Proc. Natl. Acad. Sci. U.S.A. 91, 1148 (1994); L. I. Zhang et aI., Nature 395, 37 (1998); G. Q. Bi and M. M. Poo, 1. Neurosci. 18, 10464 (1998). [3] w. Gerstner et al., Nature 383, 76 (1996); R. Kempter et al., in Advances in Neural Info. Proc. Systems 11, M. S. Kearns, S. A. Solla and D. A. Cohn, Eds. (MIT Press, Cambridge, MA, 1999), pp. 125-131. [4] L. F. Abbott and K. I. Blum, Cereb. Cortex 6, 406 (1996); W. Gerstner and L. F. Abbott, 1. Comput. Neurosci. 4, 79 (1997); A. A. Minai and W. B. Levy, in Proceedings of the 1993 World Congress on Neural Networks II, 505 (1993). [5] P. R. Montague and T. J. Sejnowski, Learning and Memory 1, 1 (1994); P. R. Montague et al., Nature 377, 725 (1995); w. Schultz et aI., Science 275, 1593 (1997). [6] R. P. N. Rao and D. H. Ballard, Neural Computation 9, 721 (1997); R. P. N. Rao and D. H. Ballard, Nature Neuroscience 2, 79 (1999); H. Barlow, Perception 27, 885 (1998). [7] R. S. Sutton, Machine Learning 3, 9 (1988); R. S. Sutton and A. G. Barto, in Learning and Computational Neuroscience: Foundations of Adaptive Networks, M. Gabriel and J. W. Moore, editors (MIT Press, Cambridge, MA, 1990). [8] Z. F. Mainen and T. 1. Sejnowski, Nature 382, 363 (1996). [9] A. Destexhe et al., in Methods in NeurolUll Modeling, C. Koch and I. Segev, editors, (MIT Press, Cambridge, MA, 1998). [10] M.1. Berry et al., Nature 398,334 (1999). [11] M. S. 
[11] M. S. Livingstone, Neuron 20, 509 (1998).
[12] M. R. Mehta et al., Proc. Natl. Acad. Sci. U.S.A. 94, 8918 (1997).
[13] L. F. Abbott and S. Song, in Advances in Neural Info. Proc. Systems 11, M. S. Kearns, S. A. Solla and D. A. Cohn, Eds. (MIT Press, Cambridge, MA, 1999), pp. 69-75.
Hierarchical Image Probability (HIP) Models

Clay D. Spence and Lucas Parra
Sarnoff Corporation, CN5300, Princeton, NJ 08543-5300
{cspence, lparra}@sarnoff.com

Abstract

We formulate a model for probability distributions on image spaces. We show that any distribution of images can be factored exactly into conditional distributions of feature vectors at one resolution (pyramid level) conditioned on the image information at lower resolutions. We would like to factor this over positions in the pyramid levels to make it tractable, but such factoring may miss long-range dependencies. To fix this, we introduce hidden class labels at each pixel in the pyramid. The result is a hierarchical mixture of conditional probabilities, similar to a hidden Markov model on a tree. The model parameters can be found with maximum likelihood estimation using the EM algorithm. We have obtained encouraging preliminary results on the problems of detecting various objects in SAR images and target recognition in optical aerial images.

1 Introduction

Many approaches to object recognition in images estimate Pr(class | image). By contrast, a model of the probability distribution of images, Pr(image), has many attractive features. We could use this for object recognition in the usual way by training a distribution for each object class and using Bayes' rule to get Pr(class | image) = Pr(image | class) Pr(class) / Pr(image). Clearly there are many other benefits of having a model of the distribution of images, since any kind of data analysis task can be approached using knowledge of the distribution of the data. For classification we could attempt to detect unusual examples and reject them, rather than trusting the classifier's output. We could also compress, interpolate, suppress noise, extend resolution, fuse multiple images, etc.

Many image analysis algorithms use probability concepts, but few treat the distribution of images. Zhu, Wu and Mumford [9] do this by computing the maximum entropy distribution given a set of statistics for some features. This seems to work well for textures, but it is not clear how well it will model the appearance of more structured objects. There are several algorithms for modeling the distributions of features extracted from the image, instead of the image itself. The Markov Random Field (MRF) models are an example of this line of development; see, e.g., [5, 4]. Unfortunately they tend to be very expensive computationally.

[Figure 1: Pyramids and feature notation.]

In De Bonet and Viola's flexible histogram approach [2, 1], features are extracted at multiple image scales, and the resulting feature vectors are treated as a set of independent samples drawn from a distribution. They then model this distribution of feature vectors with Parzen windows. This has given good results, but the feature vectors from neighboring pixels are treated as independent when in fact they share exactly the same components from lower resolutions. To fix this we might want to build a model in which the features at one pixel of one pyramid level condition the features at each of several child pixels at the next higher-resolution pyramid level. The multiscale stochastic process (MSP) methods do exactly that. Luettgen and Willsky [7], for example, applied a scale-space auto-regression (AR) model to texture discrimination.
They use a quadtree or quadtree-like organization of the pixels in an image pyramid, and model the features in the pyramid as a stochastic process from coarse-to-fine levels along the tree. The variables in the process are hidden, and the observations are sums of these hidden variables plus noise. The Gaussian distributions are a limitation of MSP models. The result is also a model of the probability of the observations on the tree, not of the image.

All of these methods seem well-suited for modeling texture, but it is unclear how we might build the models to capture the appearance of more structured objects. We will argue below that the presence of objects in images can make local conditioning like that of the flexible histogram and MSP approaches inappropriate. In the following we present a model for probability distributions of images, in which we try to move beyond texture modeling. This hierarchical image probability (HIP) model is similar to a hidden Markov model on a tree, and can be learned with the EM algorithm. In preliminary tests of the model on classification tasks the performance was comparable to that of other algorithms.

2 Coarse-to-fine factoring of image distributions

Our goal will be to write the image distribution in a form similar to $\Pr(I) \sim \Pr(F_0 \mid F_1)\,\Pr(F_1 \mid F_2)\cdots$, where $F_l$ is the set of feature images at pyramid level $l$. We expect that the short-range dependencies can be captured by the model's distribution of individual feature vectors, while the long-range dependencies can be captured somehow at low resolution. The large-scale structures affect finer scales by the conditioning. In fact we can prove that a coarse-to-fine factoring like this is correct.

From an image $I$ we build a Gaussian pyramid (repeatedly blur-and-subsample, with a Gaussian filter). Call the $l$-th level $I_l$; e.g., the original image is $I_0$ (Figure 1). From each Gaussian level $I_l$ we extract some set of feature images $F_l$. Sub-sample these to get feature images $G_l$. Note that the images in $G_l$ have the same dimensions as $I_{l+1}$. We denote by $\bar{G}_l$ the set of images containing $I_{l+1}$ and the images in $G_l$. We further denote the mapping from $I_l$ to $\bar{G}_l$ by $g_l$.

Suppose now that $g_0 : I_0 \mapsto \bar{G}_0$ is invertible. Then we can think of $g_0$ as a change of variables. If we have a distribution on a space, its expressions in two different coordinate systems are related by multiplying by the Jacobian. In this case we get $\Pr(I_0) = |g_0| \Pr(\bar{G}_0)$. Since $\bar{G}_0 = (G_0, I_1)$, we can factor $\Pr(\bar{G}_0)$ to get $\Pr(I_0) = |g_0| \Pr(G_0 \mid I_1) \Pr(I_1)$. If $g_l$ is invertible for all $l \in \{0, \ldots, L-1\}$, then we can simply repeat this change-of-variables and factoring procedure to get
$$\Pr(I) = \Bigl[\prod_{l=0}^{L-1} |g_l| \Pr(G_l \mid I_{l+1})\Bigr] \Pr(I_L). \qquad (1)$$
This is a very general result, valid for all $\Pr(I)$, no doubt with some rather mild restrictions to make the change of variables valid. The restriction that $g_l$ be invertible is strong, but many such feature sets are known to exist, e.g., most wavelet transforms on images. We know of a few ways that this condition can be relaxed, but further work is needed here.

3 The need for hidden variables

For the sake of tractability we want to factor $\Pr(G_l \mid I_{l+1})$ over positions, something like $\Pr(I) \sim \prod_l \prod_{x \in I_{l+1}} \Pr(g_l(x) \mid f_{l+1}(x))$, where $g_l(x)$ and $f_{l+1}(x)$ are the feature vectors at position $x$. The dependence of $g_l$ on $f_{l+1}$ expresses the persistence of image structures across scale, e.g., an edge is usually detectable as such in several neighboring pyramid levels.
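Before introducing the hidden variables, it may help to fix the pyramid bookkeeping of the previous section in code. The following is a minimal sketch, not the authors' implementation: the binomial blur, the difference-based `extract_features`, and the subsampling factor of two are all assumptions chosen for brevity.

```python
import numpy as np

def blur_and_subsample(img):
    """One Gaussian-pyramid step: separable binomial blur, then keep every other pixel."""
    k = np.array([1., 4., 6., 4., 1.]) / 16.0          # binomial approximation to a Gaussian
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    tmp = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
    return tmp[::2, ::2]

def extract_features(level):
    """Hypothetical feature mapping F_l; here, horizontal/vertical differences."""
    fx = np.diff(level, axis=1, append=level[:, -1:])
    fy = np.diff(level, axis=0, append=level[-1:, :])
    return np.stack([fx, fy])                          # shape: (n_features, H, W)

def build_hip_inputs(image, n_levels):
    """Return pairs (g_l, I_{l+1}): sub-sampled features aligned with the next level."""
    pyramid = [image]
    for _ in range(n_levels):
        pyramid.append(blur_and_subsample(pyramid[-1]))
    pairs = []
    for l in range(n_levels):
        f_l = extract_features(pyramid[l])
        g_l = f_l[:, ::2, ::2]                         # G_l has the dimensions of I_{l+1}
        pairs.append((g_l, pyramid[l + 1]))
    return pairs

pairs = build_hip_inputs(np.random.rand(64, 64), n_levels=3)
for g_l, nxt in pairs:
    assert g_l.shape[1:] == nxt.shape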
The flexible histogram and MSP methods share this structure. While it may be plausible that $f_{l+1}(x)$ has a strong influence on $g_l(x)$, we argue now that this factorization and conditioning is not enough to capture some properties of real images.

Objects in the world cause correlations and non-local dependencies in images. For example, the presence of a particular object might cause a certain kind of texture to be visible at level $l$. Usually local features $f_{l+1}$ by themselves will not contain enough information to infer the object's presence, but the entire image $I_{l+1}$ at that layer might. Thus $g_l(x)$ is influenced by more of $I_{l+1}$ than the local feature vector. Similarly, objects create long-range dependencies. For example, an object class might result in a kind of texture across a large area of the image. If an object of this class is always present, the distribution may factor, but if such objects aren't always present and can't be inferred from lower-resolution information, the presence of the texture at one location affects the probability of its presence elsewhere.

We introduce hidden variables to represent the non-local information that is not captured by local features. They should also constrain the variability of features at the next finer scale. Denoting them collectively by $A$, we assume that conditioning on $A$ allows the distributions over feature vectors to factor. In general, the distribution over images becomes
$$\Pr(I) \propto \sum_A \Bigl\{\prod_{l=0}^{L-1} \prod_{x \in I_{l+1}} \Pr(g_l(x) \mid f_{l+1}(x), A)\Bigr\} \Pr(A \mid I_L)\,\Pr(I_L). \qquad (2)$$
As written this is absolutely general, so we need to be more specific. In particular we would like to preserve the conditioning of higher-resolution information on coarser-resolution information, and the ability to factor over positions.

[Figure 2: Tree structure of the conditional dependency between hidden variables in the HIP model. With subsampling by two, this is sometimes called a quadtree structure.]

As a first model we have chosen the following structure for our HIP model:¹
$$\Pr(I) \propto \sum_{A_0, \ldots, A_{L-1}} \prod_{l=0}^{L-1} \prod_{x \in I_{l+1}} \bigl[\Pr(g_l(x) \mid f_{l+1}(x), a_l(x))\,\Pr(a_l(x) \mid a_{l+1}(x))\bigr]. \qquad (3)$$
To each position $x$ at each level $l$ we attach a hidden discrete index or label $a_l(x)$. The resulting label image $A_l$ for level $l$ has the same dimensions as the images in $G_l$. Since $a_l(x)$ codes non-local information we can think of the labels $A_l$ as a segmentation or classification at the $l$-th pyramid level. By conditioning $a_l(x)$ on $a_{l+1}(x)$, we mean that $a_l(x)$ is conditioned on $a_{l+1}$ at the parent pixel of $x$. This parent-child relationship follows from the sub-sampling operation. For example, if we sub-sample by two in each direction to get $G_l$ from $F_l$, we condition the variable $a_l$ at $(x, y)$ in level $l$ on $a_{l+1}$ at location $(\lfloor x/2 \rfloor, \lfloor y/2 \rfloor)$ in level $l+1$ (Figure 2). This gives the dependency graph of the hidden variables a tree structure. Such a probabilistic tree of discrete variables is sometimes referred to as a belief network. By conditioning child labels on their parents, information propagates through the layers to other areas of the image while accumulating information along the way. For the sake of simplicity we've chosen $\Pr(g_l \mid f_{l+1}, a_l)$ to be normal with mean $M_{a_l} f_{l+1} + \bar{g}_{a_l}$ and covariance $\Sigma_{a_l}$. We also constrain $M_{a_l}$ and $\Sigma_{a_l}$ to be diagonal.

¹ This is the $l = L$ factor of Equation 3, which should be read as having no quantities $f_{L+1}$ or $a_{L+1}$. The proportionality factor includes $\Pr(A_L, I_L)$, which we model as $\prod_x \Pr(g_L(x) \mid a_L(x))\,\Pr(a_L(x))$.
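To make Equation 3 concrete, here is a small sketch of one of its factors under the diagonal-Gaussian choice just described, together with the parent lookup that gives the label tree its quadtree structure. The parameter names and shapes (`M_a`, `m_a`, `var_a`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def parent_index(x, y):
    """With subsampling by two, the parent of pixel (x, y) lives at (x//2, y//2)."""
    return x // 2, y // 2

def log_gaussian_factor(g, f_parent, M_a, m_a, var_a):
    """log Pr(g_l(x) | f_{l+1}(x), a_l(x) = a) for the diagonal-Gaussian HIP choice:
    mean M_a @ f_parent + m_a, diagonal covariance var_a (all per-label parameters)."""
    mean = M_a @ f_parent + m_a
    resid = g - mean
    return -0.5 * np.sum(resid**2 / var_a + np.log(2 * np.pi * var_a))

# Toy usage: 2 feature dimensions, one pixel, one candidate label.
rng = np.random.default_rng(0)
M_a   = np.diag(rng.uniform(0.5, 1.5, size=2))   # constrained diagonal, as in the text
m_a   = np.zeros(2)
var_a = np.full(2, 0.1)
g     = rng.normal(size=2)                       # g_l(x)
f_par = rng.normal(size=2)                       # f_{l+1}(parent_index(x, y))
print(log_gaussian_factor(g, f_par, M_a, m_a, var_a))
```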
4 EM algorithm

Thanks to the tree structure, the belief network for the hidden variables is relatively easy to train with an EM algorithm. The expectation step (summing over the $a_l$'s) can be performed directly. If we had chosen a more densely-connected structure with each child having several parents, we would need either an approximate algorithm or Monte Carlo techniques. The expectation is weighted by the probability of a label or a parent-child pair of labels given the image. This can be computed in a fine-to-coarse-to-fine procedure, i.e., working from the leaves to the root and then back out to the leaves. The method is based on belief propagation [6]. With some care an efficient algorithm can be worked out, but we omit the details due to space constraints. Once we can compute the expectations, the normal distribution makes the M-step tractable; we simply compute the updated $\bar{g}_{a_l}$, $\Sigma_{a_l}$, $M_{a_l}$, and $\Pr(a_l \mid a_{l+1})$ as combinations of various expectation values.
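Since the E-step details are omitted in the text, the sketch below only illustrates the M-step the authors describe: given posterior label weights from belief propagation, the diagonal-Gaussian parameters reduce to coordinate-wise weighted linear regressions. This is a plausible reading of the update under the stated constraints, not the paper's implementation.

```python
import numpy as np

def m_step_label(G, F, w, eps=1e-8):
    """Weighted-least-squares M-step for one hidden label a.

    G, F : (N, d) arrays of child features g_l(x) and parent features f_{l+1}(x).
    w    : (N,) posterior weights Pr(a_l(x) = a | I) from the E-step.
    With M_a and the covariance constrained diagonal, each coordinate is an
    independent 1-D weighted regression g_j ~ M_j * f_j + m_j.
    """
    W = w.sum() + eps
    fbar = (w[:, None] * F).sum(0) / W
    gbar = (w[:, None] * G).sum(0) / W
    cov_fg = (w[:, None] * (F - fbar) * (G - gbar)).sum(0) / W
    var_f  = (w[:, None] * (F - fbar)**2).sum(0) / W + eps
    M = cov_fg / var_f                               # diagonal of M_a
    m = gbar - M * fbar                              # mean offset g-bar_a
    resid = G - (F * M + m)
    var = (w[:, None] * resid**2).sum(0) / W + eps   # diagonal covariance
    return M, m, var
```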
5 Experiments

[Figure 3: Examples of aircraft ROIs. On the right are $A_z$ values from a jack-knife study of detection performance of HIP and HPNN models.]

[Figure 4: SAR images of three types of vehicles to be detected.]

We applied HIP to the problem of detecting aircraft in an aerial photograph of Logan airport. A simple template-matching algorithm was used to select forty candidate aircraft, twenty of which were false positives (Figure 3). Ten of the plane examples were used for training one HIP model and ten negative examples were used to train another. Because of the small number of examples, we performed a jack-knife study with ten random splits of the data. For features we used filter kernels that were polynomials of up to third order multiplying Gaussians. The HIP pyramid used subsampling by three in each direction. The test set ROC area for HIP had a mean of $A_z = 0.94$, while our HPNN algorithm [8] gave a mean $A_z$ of 0.65. The individual values are shown in Figure 3. (We compared with the HPNN because it had given $A_z = 0.86$ on a larger set of aircraft images including these, with a different set of features and subsampling by two.)

We also performed an experiment with the three target classes in the MSTAR public targets data set, to compare with the results of the flexible histogram approach of De Bonet et al. [1]. We trained three HIP models, one for each of the target vehicles BMP-2, BTR-70 and T-72 (Figure 4). As in [1] we trained each model on ten images of its class, one image for each of ten aspect angles, spaced approximately 36° apart. We trained one model for all ten images of a target, whereas De Bonet et al. trained one model per image. We first tried discriminating between vehicles of one class and other objects by thresholding $\log \Pr(I \mid \text{class})$, i.e., no model of other objects is used. For the tests, the other objects were taken from the test data for the two other vehicle classes, plus seven other vehicle classes. There were 1,838 images from these seven other classes, 391 BMP-2 test images, 196 BTR-70 test images, and 386 T-72 test images. The resulting ROC curves are shown in Figure 5a.

We then tried discriminating between pairs of target classes using HIP model likelihood ratios, i.e., $\log \Pr(I \mid \text{class}_1) - \log \Pr(I \mid \text{class}_2)$. Here we could not use the extra seven vehicle classes. The resulting ROC curves are shown in Figure 5b. The performance is comparable to that of the flexible histogram approach.

[Figure 5: ROC curves for vehicle detection in SAR imagery. (a) ROC curves obtained by thresholding the HIP likelihood of the desired class. (b) ROC curves for inter-class discrimination using ratios of likelihoods as given by HIP models; BMP-2 vs. T-72: $A_z = 0.79$, BMP-2 vs. BTR-70: $A_z = 0.82$, T-72 vs. BTR-70: $A_z = 0.89$.]

6 Conditional distributions of features

To further test the HIP model's fit to the image distribution, we computed several distributions of features $g_l(x)$ conditioned on the parent feature $f_{l+1}(x)$.² The empirical and computed distributions for a particular parent-child pair of features are shown in Figure 6. The conditional distributions we examined all had similar appearance, and all fit the empirical distributions well. Buccigrossi and Simoncelli [3] have reported such "bow-tie" shaped conditional distributions for a variety of features. We want to point out that such conditional distributions are naturally obtained for any mixture of Gaussian distributions with varying scales and zero means. The present HIP model learns such conditionals, in effect describing the features as non-stationary Gaussian variables.

[Figure 6: Empirical and HIP estimates of the distribution of a feature $g_l(x)$ conditioned on its parent feature $f_{l+1}(x)$.]

7 Conclusion

We have developed a class of image probability models we call hierarchical image probability or HIP models. To justify these, we showed that image distributions can be exactly represented as products over pyramid levels of distributions of sub-sampled feature images conditioned on coarser-scale image information. We argued that hidden variables are needed to capture long-range dependencies while allowing us to further factor the distributions over position. In our current model the hidden variables act as indices of mixture components. The resulting model is somewhat like a hidden Markov model on a tree. Our early results on classification problems showed good performance.

² This is somewhat involved; $\Pr(g_l \mid f_{l+1})$ is not just $\Pr(g_l \mid f_{l+1}, a_l)\,\Pr(a_l)$ summed over $a_l$, but $\sum_{a_l} \Pr(g_l, a_l \mid f_{l+1}) = \sum_{a_l} \Pr(g_l \mid f_{l+1}, a_l)\,\Pr(a_l \mid f_{l+1})$.

Acknowledgements

We thank Jeremy De Bonet and John Fisher for kindly answering questions about their work and experiments. Supported by the United States Government.

References

[1] J. S. De Bonet, P. Viola, and J. W. Fisher III. Flexible histograms: A multiresolution target discrimination model. In E. G. Zelnio, editor, Proceedings of SPIE, volume 3370, 1998.
[2] Jeremy S. De Bonet and Paul Viola. Texture recognition using a non-parametric multiscale statistical model. In Conference on Computer Vision and Pattern Recognition. IEEE, 1998.
[3] Robert W. Buccigrossi and Eero P. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. Technical Report 414, U. Penn. GRASP Laboratory, 1998. Available at ftp://ftp.cis.upenn.edu/pub/eero/buccigrossi97.ps.gz.
[4] Rama Chellappa and S. Chatterjee. Classification of textures using Gaussian Markov random fields. IEEE Trans. ASSP, 33:959-963, 1985.
[5] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. PAMI, PAMI-6(6):721-741, November 1984.
[6] Michael I. Jordan, editor.
Learning in Graphical Models, volume 89 of NATO Science Series D: Behavioral and Brain Sciences. Kluwer Academic, 1998.
[7] Mark R. Luettgen and Alan S. Willsky. Likelihood calculation for a class of multiscale stochastic models, with application to texture discrimination. IEEE Trans. Image Proc., 4(2):194-207, 1995.
[8] Clay D. Spence and Paul Sajda. Applications of multi-resolution neural networks to mammography. In Michael S. Kearns, Sara A. Solla, and David A. Cohn, editors, NIPS 11, pages 981-988, Cambridge, MA, 1998. MIT Press.
[9] Song Chun Zhu, Ying Nian Wu, and David Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627-1660, 1997.
856
1,785
Image representations for facial expression coding

Marian Stewart Bartlett*, U.C. San Diego, marni@salk.edu
Gianluca Donato, Digital Persona, Redwood City, CA, glanlucad@digitalpersona.com
Javier R. Movellan, U.C. San Diego, movellan@cogsci.ucsd.edu
Joseph C. Hager, Network Information Res., SLC, Utah, jchager@ibm.com
Paul Ekman, U.C. San Francisco, ekman@compuserve.com
Terrence J. Sejnowski, Howard Hughes Medical Institute, The Salk Institute; U.C. San Diego, terry@salk.edu

Abstract

The Facial Action Coding System (FACS) (9) is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These methods include unsupervised learning techniques for finding basis images such as principal component analysis, independent component analysis and local feature analysis, and supervised learning techniques such as Fisher's linear discriminants. These data-driven bases are compared to Gabor wavelets, in which the basis images are predefined. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96% accuracy for classifying 12 facial actions. The ICA representation employs 2 orders of magnitude fewer basis images than the Gabor representation and takes 90% less CPU time to compute for new images. The results provide converging support for using local basis images, high spatial frequencies, and statistical independence for classifying facial actions.

1 Introduction

Facial expressions provide information not only about affective state, but also about cognitive activity, temperament and personality, truthfulness, and psychopathology. The Facial Action Coding System (FACS) (9) is the leading method for measuring facial movement in behavioral science. FACS is performed manually by highly trained human experts. A FACS coder decomposes a facial expression into component muscle movements (Figure 1). Ekman and Friesen described 46 distinct facial movements, and over 7000 distinct combinations of such movements have been observed in spontaneous behavior. An automated system would make facial expression measurement more widely accessible as a research tool for behavioral science and medicine. Such a system would also have application in human-computer interaction tools and low bandwidth facial animation coding.

* To whom correspondence should be addressed. (UCSD 0523, La Jolla, CA 92093.) This research was supported by NIH Grant No. IF32 MH12417-01.

A number of systems have appeared in the computer vision literature for classifying facial expressions into a few basic categories of emotion, such as happy, sad, or surprised. While such approaches are important, an objective and detailed measure of facial activity such as FACS is needed for basic research into facial behavior. In a system being developed concurrently for automatic facial action coding, Cohn and colleagues (7) employ feature point tracking of a select set of image points. Techniques employing 2-D image filters have proven to be more effective than feature-based representations for face image analysis [e.g. (6)].
Here we examine image analysis techniques that densely analyze graylevel information in the face image. This work surveys and compares techniques for face image analysis as applied to automated FACS encoding.¹ The analysis focuses on methods for face image representation in which image graylevels are described as a linear superposition of basis images. The techniques were compared on a common image testbed using common similarity measures and classifiers.

We compared four representations in which the basis images were learned from the statistics of the face image ensemble. These include unsupervised learning techniques such as principal component analysis (PCA) and local feature analysis (LFA), which are learned from the second-order dependences among the image pixels, and independent component analysis (ICA), which is learned from the high-order dependencies as well. We also examined a representation obtained through supervised learning on the second-order image statistics, Fisher's linear discriminants (FLD). Classification performances with these data-driven basis images were compared to Gabor wavelets, in which the basis images were pre-defined. We examined properties of optimal basis images, where optimal was defined in terms of classification. Generalization to novel faces was evaluated using leave-one-out cross-validation. Two basic classifiers were employed: nearest neighbor and template matching, where the templates were the mean feature vectors for each class. Two similarity measures were employed for each classifier: Euclidean distance and cosine of the angle between feature vectors.

[Figure 1: a. The facial muscles underlying six of the 46 facial actions (AU 1, inner brow raiser; AU 2, outer brow raiser; AU 4, brow lowerer). b. Cropped face images and δ-images for three facial actions (AUs).]

¹ A detailed description of this work appears in (8).

2 Image Database

We collected a database of image sequences of subjects performing specified facial actions. Each sequence began with a neutral expression and ended with a high magnitude muscle contraction. For this investigation, we used 111 sequences from 20 subjects and attempted to classify 12 actions: 6 upper-face actions and 6 lower-face actions. Upper- and lower-face actions were analyzed separately since facial motions in the lower face do not affect the upper face, and vice versa (9).

The face was located in the first frame in each sequence using the centers of the eyes and mouth. These coordinates were obtained manually by a mouse click. The coordinates from Frame 1 were used to register the subsequent frames in the sequence. The aspect ratios of the faces were warped so that the eye and mouth centers coincided across all images. The three coordinates were then used to rotate the eyes to horizontal, scale, and finally crop a window of 60 × 90 pixels containing the upper or lower face. To control for variations in lighting, logistic thresholding and luminance scaling were performed (13). Difference images (δ-images) were obtained by subtracting the neutral expression in the first image of each sequence from the subsequent images in the sequence.

3 Unsupervised learning

3.1 Eigenfaces (PCA)

A number of approaches to face image analysis employ data-driven basis vectors learned from the statistics of the face image ensemble.
Techniques such as eigenfaces (17) employ principal component analysis, which is an unsupervised learning method based on the second-order dependencies among the pixels (the pixelwise covariances). PCA has been applied successfully to recognizing facial identity (17) and full facial expressions (14). Here we performed PCA on the dataset of δ-images, where each δ-image comprised a point in Rⁿ given by the brightness of the n pixels. The PCA basis images were the eigenvectors of the covariance matrix (see Figure 2a), and the first p components comprised the representation. Multiple ranges of components were tested, from p = 10 to p = 200, and performance was also tested excluding the first 1-3 components. Best performance of 79.3% correct was obtained with the first 30 principal components, using the Euclidean distance similarity measure and template matching classifier.

Padgett and Cottrell (14) found that a local PCA representation outperformed global PCA for classifying full facial expressions of emotion. Following the methods in (14), a set of local basis images was derived from the principal components of 15×15 image patches from randomly sampled locations in the δ-images (see Figure 2d). The first p principal components comprised a basis set for all image locations, and the representation was downsampled by a factor of 4. Best performance of 73.4% was obtained with components 2-30, using Euclidean distance and template matching. Unlike the findings in (14), local basis images obtained through PCA were not more effective than global PCA for facial action coding. A second local implementation of PCA, in which the principal components were calculated for fixed 15×15 image patches, also failed to improve over global PCA.

[Figure 2: a. First 4 PCA basis images. b. Four ICA basis images. The ICA basis images are local, spatially opponent, and adaptive. c. Gabor kernels are local, spatially opponent, and predefined. d. First 4 local PCA basis images.]

3.2 Local Feature Analysis (LFA)

Penev and Atick (15) recently developed a local, topographic representation based on second-order image statistics, called local feature analysis (LFA). The kernels are derived from the principal component axes, and consist of a "whitening" step to equalize the variance of the PCA coefficients, followed by a rotation to pixel space. We begin with the matrix P containing the principal component eigenvectors in its columns, with $\lambda_i$ the corresponding eigenvalues. Each row of the matrix K serves as an element of the LFA image dictionary²:
$$K = P V P^T \quad \text{where} \quad V = D^{-1/2} = \mathrm{diag}\Bigl(\frac{1}{\sqrt{\lambda_i}}\Bigr), \quad i = 1, \ldots, p. \qquad (1)$$
The rows of K were found to have spatially local properties, and are "topographic" in the sense that they are indexed by spatial location (15). The dimensionality of the LFA representation was reduced by employing an iterative sparsification algorithm based on multiple linear regression, described in (15).

The LFA representation attained 81.1% correct classification performance. Best performance was obtained using the first 155 kernels, the cosine similarity measure, and nearest neighbor classifier. Classification performance using LFA was not significantly different from the performance using PCA. Although a face recognition algorithm based on the principles of LFA outperformed Eigenfaces in the March 1995 FERET competition, the exact algorithm has not been disclosed. Our results suggest that an aspect of the algorithm other than the LFA representation accounts for the difference in performance.

² An image dictionary is a set of images that decomposes other images, e.g. through inner products. Here it finds the coefficients for the basis set K⁻¹.
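Equation (1) is easy to misread in compressed form, so here is a direct numpy transcription. Forming the full pixelwise covariance is assumed affordable; for 60×90 images one would in practice use the usual snapshot/SVD trick, which this sketch does not implement.

```python
import numpy as np

def lfa_kernels(X, p):
    """Local feature analysis kernels K = P V P^T, V = diag(1/sqrt(lambda_i)).

    X : (n_images, n_pixels) matrix of delta-images (mean-subtracted below).
    Returns K, whose rows are spatially local, topographically indexed kernels.
    Note: for 60x90 images the covariance is 5400x5400, so this literal
    transcription of Eq. (1) is expensive; it is meant only as illustration.
    """
    X = X - X.mean(axis=0)
    C = X.T @ X / X.shape[0]                  # pixelwise covariance
    lam, P = np.linalg.eigh(C)                # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:p]           # keep the top-p principal axes
    lam, P = lam[idx], P[:, idx]
    V = np.diag(1.0 / np.sqrt(np.maximum(lam, 1e-12)))
    return P @ V @ P.T                        # row i is the kernel at pixel i

# K @ image_vector gives the whitened, topographic LFA output.
```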
3.3 Independent Component Analysis (ICA)

Representations such as Eigenfaces, LFA, and FLD are based on the second-order dependencies among the pixels, but are insensitive to the high-order dependencies. High-order dependencies are relationships that cannot be described by a linear predictor. Independent component analysis (ICA) learns the high-order dependencies in addition to the second-order dependencies among the pixels.

The ICA representation was obtained by performing Bell & Sejnowski's infomax algorithm (4) (5) on the ensemble of δ-images in the rows of the matrix X. The images in X were assumed to be a linear mixture of an unknown set of independent source images, which were recovered through ICA. In contrast to PCA, the ICA source images were local in nature (see Figure 2b). These source images provided a basis set for the expression images. The coefficients of each image with respect to the new basis set were obtained from the estimated mixing matrix $A \approx W^{-1}$, where W is the ICA weight matrix [see (1), (2)]. Unlike PCA, there is no inherent ordering to the independent components of the dataset. We therefore selected as an ordering parameter the class discriminability of each component, defined as the ratio of between-class to within-class variance. Best performance of 95.5% was obtained with the first 75 components selected by class discriminability, using the cosine similarity measure and nearest neighbor classifier. Independent component analysis gave the best performance among all of the data-driven image kernels. Class discriminability analysis of a PCA representation was previously found to have little effect on classification performance with PCA (2).
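The class-discriminability ordering is simple enough to state in code. The sketch below assumes a one-way layout (between-class over within-class variance, per component); the variable names are illustrative.

```python
import numpy as np

def class_discriminability(B, labels):
    """Ratio of between-class to within-class variance for each ICA coefficient.

    B      : (n_examples, n_components) ICA coefficients of the delta-images.
    labels : (n_examples,) integer class labels (facial actions).
    Returns one ratio per component; sort descending to select components.
    """
    grand = B.mean(axis=0)
    between = np.zeros(B.shape[1])
    within = np.zeros(B.shape[1])
    for c in np.unique(labels):
        Bc = B[labels == c]
        between += len(Bc) * (Bc.mean(axis=0) - grand) ** 2
        within += ((Bc - Bc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

# e.g., keep the 75 most discriminable components:
# order = np.argsort(class_discriminability(B, labels))[::-1][:75]
```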
4 Supervised learning: Fisher's Linear Discriminants (FLD)

A class-specific linear projection of a PCA representation of faces was recently shown to improve identity recognition performance (3). The method employs a classic pattern recognition technique, Fisher's linear discriminant (FLD), to project the images into a $c-1$ dimensional subspace in which the $c$ classes are maximally separated. Best performance was obtained by choosing $p = 30$ principal components to first reduce the dimensionality of the data. The data was then projected down to 5 dimensions via the FLD projection matrix, $W_{fld}$. The FLD image dictionary was thus $W_{pca} W_{fld}$. Best performance of 75.7% correct was obtained with the Euclidean distance similarity measure and template matching classifier.

FLD provided a much more compact representation than PCA. However, unlike the results obtained by (3) for identity recognition, Fisher's linear discriminants did not improve over basic PCA (Eigenfaces) for facial action classification. The difference in performance may be due to the low dimensionality of the final representation here. Class discriminations that are approximately linear in high dimensions may not be linear when projected down to as few as 5 dimensions.

5 Predefined image kernels: Gabor wavelets

An alternative to the adaptive bases described above are wavelet decompositions based on predefined families of Gabor kernels. Gabor kernels are 2-D sine waves modulated by a Gaussian (Figure 2c). Representations employing families of Gabor filters at multiple spatial scales, orientations, and spatial locations have proven successful for recognizing facial identity in images (11). Here, the δ-images were convolved with a family of Gabor kernels $\psi_i$, defined as
$$\psi_i(\vec{x}) = \frac{\|\vec{k}_i\|^2}{\sigma^2} \exp\Bigl(-\frac{\|\vec{k}_i\|^2 \|\vec{x}\|^2}{2\sigma^2}\Bigr)\Bigl[\exp(i\,\vec{k}_i \cdot \vec{x}) - \exp\Bigl(-\frac{\sigma^2}{2}\Bigr)\Bigr], \qquad (2)$$
where
$$\vec{k}_i = \begin{pmatrix} k_\nu \cos\varphi_\mu \\ k_\nu \sin\varphi_\mu \end{pmatrix}, \quad k_\nu = 2^{-\frac{\nu+2}{2}}\pi, \quad \varphi_\mu = \mu\frac{\pi}{8}.$$
Following (11), the representation consisted of the amplitudes at 5 frequencies ($\nu = 0$-$4$) and 8 orientations ($\mu = 1$-$8$). Each filter output was downsampled by a factor $q$ and normalized to unit length. We tested the performance of the system using $q = 1, 4, 16$ and found that $q = 16$ yielded the best generalization rate. Best performance was obtained with the cosine similarity measure and nearest neighbor classifier. Classification performance with the Gabor representation was 95.5%. This performance was significantly higher than all of the data-driven approaches in the comparison except independent component analysis, with which it tied.
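A transcription of Equation (2) may help, since the extracted formula is hard to read. The kernel size and $\sigma = 2\pi$ below are conventional choices for this filter family, not values stated in the text.

```python
import numpy as np

def gabor_kernel(nu, mu, size=48, sigma=2 * np.pi):
    """Gabor kernel of Equation (2): frequency index nu, orientation index mu.

    Returns a complex array; the representation in the text uses the
    amplitudes (absolute values) of the filter outputs.
    """
    k_nu = 2.0 ** (-(nu + 2) / 2.0) * np.pi
    phi = mu * np.pi / 8.0
    kx, ky = k_nu * np.cos(phi), k_nu * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    ksq, rsq = kx**2 + ky**2, x**2 + y**2
    envelope = (ksq / sigma**2) * np.exp(-ksq * rsq / (2 * sigma**2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2)  # DC-free
    return envelope * carrier

# The full family used in the text: 5 frequencies x 8 orientations.
bank = [gabor_kernel(nu, mu) for nu in range(5) for mu in range(1, 9)]
```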
6 Results and Conclusions

Table 1: Summary of classification performance (percent correct) for 12 facial actions.

    PCA 79.3 ±3.9    Local PCA 73.4 ±4.2    LFA 81.1 ±3.7
    ICA 95.5 ±2.0    FLD 75.7 ±4.1          Gabor 95.5 ±2.0

We have compared a number of different image analysis methods on a difficult classification problem, the classification of facial actions. Best performances were obtained with the Gabor and ICA representations, which both achieved 95.5% correct classification (see Table 1). The performance of these two methods equaled the agreement level of expert human subjects on these images (94%). Image representations derived from the second-order statistics of the dataset (PCA and LFA) performed in the 80% accuracy range. An image representation derived from supervised learning on the second-order statistics (FLD) also did not significantly differ from PCA.

We also obtained evidence that high spatial frequencies are important for classifying facial actions. Classification with the three highest frequencies of the Gabor representation ($\nu = 0, 1, 2$; 15, 18, 21 cycles/face) was 93%, compared to 84% with the three lowest frequencies ($\nu = 2, 3, 4$; 9, 12, 15 cycles/face).

The two representations that significantly outperformed the others, Gabor and independent components, employed local basis images, which supports recent findings that local basis images are important for face image analysis (14) (10) (12). The local property alone, however, does not account for the good performance of these two representations, as LFA performed no better than PCA on this classification task, nor did local implementations of PCA. In addition to spatial locality, the ICA representation and the Gabor filter representation share the property of redundancy reduction, and have relationships to representations in the visual cortex. The response properties of primary visual cortical cells are closely modeled by a bank of Gabor kernels. Relationships have been demonstrated between Gabor kernels and independent component analysis. Bell & Sejnowski (5) found using ICA that the kernels that produced independent outputs from natural scenes were spatially local, oriented edge kernels, similar to a bank of Gabor kernels. It has also been shown that Gabor filter outputs of natural images are at least pairwise independent (16).

The Gabor wavelets and ICA each provide a way to represent face images as a linear superposition of basis functions. Gabor wavelets employ a set of pre-defined basis images, whereas ICA learns basis images that are adapted to the data ensemble. The Gabor wavelets are not specialized to the particular data ensemble, but would be advantageous when the amount of data is small. The ICA representation has the advantage of employing two orders of magnitude fewer basis images. This can be an advantage for classifiers that involve parameter estimation. In addition, the ICA representation takes 90% less CPU time than the Gabor representation to compute once the ICA weights are learned, which need only be done once.

In summary, this comparison provided converging support for using local basis images, high spatial frequencies, and statistical independence for classifying facial actions. Best performances were obtained with Gabor wavelet decomposition and independent component analysis. These two representations employ graylevel basis functions that share properties of spatial locality and independence, and have relationships to the response properties of visual cortical neurons.

An outstanding issue is whether our findings depend on the simple recognition engines we employed. Would a smarter recognition engine alter the relative performances? Our preliminary investigations suggest that this is not the case. Hidden Markov models (HMMs) were trained on the PCA, ICA, and Gabor representations. The Gabor representation was reduced to 75 dimensions using PCA before training the HMM. The HMM improved classification performance with ICA to 96.3%, and it did not change the overall findings, as it gave similar percent improvements to the PCA and PCA-reduced Gabor representations over their nearest neighbor performances. The dimensionality reduction of the Gabor representation, however, caused its nearest neighbor performance to drop, and the performance with the HMM was 92.7%. The lower dimensionality of the ICA representation was an advantage when employing the HMM.

7 References

[1] M.S. Bartlett. Face Image Analysis by Unsupervised Learning and Redundancy Reduction. PhD thesis, University of California, San Diego, 1998.
[2] M.S. Bartlett, H.M. Lades, and T.J. Sejnowski. Independent component representations for face recognition. In B. Rogowitz and T. Pappas, editors, Proceedings of the SPIE Symposium on Electronic Imaging: Science and Technology; Human Vision and Electronic Imaging III, volume 3299, pages 528-539, San Jose, CA, 1998. SPIE Press.
[3] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, 1997.
[4] A.J. Bell and T.J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[5] A.J. Bell and T.J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[6] R. Brunelli and T. Poggio. Face recognition: Features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10):1042-1052, 1993.
[7] J.F. Cohn, A.J. Zlochower, J.J. Lien, Y-T. Wu, and T. Kanade. Automated face coding: A computer-vision based method of facial expression analysis. Psychophysiology, 35(1):35-43, 1999.
[8] G. Donato, M. Bartlett, J. Hager, P. Ekman, and T. Sejnowski. Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):974-989, 1999.
[9] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA, 1978.
[10] M.S. Gray, J. Movellan, and T.J. Sejnowski. A comparison of local versus global image decomposition for visual speechreading. In Proceedings of the 4th Joint Symposium on Neural Computation, pages 92-98. Institute for Neural Computation, La Jolla, CA, 92093-0523, 1997.
[11] M. Lades, J. Vorbruggen, J. Buhmann, J. Lange, W. Konen, C. von der Malsburg, and R. Wurtz. Distortion invariant object recognition in the dynamic link architecture. IEEE Transactions on Computers, 42(3):300-311, 1993.
[12] D.D. Lee and H.S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788-791, 1999.
[13] J.R. Movellan. Visual speech recognition with stochastic networks. In G. Tesauro, D.S. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 851-858. MIT Press, Cambridge, MA, 1995.
[14] C. Padgett and G. Cottrell. Representing face images for emotion classification. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9, Cambridge, MA, 1997. MIT Press.
[15] P.S. Penev and J.J. Atick. Local feature analysis: a general statistical theory for object representation. Network: Computation in Neural Systems, 7(3):477-500, 1996.
[16] E.P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, November 2-5, 1997.
[17] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
857
1,786
Actor-Critic Algorithms

Vijay R. Konda and John N. Tsitsiklis
Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, 02139. konda@mit.edu, jnt@mit.edu

Abstract

We propose and analyze a class of actor-critic algorithms for simulation-based optimization of a Markov decision process over a parameterized family of randomized stationary policies. These are two-time-scale algorithms in which the critic uses TD learning with a linear approximation architecture and the actor is updated in an approximate gradient direction based on information provided by the critic. We show that the features for the critic should span a subspace prescribed by the choice of parameterization of the actor. We conclude by discussing convergence properties and some open problems.

1 Introduction

The vast majority of Reinforcement Learning (RL) [9] and Neuro-Dynamic Programming (NDP) [1] methods fall into one of the following two categories:

(a) Actor-only methods work with a parameterized family of policies. The gradient of the performance, with respect to the actor parameters, is directly estimated by simulation, and the parameters are updated in a direction of improvement [4, 5, 8, 13]. A possible drawback of such methods is that the gradient estimators may have a large variance. Furthermore, as the policy changes, a new gradient is estimated independently of past estimates. Hence, there is no "learning," in the sense of accumulation and consolidation of older information.

(b) Critic-only methods rely exclusively on value function approximation and aim at learning an approximate solution to the Bellman equation, which will then hopefully prescribe a near-optimal policy. Such methods are indirect in the sense that they do not try to optimize directly over a policy space. A method of this type may succeed in constructing a "good" approximation of the value function, yet lack reliable guarantees in terms of near-optimality of the resulting policy.

Actor-critic methods aim at combining the strong points of actor-only and critic-only methods. The critic uses an approximation architecture and simulation to learn a value function, which is then used to update the actor's policy parameters in a direction of performance improvement. Such methods, as long as they are gradient-based, may have desirable convergence properties, in contrast to critic-only methods for which convergence is guaranteed in very limited settings. They hold the promise of delivering faster convergence (due to variance reduction), when compared to actor-only methods. On the other hand, theoretical understanding of actor-critic methods has been limited to the case of lookup table representations of policies [6].

In this paper, we propose some actor-critic algorithms and provide an overview of a convergence proof. The algorithms are based on an important observation. Since the number of parameters that the actor has to update is relatively small (compared to the number of states), the critic need not attempt to compute or approximate the exact value function, which is a high-dimensional object. In fact, we show that the critic should ideally compute a certain "projection" of the value function onto a low-dimensional subspace spanned by a set of "basis functions" that are completely determined by the parameterization of the actor.
Finally, as the analysis in [11] suggests for TD algorithms, our algorithms can be extended to the case of arbitrary state and action spaces as long as certain ergodicity assumptions are satisfied. We close this section by noting that ideas similar to ours have been presented in the simultaneous and independent work of Sutton et al. [10].

2 Markov decision processes and parameterized family of RSPs

Consider a Markov decision process with finite state space $S$ and finite action space $A$. Let $g : S \times A \to \mathbb{R}$ be a given cost function. A randomized stationary policy (RSP) is a mapping $\mu$ that assigns to each state $x$ a probability distribution over the action space $A$. We consider a set of randomized stationary policies $\mathbb{P} = \{\mu_\theta;\ \theta \in \mathbb{R}^n\}$, parameterized in terms of a vector $\theta$. For each pair $(x, u) \in S \times A$, $\mu_\theta(x, u)$ denotes the probability of taking action $u$ when the state $x$ is encountered, under the policy corresponding to $\theta$. Let $p_{xy}(u)$ denote the probability that the next state is $y$, given that the current state is $x$ and the current action is $u$. Note that under any RSP, the sequence of states $\{X_n\}$ and of state-action pairs $\{X_n, U_n\}$ of the Markov decision process form Markov chains with state spaces $S$ and $S \times A$, respectively. We make the following assumptions about the family of policies $\mathbb{P}$.

(A1) For all $x \in S$ and $u \in A$ the map $\theta \mapsto \mu_\theta(x, u)$ is twice differentiable with bounded first and second derivatives. Furthermore, there exists an $\mathbb{R}^n$-valued function $\psi_\theta(x, u)$ such that $\nabla \mu_\theta(x, u) = \mu_\theta(x, u)\,\psi_\theta(x, u)$, where the mapping $\theta \mapsto \psi_\theta(x, u)$ is bounded and has bounded first derivatives for any fixed $x$ and $u$.

(A2) For each $\theta \in \mathbb{R}^n$, the Markov chains $\{X_n\}$ and $\{X_n, U_n\}$ are irreducible and aperiodic, with stationary probabilities $\pi_\theta(x)$ and $\eta_\theta(x, u) = \pi_\theta(x)\,\mu_\theta(x, u)$, respectively, under the RSP $\mu_\theta$.

In reference to Assumption (A1), note that whenever $\mu_\theta(x, u)$ is nonzero we have
$$\psi_\theta(x, u) = \frac{\nabla \mu_\theta(x, u)}{\mu_\theta(x, u)} = \nabla \ln \mu_\theta(x, u).$$
Consider the average cost function $\lambda : \mathbb{R}^n \to \mathbb{R}$, given by
$$\lambda(\theta) = \sum_{x \in S,\, u \in A} g(x, u)\,\eta_\theta(x, u).$$
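Assumption (A1) is satisfied, for example, by a softmax (Gibbs) parameterization, for which $\psi_\theta$ has a closed form. The sketch below shows one such choice; the per-state feature matrix is an assumption for illustration, not part of the paper.

```python
import numpy as np

def mu_theta(theta, features):
    """Softmax RSP: mu_theta(x, u) proportional to exp(theta . phi(x, u)).

    features : (n_actions, n) array whose row u is a feature vector phi(x, u).
    """
    scores = features @ theta
    scores -= scores.max()              # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def psi_theta(theta, features, u):
    """Score function psi_theta(x, u) = grad log mu_theta(x, u)
    = phi(x, u) - E_{u' ~ mu_theta}[phi(x, u')], which is bounded as (A1) requires."""
    p = mu_theta(theta, features)
    return features[u] - p @ features
```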
For any $\theta \in \mathbb{R}^n$, we define the inner product $\langle \cdot, \cdot \rangle_\theta$ of two real valued functions $q_1, q_2$ on $S \times A$, viewed as vectors in $\mathbb{R}^{|S||A|}$, by
$$\langle q_1, q_2 \rangle_\theta = \sum_{x,u} \eta_\theta(x, u)\,q_1(x, u)\,q_2(x, u).$$
With this notation we can rewrite the formula (1) as
$$\frac{\partial}{\partial \theta^i}\lambda(\theta) = \langle q_\theta, \psi_\theta^i \rangle_\theta, \qquad i = 1, \ldots, n.$$
Let $\|\cdot\|_\theta$ denote the norm induced by this inner product on $\mathbb{R}^{|S||A|}$. For each $\theta \in \mathbb{R}^n$ let $\Psi_\theta$ denote the span of the vectors $\{\psi_\theta^i;\ 1 \le i \le n\}$ in $\mathbb{R}^{|S||A|}$. (This is the same as the set of all functions $f$ on $S \times A$ of the form $f(x, u) = \sum_{i=1}^n a_i \psi_\theta^i(x, u)$, for some scalars $a_1, \ldots, a_n$.)

Note that although the gradient of $\lambda$ depends on the q-function, which is a vector in a possibly very high dimensional space $\mathbb{R}^{|S||A|}$, the dependence is only through its inner products with vectors in $\Psi_\theta$. Thus, instead of "learning" the function $q_\theta$, it would suffice to learn the projection of $q_\theta$ on the subspace $\Psi_\theta$. Indeed, let $\Pi_\theta : \mathbb{R}^{|S||A|} \to \Psi_\theta$ be the projection operator defined by
$$\Pi_\theta q = \arg\min_{\hat q \in \Psi_\theta} \|q - \hat q\|_\theta.$$
Since
$$\langle q_\theta, \psi_\theta \rangle_\theta = \langle \Pi_\theta q_\theta, \psi_\theta \rangle_\theta, \qquad (2)$$
it is enough to compute the projection of $q_\theta$ onto $\Psi_\theta$.

3 Actor-critic algorithms

We view actor-critic algorithms as stochastic gradient algorithms on the parameter space of the actor. When the actor parameter vector is $\theta$, the job of the critic is to compute an approximation of the projection $\Pi_\theta q_\theta$ of $q_\theta$ onto $\Psi_\theta$. The actor uses this approximation to update its policy in an approximate gradient direction. The analysis in [11, 12] shows that this is precisely what TD algorithms try to do, i.e., to compute the projection of an exact value function onto a subspace spanned by feature vectors. This allows us to implement the critic by using a TD algorithm. (Note, however, that other types of critics are possible, e.g., based on batch solution of least squares problems, as long as they aim at computing the same projection.)

We note some minor differences with the common usage of TD. In our context, we need the projection of q-functions, rather than value functions. But this is easily achieved by replacing the Markov chain $\{x_t\}$ in [11, 12] by the Markov chain $\{X_n, U_n\}$. A further difference is that [11, 12] assume that the control policy and the feature vectors are fixed. In our algorithms, the control policy as well as the features need to change as the actor updates its parameters. As shown in [6, 2], this need not pose any problems, as long as the actor parameters are updated on a slower time scale.

We are now ready to describe two actor-critic algorithms, which differ only as far as the critic updates are concerned. In both variants, the critic is a TD algorithm with a linearly parameterized approximation architecture for the q-function, of the form
$$Q^r_\theta(x, u) = \sum_{j=1}^m r^j \phi_\theta^j(x, u),$$
where $r = (r^1, \ldots, r^m) \in \mathbb{R}^m$ denotes the parameter vector of the critic. The features $\phi_\theta^j$, $j = 1, \ldots, m$, used by the critic are dependent on the actor parameter vector $\theta$ and are chosen such that their span in $\mathbb{R}^{|S||A|}$, denoted by $\Phi_\theta$, contains $\Psi_\theta$. Note that the formula (2) still holds if $\Pi_\theta$ is redefined as projection onto $\Phi_\theta$, as long as $\Phi_\theta$ contains $\Psi_\theta$. The most straightforward choice would be to let $m = n$ and $\phi_\theta^i = \psi_\theta^i$ for each $i$. Nevertheless, we allow the possibility that $m > n$ and $\Phi_\theta$ properly contains $\Psi_\theta$, so that the critic uses more features than are actually necessary.
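As a concrete illustration of the projection (2) (again, not from the paper), the following self-contained Python sketch computes $\Pi_\theta q$ by weighted least squares and verifies the inner-product identity; the distribution $\eta$, the q-function and the feature matrix are made-up stand-ins for the paper's objects, with states and actions enumerated so that functions on $S \times A$ are flat vectors.

```python
# Weighted least-squares projection onto span(Psi); all inputs are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, n = 12, 3                        # |S||A| and number of actor parameters
eta = rng.dirichlet(np.ones(N))     # stationary state-action probabilities
q = rng.standard_normal(N)          # an arbitrary q-function
Psi = rng.standard_normal((N, n))   # columns span the subspace Psi_theta

def inner(a, b):
    return float(np.sum(eta * a * b))            # <a, b>_theta

# Pi q = argmin over span(Psi) of ||q - qhat||_theta: weighted normal equations.
W = np.diag(eta)
coef = np.linalg.solve(Psi.T @ W @ Psi, Psi.T @ W @ q)
proj = Psi @ coef

# Identity (2): <q, psi^i>_theta = <Pi q, psi^i>_theta for every i.
print([np.isclose(inner(q, Psi[:, i]), inner(proj, Psi[:, i])) for i in range(n)])
```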
This added flexibility may turn out to be useful in a number of ways:

1. It is possible that, for certain values of $\theta$, the features $\psi_\theta$ are either close to zero or almost linearly dependent. For these values of $\theta$, the operator $\Pi_\theta$ becomes ill-conditioned and the algorithms can become unstable. This might be avoided by using a richer set of features $\phi_\theta^j$.

2. For the second algorithm that we propose (TD($\alpha$), $\alpha < 1$), the critic can only compute an approximate - rather than exact - projection. The use of additional features can result in a reduction of the approximation error.

Along with the parameter vector $r$, the critic stores some auxiliary parameters: these are a (scalar) estimate $\lambda$ of the average cost, and an $m$-vector $z$ which represents Sutton's eligibility trace [1, 9]. The actor and critic updates take place in the course of a simulation of a single sample path of the controlled Markov chain. Let $r_k, z_k, \lambda_k$ be the parameters of the critic, and let $\theta_k$ be the parameter vector of the actor, at time $k$. Let $(x_k, u_k)$ be the state-action pair at that time. Let $x_{k+1}$ be the new state, obtained after action $u_k$ is applied. A new action $u_{k+1}$ is generated according to the RSP corresponding to the actor parameter vector $\theta_k$. The critic carries out an update similar to the average cost temporal-difference method of [12]:
$$\lambda_{k+1} = \lambda_k + \gamma_k\big(g(x_k, u_k) - \lambda_k\big),$$
$$r_{k+1} = r_k + \gamma_k\big(g(x_k, u_k) - \lambda_k + Q^{r_k}_{\theta_k}(x_{k+1}, u_{k+1}) - Q^{r_k}_{\theta_k}(x_k, u_k)\big)z_k.$$
(Here, $\gamma_k$ is a positive stepsize parameter.) The two variants of the critic use different ways of updating $z_k$:

TD(1) Critic: Let $x^*$ be a state in $S$. Then,
$$z_{k+1} = \begin{cases} z_k + \phi_{\theta_k}(x_{k+1}, u_{k+1}), & \text{if } x_{k+1} \ne x^*, \\ \phi_{\theta_k}(x_{k+1}, u_{k+1}), & \text{otherwise.} \end{cases}$$

TD($\alpha$) Critic, $0 \le \alpha < 1$:
$$z_{k+1} = \alpha z_k + \phi_{\theta_k}(x_{k+1}, u_{k+1}).$$

Actor: Finally, the actor updates its parameter vector by letting
$$\theta_{k+1} = \theta_k - \beta_k\,\Gamma(r_k)\,Q^{r_k}_{\theta_k}(x_{k+1}, u_{k+1})\,\psi_{\theta_k}(x_{k+1}, u_{k+1}).$$
Here, $\beta_k$ is a positive stepsize and $\Gamma(r_k) > 0$ is a normalization factor satisfying:

(A3) $\Gamma(\cdot)$ is Lipschitz continuous.

(A4) There exists $C > 0$ such that
$$\Gamma(r) \le \frac{C}{1 + \|r\|}.$$

The above presented algorithms are only two out of many variations. For instance, one could also consider "episodic" problems in which one starts from a given initial state and runs the process until a random termination time (at which time the process is reinitialized at $x^*$), with the objective of minimizing the expected cost until termination. In this setting, the average cost estimate $\lambda_k$ is unnecessary and is removed from the critic update formula. If the critic parameter $r_k$ were to be reinitialized each time that $x^*$ is entered, one would obtain a method closely related to Williams' REINFORCE algorithm [13]. Such a method does not involve any value function learning, because the observations during one episode do not affect the critic parameter $r$ during another episode. In contrast, in our approach, the observations from all past episodes affect the current critic parameter $r$, and in this sense the critic is "learning". This can be advantageous because, as long as $\theta$ is slowly changing, the observations from recent episodes carry useful information on the q-function under the current policy.

4 Convergence of actor-critic algorithms

Since our actor-critic algorithms are gradient-based, one cannot expect to prove convergence to a globally optimal policy (within the given class of RSP's). The best that one could hope for is the convergence of $\nabla\lambda(\theta)$ to zero; in practical terms, this will usually translate to convergence to a local minimum of $\lambda(\theta)$.
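Before discussing convergence, the updates above can be made concrete. The following self-contained Python sketch simulates the TD($\alpha$) variant on a toy MDP with a softmax RSP; the MDP, the features, the stepsize schedules and the choice $\Gamma(r) = 1/(1 + \|r\|)$ (consistent with (A3)-(A4)) are all illustrative assumptions, and we take $\phi = \psi$ (i.e., $m = n$).

```python
# Schematic single-trajectory TD(alpha) actor-critic; every constant is illustrative.
import numpy as np

rng = np.random.default_rng(2)
nS, nA, n, alpha = 3, 2, 4, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # transition probabilities
g = rng.random((nS, nA))                         # cost function
F = rng.standard_normal((nS, nA, n))             # logit features (assumed)

def policy(theta):
    logits = F @ theta
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def psi(x, u, theta):
    mu = policy(theta)
    return F[x, u] - mu[x] @ F[x]                # grad log mu_theta(x, u)

theta, r, z, lam = np.zeros(n), np.zeros(n), np.zeros(n), 0.0
x = 0
u = rng.choice(nA, p=policy(theta)[x])
for k in range(1, 50001):
    gam, beta = 1.0 / k**0.6, 1.0 / k            # two time scales: beta/gam -> 0
    x1 = rng.choice(nS, p=P[x, u])
    u1 = rng.choice(nA, p=policy(theta)[x1])
    phi0, phi1 = psi(x, u, theta), psi(x1, u1, theta)
    d = g[x, u] - lam + r @ phi1 - r @ phi0      # temporal difference
    lam += gam * (g[x, u] - lam)                 # average cost estimate
    r += gam * d * z                             # critic update
    z = alpha * z + phi1                         # TD(alpha) eligibility trace
    Gam = 1.0 / (1.0 + np.linalg.norm(r))        # normalization Gamma(r)
    theta -= beta * Gam * (r @ phi1) * phi1      # actor update
    x, u = x1, u1

print(lam)   # running estimate of the average cost under the final policy
```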
Actually, because the TD($\alpha$) critic will generally converge to an approximation of the desired projection of the value function, the corresponding convergence result is necessarily weaker, only guaranteeing that $\nabla\lambda(\theta_k)$ becomes small (infinitely often). Let us now introduce some further assumptions.

(A5) For each $\theta \in \mathbb{R}^n$, we define an $m \times m$ matrix $G(\theta)$ by
$$G(\theta) = \sum_{x,u} \eta_\theta(x, u)\,\phi_\theta(x, u)\,\phi_\theta(x, u)^T.$$
We assume that $G(\theta)$ is uniformly positive definite, that is, there exists some $\epsilon_1 > 0$ such that for all $r \in \mathbb{R}^m$ and $\theta \in \mathbb{R}^n$,
$$r^T G(\theta) r \ge \epsilon_1 \|r\|^2.$$

(A6) We assume that the stepsize sequences $\{\gamma_k\}$, $\{\beta_k\}$ are positive, nonincreasing, and satisfy
$$\delta_k > 0\ \ \forall k, \qquad \sum_k \delta_k = \infty, \qquad \sum_k \delta_k^2 < \infty,$$
where $\delta_k$ stands for either $\beta_k$ or $\gamma_k$. We also assume that $\beta_k/\gamma_k \to 0$.

Note that the last assumption requires that the actor parameters be updated at a time scale slower than that of the critic.

Theorem 2. In an actor-critic algorithm with a TD(1) critic,
$$\liminf_k \|\nabla\lambda(\theta_k)\| = 0 \quad \text{w.p. 1.}$$
Furthermore, if $\{\theta_k\}$ is bounded w.p. 1 then
$$\lim_k \|\nabla\lambda(\theta_k)\| = 0 \quad \text{w.p. 1.}$$

Theorem 3. For every $\epsilon > 0$, there exists $\alpha$ sufficiently close to 1, such that
$$\liminf_k \|\nabla\lambda(\theta_k)\| \le \epsilon \quad \text{w.p. 1.}$$

Note that the theoretical guarantees appear to be stronger in the case of the TD(1) critic. However, we expect that TD($\alpha$) will perform better in practice because of much smaller variance for the parameter $r_k$. (Similar issues arise when considering actor-only algorithms. The experiments reported in [7] indicate that introducing a forgetting factor $\alpha < 1$ can result in much faster convergence, with very little loss of performance.)

We now provide an overview of the proofs of these theorems. Since $\beta_k/\gamma_k \to 0$, the size of the actor updates becomes negligible compared to the size of the critic updates. Therefore the actor looks stationary, as far as the critic is concerned. Thus, the analysis in [1] for the TD(1) critic and the analysis in [12] for the TD($\alpha$) critic (with $\alpha < 1$) can be used, with appropriate modifications, to conclude that the critic's approximation of $\Pi_{\theta_k} q_{\theta_k}$ will be "asymptotically correct". If $\bar r(\theta)$ denotes the value to which the critic converges when the actor parameters are fixed at $\theta$, then the update for the actor can be rewritten as
$$\theta_{k+1} = \theta_k - \beta_k\,\Gamma(\bar r(\theta_k))\,Q^{\bar r(\theta_k)}_{\theta_k}(x_{k+1}, u_{k+1})\,\psi_{\theta_k}(x_{k+1}, u_{k+1}) + \beta_k e_k,$$
where $e_k$ is an error that becomes asymptotically negligible. At this point, standard proof techniques for stochastic approximation algorithms can be used to complete the proof.

5 Conclusions

The key observation in this paper is that in actor-critic methods, the actor parameterization and the critic parameterization need not, and should not, be chosen independently. Rather, an appropriate approximation architecture for the critic is directly prescribed by the parameterization used in the actor.

Capitalizing on the above observation, we have presented a class of actor-critic algorithms, aimed at combining the advantages of actor-only and critic-only methods. In contrast to existing actor-critic methods, our algorithms apply to high-dimensional problems (they do not rely on lookup table representations), and are mathematically sound in the sense that they possess certain convergence properties.

Acknowledgments: This research was partially supported by the NSF under grant ECS-9873451, and by the AFOSR under grant F49620-99-1-0320.

References

[1] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.

[2] V. S. Borkar.
Stochastic approximation with two time scales. Systems and Control Letters, 29:291-294, 1996.

[3] X. R. Cao and H. F. Chen. Perturbation realization, potentials, and sensitivity analysis of Markov processes. IEEE Transactions on Automatic Control, 42:1382-1393, 1997.

[4] P. W. Glynn. Stochastic approximation for Monte Carlo optimization. In Proceedings of the 1986 Winter Simulation Conference, pages 285-289, 1986.

[5] T. Jaakkola, S. P. Singh, and M. I. Jordan. Reinforcement learning algorithms for partially observable Markov decision problems. In Advances in Neural Information Processing Systems, volume 7, pages 345-352, San Francisco, CA, 1995. Morgan Kaufmann.

[6] V. R. Konda and V. S. Borkar. Actor-critic like learning algorithms for Markov decision processes. SIAM Journal on Control and Optimization, 38(1):94-123, 1999.

[7] P. Marbach. Simulation based optimization of Markov reward processes. PhD thesis, Massachusetts Institute of Technology, 1998.

[8] P. Marbach and J. N. Tsitsiklis. Simulation-based optimization of Markov reward processes. Submitted to IEEE Transactions on Automatic Control.

[9] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1995.

[10] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In this proceedings.

[11] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674-690, 1997.

[12] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. Automatica, 35(11):1799-1808, 1999.

[13] R. Williams. Simple statistical gradient following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
Speech Modelling Using Subspace and EM Techniques

Gavin Smith
Cambridge University Engineering Department, Cambridge CB2 1PZ, England. gas1003@eng.cam.ac.uk

Joao FG de Freitas (work done while in the Cambridge Engineering Dept., UK)
Computer Science Division, 487 Soda Hall, UC Berkeley, CA 94720-1776, USA. jfgf@cs.berkeley.edu

Tony Robinson
Cambridge University Engineering Department, Cambridge CB2 1PZ, England. ajr@eng.cam.ac.uk

Mahesan Niranjan
Computer Science, Sheffield University, Sheffield S1 4DP, England. m.niranjan@dcs.shef.ac.uk

Abstract

The speech waveform can be modelled as a piecewise-stationary linear stochastic state space system, and its parameters can be estimated using an expectation-maximisation (EM) algorithm. One problem is the initialisation of the EM algorithm. Standard initialisation schemes can lead to poor formant trajectories. These trajectories however are important for vowel intelligibility. The aim of this paper is to investigate the suitability of subspace identification methods to initialise EM. The paper compares the subspace state space system identification (4SID) method with the EM algorithm. The 4SID and EM methods are similar in that they both estimate a state sequence (but using Kalman filters and Kalman smoothers respectively), and then estimate parameters (but using least-squares and maximum likelihood respectively). The similarity of 4SID and EM motivates the use of 4SID to initialise EM. Also, 4SID is non-iterative and requires no initialisation, whereas EM is iterative and requires initialisation. However 4SID is sub-optimal compared to EM in a probabilistic sense. During experiments on real speech, 4SID methods compare favourably with conventional initialisation techniques. They produce smoother formant trajectories, have greater frequency resolution, and produce higher likelihoods.

1 Introduction

This paper models speech using a stochastic state space model, where model parameters are estimated using the expectation-maximisation (EM) technique. One problem is the initialisation of the EM algorithm. Standard initialisation schemes can lead to poor formant trajectories. These trajectories are however important for vowel intelligibility. This paper investigates the suitability of subspace state space system identification (4SID) techniques [10, 11], which are popular in system identification, for EM initialisation.

Speech is split into fixed-length, overlapping frames. Overlap encourages temporally smoother parameter transitions between frames. Due to the slow non-stationary behaviour of speech, each frame of speech is assumed quasi-stationary and represented as a linear time-invariant stochastic state space (SS) model:
$$x_{t+1} = A x_t + w_t \qquad (1)$$
$$y_t = C x_t + v_t \qquad (2)$$
The system order is $p$. $x_t \in \mathbb{R}^{p \times 1}$ is the state vector. $A \in \mathbb{R}^{p \times p}$ and $C \in \mathbb{R}^{1 \times p}$ are system parameters. The output $y_t \in \mathbb{R}$ is the speech signal at the microphone. Process and observation noises are modelled as white zero-mean Gaussian stationary noises $w_t \in \mathbb{R}^{p \times 1} \sim N(0, Q)$ and $v_t \in \mathbb{R} \sim N(0, R)$ respectively. The problem definition is to estimate the parameters $\theta = (A, C, Q, R)$ from the speech $y_t$ only.

The structure of the paper is as follows. The theory section describes EM and 4SID applied to the parameter estimation of the above SS model. The similarity of 4SID and EM motivates the use of 4SID to initialise EM. Experiments on real speech then compare 4SID with more conventional initialisation methods. The discussion then compares 4SID with EM.
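For concreteness, here is a minimal Python sketch (not from the paper) of simulating one quasi-stationary frame from the model of equations (1) and (2). The randomly chosen stable parameters, the noise levels, and the frame length (15 ms at 16 kHz gives 240 samples, matching the experimental setup later in the paper) are illustrative assumptions only.

```python
# Simulate one frame from the stochastic SS model (1)-(2); parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
p, N = 8, 240                                      # order 8; 15 ms at 16 kHz
A = rng.standard_normal((p, p))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))   # rescale so A is stable
C = rng.standard_normal((1, p))
Q, R = 0.1 * np.eye(p), 0.01

x = np.zeros(p)
y = np.empty(N)
for t in range(N):
    y[t] = float(C @ x) + np.sqrt(R) * rng.standard_normal()   # eq. (2)
    x = A @ x + rng.multivariate_normal(np.zeros(p), Q)        # eq. (1)
print(y[:5])
```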
2 Theory

2.1 The Expectation-Maximisation (EM) Technique

Given a sequence of $N$ observations $y_{1:N}$ of a signal such as speech, the maximum likelihood estimate for the parameters is $\theta_{ML} = \arg\max_\theta p(y_{1:N}|\theta)$. EM breaks the maximisation of this potentially difficult likelihood function down into an iterative maximisation of a simpler likelihood function, generating a new estimate $\theta_k$ each iteration. Rewriting $p(y_{1:N}|\theta)$ in terms of a hidden state sequence $x_{1:N}$, and taking expectations over $p(x_{1:N}|y_{1:N}, \theta_k)$:
$$\log p(y_{1:N}|\theta) = \log p(x_{1:N}, y_{1:N}|\theta) - \log p(x_{1:N}|y_{1:N}, \theta) \qquad (3)$$
$$\log p(y_{1:N}|\theta) = E_k[\log p(x_{1:N}, y_{1:N}|\theta)] - E_k[\log p(x_{1:N}|y_{1:N}, \theta)] \qquad (4)$$
Iterative maximisation of the first expectation in equation 4 guarantees an increase in $\log p(y_{1:N}|\theta)$:
$$\theta_{k+1} = \arg\max_\theta E_k[\log p(x_{1:N}, y_{1:N}|\theta)] \qquad (5)$$
This converges to a local or global maximum depending on the initial parameter estimate $\theta_0$. Refer to [8] for more details. EM can thus be applied to the stochastic state space model of equations 1 and 2 to determine optimal parameters $\theta$. An explanation is given in [3]. The EM algorithm applied to the SS system consists of two stages per iteration. Firstly, given current parameter estimates, states are estimated using a Kalman smoother. Secondly, given these states, new parameters are estimated by maximising the expected log likelihood function. We employ the Rauch-Tung-Striebel formulation of the Kalman smoother [2].

2.2 The State-Space Model

Equations 1 and 2 can be cast in block matrix form and are termed the state sequence and block output equations respectively [10]. Note that the use of blocking and fixed-length signals applies restrictions to the general model in section 1. $i > p$ is the block size.
$$X_{i+1,i+j} = A^i X_{1,j} + \Delta_i^w W_{1|i} \qquad (6)$$
$$Y_{1|i} = \Gamma_i X_{1,j} + H_i W_{1|i} + V_{1|i} \qquad (7)$$
$X_{i+1,i+j}$ is a state sequence matrix; its columns are the state vectors from time $(i+1)$ to $(i+j)$. $X_{1,j}$ is similarly defined. $Y_{1|i}$ is a Hankel matrix of outputs from time 1 to $(i+j-1)$. $W$ and $V$ are similarly defined. $\Delta_i^w$ is a reversed extended controllability-type matrix, $\Gamma_i$ is the extended observability matrix and $H_i$ is a Toeplitz matrix. These are all defined below, where $I$ is a $p \times p$ identity matrix:
$$X_{1,j} \triangleq [x_1\ \ x_2\ \ x_3\ \ \cdots\ \ x_j], \qquad \Delta_i^w \triangleq [A^{i-1}\ \ A^{i-2}\ \ \cdots\ \ I],$$
$$Y_{1|i} \triangleq \begin{bmatrix} y_1 & y_2 & \cdots & y_j \\ y_2 & y_3 & \cdots & y_{j+1} \\ \vdots & & & \vdots \\ y_i & y_{i+1} & \cdots & y_{i+j-1} \end{bmatrix}, \qquad \Gamma_i \triangleq \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{i-1} \end{bmatrix}, \qquad H_i \triangleq \begin{bmatrix} 0 & & & \\ C & 0 & & \\ \vdots & \ddots & \ddots & \\ CA^{i-2} & \cdots & C & 0 \end{bmatrix}.$$
A sequence of outputs can be separated into two block output equations containing past and future outputs, denoted with subscripts $p$ and $f$ respectively. With $Y_p \triangleq Y_{1|i}$, $Y_f \triangleq Y_{i+1|2i}$, and similarly for $W$ and $V$, and $X_p \triangleq X_{1,j}$, $X_f \triangleq X_{i+1,i+j}$, past and future are related by the equations
$$X_f = A^i X_p + \Delta_i^w W_p \qquad (8)$$
$$Y_p = \Gamma_i X_p + H_i W_p + V_p \qquad (9)$$
$$Y_f = \Gamma_i X_f + H_i W_f + V_f \qquad (10)$$

2.3 Subspace State Space System Identification (4SID) Techniques

Comments throughout this section on 4SID are largely taken from the work of Van Overschee and De Moor [10]. 4SID methods are related to instrumental variable (IV) methods [11]. 4SID algorithms are composed of two stages. Stage one involves the low-rank approximation and estimation of the extended observability matrix directly from the output
The singular value decomposition of Y, /'J p allows the observability and state sequence matrices to be estimated to within a similarity transform from the column and row spaces respectively. From these two matrices, system parameters (A, c, Q, R) can be determined by least-squares. There are two interesting comments. Firstly, the orthogonal projection from stage one coincides with a minimum error between true data Y, and its linear prediction from Y p in the Frobenius norm. Greater flexibility is obtained by weighting the projection with matrices WI and W 2 and analysing this: WI (YJi'J p )W2 ? 4SID and IV methods differ with respect to these weighting matrices. Weighting is similar to prefiltering the observations prior to analysis to preferentially weight some frequency domain, as is common in identification theory [6]. Secondly, the state estimates from stage two can be considered as outputs from a parallel bank of Kalman filters, each one estimating a state from the previous i observations, and initialised using zero conditions. The particular subspace algorithm and software used in this paper is the sto-pos algorithm as detailed in [10]. Although this algorithm introduces a small bias into some of the parameter estimates, it guarantees positive realness of the covariance sequence, which in turn guarantees the definition of a forward innovations model. 3 Experiments Experiments are conducted on the phrase "in arithmetic", spoken by an adult male. The speech waveform is obtained from the Eurom 0 database [4] and sampled at 16 kHz. The speech waveform is divided into fixed-length, overlapping frames, the mean is subtracted and then a hamming window is applied. Frames are 15 ms in duration, shifted 7.5 ms each frame. Speech is modelled as detailed in section 1. All models are order 8. A frame is assumed silent and no analysis done when the mean energy per sample is less than an empirically defined threshold. For the EM algorithm, a modified version of the software in [3] is used. The initial state vector and covariance matrix are set to zero and identity respectively, and 50 iterations are applied. Q is updated by taking its diagonal only in the M-step for numerical stability (see [3]). In these experiments, three schemes are compared at initialising parameters for the EM algorithm, that is the estimation of 9 0 . These schemes are compared in terms of their formant trajectories relative to the spectrogram and their likelihoods. The three schemes are ? 4SID. This is the subspace method in section 2.3 with block size 16 . ? ARMA. This estimates 9 0 using the customised Matlab armax function!, which models the speech waveform as an autoregressive moving average (ARMA) process, with order 8 polynomials. I armax minimises a robustified quadratic prediction error criterion using an iterative GaussNewton algorithm, initialised using a four-stage least-squares instrumental variables algorithm [7]. G. Smith, J F. G. d. Freitas, T. Robinson and M Niranjan 800 ? AR(l). This uses a simplistic method, and models the speech waveform as a first order autoregressive (AR) process with some randomness introduced into the estimation. It still initialises all parameters fully2. Results are shown in Figures 1 and 2. Figure 1 shows the speech waveform, spectrogram and formant trajectories for EM with all three initialisation schemes. Here formant frequencies are derived from the phase of the positive phase eigenvalues of A after 50 iterations of EM. 
Comparison with the spectrogram shows that for this order 8 model, 4SID-EM produces best formant trajectories. Figure 2 shows mean average plots of likelihood against EM iteration number for each initialisation scheme. 4SID-EM gives greater likelihoods than ARMA-EM and AR(l)-EM. The difference in formant trajectories between subspaceEM and ARMA-EM despite the high likelihoods, demonstrates the multi-modality of the likelihood function. For AR(l)-EM, a few frames were not estimated due to numerical instability. 4 Discussion Both the 4SID and EM algorithms employ similar methodologies: states are first estimated using a Kalman device, and then these states are used to estimate system parameters according to similar criteria. However in EM, states are estimated using past, present and future observations with a Kalman smoother; system parameters are then estimated using maximum likelihood (ML). Whereas in 4SID, states are estimated using the previous i observations only with non-steady state Kalman filters. System parameters are then estimated using least-squares (LS) subject to a positive realness constraint for the covariance sequence. Refer also to [5] for a similar comparison. 4SID algorithms are sub-optimal for three reasons. Firstly, states are estimated using only partial observations sequences. Secondly, the LS criterion is only an approximation to the ML criterion. Thirdly, the positive realness constraint introduces bias. A positive realness constraint is necessary due to a finite amount of data and any lacking in the SS model. For this reason, 4SID methods are used to initialise rather than replace EM in these experiments. 4SID methods also have some advantages. Firstly, they are linear and non-iterative, and do not suffer from the disadvantages typical of iterative algorithms (including EM) such as sensitivity to initial conditions, convergence to local minima, and the definition of convergence criteria. Secondly, they require little prior parameterisation except the definition of the system order, which can be determined in situ from observation of the singular values of the orthogonal projection. Thirdly, the use of the SVD gives numerical robustness to the algorithms. Fourthly, they have higher frequency resolution than prediction error minimisation methods such as ARMA and AR [1]. 5 Conclusions 4SID methods can be used to initialise EM giving better formant tracks, higher likelihoods and better frequency resolution than more conventional initialisation methods. In the future we hope to compare 4SID methods with EM in a principled probabilistic manner, investigate weighting matrices further, and apply these methods to speech enhancement. Further work is done by Smith et al. in [9], and similar work done by Grivel et aI. in [5]. Acknowledgements We are grateful for the use of 4SID software supplied with [10] and the EM software of 2Presented in the software in [3], this method is best used when the dimensions of the state space and observations are the same. Speech Modelling Using Subspace and EM Techniques 801 le4 <I) "0 .5 0 Q.. E t.s -le4 0.1 0.3 0.1 0.3 0 8 time / s 0.5 0.7 0.5 0.7 N ::I: ....: ....... 0" 4 <I) ..::: 0 0 8 N ::I: ....: ....... 0" + 4 tH- <I) ..::: 8 1+14- N ::I: ....: ~ ....... 0" <I) ..::: + 8 N ::I: ....: .... 4 0" ~ + + +1 + 01 80 + ++ + +t ++ + 00 ++ ? + Figure 1: (a) Time waveform and (b) spectrogram for "in arithmetic". Formant trajectories are estimated using EM and a SS model initialised with three different schemes: (d) 4SID, (e) ARMA and (t) AR(l). 
Zoubin Ghahramani [3]. Gavin Smith is supported by the Schiff Foundation, Cambridge University. At the time of writing, Nando de Freitas was supported by two University of the Witwatersrand Merit Scholarships, a Foundation for Research Development Scholarship (South Africa), an ORS award and a Trinity College External Research Studentship (Cambridge).

[Figure 2 plot appears here: likelihood (roughly -1700 to -1400) against EM iteration number.]
Figure 2: Likelihood convergence plots for EM and the SS model initialised with 4SID [- -], ARMA [-] and AR(1) [-.] for the experiments in Figure 1. Plots are the mean average over all frames where analysed.

6 References

[1] Arun, K.S. & Kung, S.Y. (1990) Balanced Approximation of Stochastic Systems. SIAM Journal on Matrix Analysis and Applications, vol. 11, no. 1, pp. 42-68.

[2] Gelb, A., ed. (1974) Applied Optimal Estimation. Cambridge, MA: MIT Press.

[3] Ghahramani, Z. & Hinton, G. (1996) Parameter Estimation for Linear Dynamical Systems, Tech. rep. CRG-TR-96-2, Dept. of Computer Science, Univ. of Toronto. Software at www.gatsby.ucl.ac.uk/~zoubin/software.html.

[4] Grice, M. & Barry, W. (1989) Multi-lingual Speech Input/Output: Assessment, Methodology and Standardization, Tech. rep., University College, London, ESPRIT Project 1541 (SAM), extension phase final report.

[5] Grivel, E., Gabrea, M. & Najim, M. (1999) Subspace State Space Model Identification For Speech Enhancement, Paper 1622, ICASSP'99.

[6] Ljung, L. (1987) System Identification: Theory for the User. Englewood Cliffs, NJ: Prentice-Hall, Inc.

[7] Ljung, L. (1991) System Identification Toolbox For Use With MatLab. 24 Prime Park Way, Natick, MA, USA: The MathWorks, Inc.

[8] McLachlan, G.J. & Krishnan, T. (1997) The EM Algorithm and Extensions. John Wiley and Sons Inc.

[9] Smith, G.A., Robinson, A.J. & Niranjan, M. (2000) A Comparison Between the EM and Subspace Algorithms for the Time-Invariant Linear Dynamical System. Tech. rep. CUED/F-INFENG/TR.366, Engineering Dept., Cambridge Univ., UK.

[10] Van Overschee, P. & De Moor, B. (1996) Subspace Identification for Linear Systems: Theory, Implementation, Applications. Dordrecht, Netherlands: Kluwer Academic Publishers.

[11] Viberg, M., Wahlberg, B. & Ottersten, B. (1997) Analysis of State Space System Identification Methods Based on Instrumental Variables and Subspace Fitting. Automatica, vol. 33, no. 9, pp. 1603-1616.
Neural Network Based Model Predictive Control

Stephen Piche (spiche@pav.com), Jim Keeler (jkeeler@pav.com), Greg Martin (gmartin@pav.com), Gene Boe (gboe@pav.com), Doug Johnson (djohnson@pav.com), Mark Gerules (mgerules@pav.com)
Pavilion Technologies, Austin, TX 78758

Abstract

Model Predictive Control (MPC), a control algorithm which uses an optimizer to solve for the optimal control moves over a future time horizon based upon a model of the process, has become a standard control technique in the process industries over the past two decades. In most industrial applications, a linear dynamic model developed using empirical data is used even though the process itself is often nonlinear. Linear models have been used because of the difficulty in developing a generic nonlinear model from empirical data and the computational expense often involved in using nonlinear models. In this paper, we present a generic neural network based technique for developing nonlinear dynamic models from empirical data and show that these models can be efficiently used in a model predictive control framework. This nonlinear MPC based approach has been successfully implemented in a number of industrial applications in the refining, petrochemical, paper and food industries. Performance of the controller on a nonlinear industrial process, a polyethylene reactor, is presented.

1 Introduction

Model predictive control has become the standard technique for supervisory control in the process industries with over 2,000 applications in the refining, petrochemicals, chemicals, pulp and paper, and food processing industries [1]. Model Predictive Control was developed in the late 70's and came into wide-spread use, particularly in the refining industry, in the 80's. The economic benefit of this approach to control has been documented [1, 2].

Several factors have contributed to the wide-spread use of MPC in the process industries:

1. Multivariate Control: Industrial processes are typically coupled multiple-input multiple-output (MIMO) systems. MIMO control can be implemented using MPC.

2. Constraints: Constraints on the inputs and outputs of a process due to safety considerations are common in the process industries. These constraints can be integrated into the control calculation using MPC.

3. Sampling Period: Unlike systems in other industries such as automotive or aerospace, the open-loop settling times for many processes are on the order of hours rather than milliseconds. This slow settling time translates to sampling periods on the order of minutes. Because the sampling period is sufficiently long, the complex optimization calculations that are required to implement MPC can be solved at each sampling period.

4. Commercial Tools: Commercial tools that facilitate model development and controller implementation have allowed proliferation of MPC in the process industries.

Until recently, industrial applications of MPC have relied upon linear dynamic models even though most processes are nonlinear. MPC based upon linear models is acceptable when the process operates at a single setpoint and the primary use of the controller is the rejection of disturbances. However, many chemical processes, including polymer reactors, do not operate at a single setpoint.
These processes are often required to operate at different set points depending upon the grade of the product that is to be produced. Because these processes operate over the nonlinear range of the system, linear MPC often results in poor performance. To properly control these processes, a nonlinear model is needed in the MPC algorithm.

This need for nonlinear models in MPC is well recognized. A number of researchers and commercial companies have developed both simulation and industrial applications using a variety of different technologies including both first principles and empirical approaches such as neural networks [3, 4]. Although a variety of different models have been developed, they have not been practical for wide scale industrial application. On one hand, nonlinear models built using first principles techniques are expensive to develop and are specific to a process. Conversely, many empirically based nonlinear models are not appropriate for wide scale use because they require costly plant tests in multiple operating regions or because they are too computationally expensive to use in a real-time environment.

This paper presents a nonlinear model that has been developed for wide scale industrial use. It is an empirical model based upon a neural network which is developed using plant test data from a single operating region and historical data from all regions. This is in contrast to the usual approach of using plant test data from multiple regions. This model has been used on over 50 industrial applications and was recognized in a recent survey paper on nonlinear MPC as the most widely used nonlinear MPC controller in the process industries [1].
Uupper AUlower The form of the model, the objective function, the constraints and the type of optimizer have been active areas of research over the past two decades. A number of excellent survey papers on MPC cover these topics [1,2,4]. As discussed above, we have selected a MIMO nonlinear model which is presented in the next section. Although the objective function given above contains two terms (desired output and input move suppression), the objective function used in our implementation contains thirteen separate terms. (The details of the objective function are beyond the scope of this paper.) Our implementation uses the constraints given above in (3) and (4). Because we use nonlinear models, a nonlinear programming technique must be used to solve the optimization problem. We use LS-GRG which is a reduced gradient solver [5]. S. Piche, J. Keeler, G. Martin, G. Roe, D. Johnson and M Gerules 1032 3 A Generic and Parsimonious Nonlinear Model For a nonlinear model to achieve wide-spread industrial use, the model must be parsimonious so that it can be efficiently used in an optimization problem. Furthermore, it must be developed from limited process data. As discussed below, the nonlinear model we use is composed of a combination of a nonlinear steady state model and a linear dynamic model which can be derived from available data. The method of combining the models results in a parsimonious nonlinear model. 3.1 Process data and component models The quantity and quality of available data ultimately determines the structure of an empirical model. In developing our models, the available data dictated the type of model that could be created. In the process industries, two types of data are available: 1. Historical data: The values of the inputs and outputs of most processes are saved at regular intervals to a data base. Furthermore, most processing companies retain historical data associated with their plant for several years. 2. Plant tests: Open-loop testing is a well accepted practice for determining the process dynamics for implementation ofMPC. However, open-loop testing in multiple operating regions is not well accepted and is impractical in most cases even if it were accepted. Most practitioners of MPC models have used plant test data and ignored historical data. Practitioners have ignored the historical data in the past because it was difficult to extract and preprocess the data, and build models. Historical data was also viewed as not useful because it was collected in closed-loop and therefore process dynamics could not be extracted in many cases. Using only the plant test data, the practitioner is limited to linear dynamic models. We chose to use the historical data because it can be used to create nonlinear steady state models of processes that operate at multiple setpoints. Combining the nonlinear steady state model with linear dynamic models from the plant test data provides a generic approach to developing nonlinear models. To easily facilitate the development of nonlinear models, a suite of tools has been developed for data extraction and preprocessing as well as model training. The nonlinear steady state models, Yss = NNss(u) (5) are implemented by a feedforward neural network and trained using variants of the backpropagation algorithm [6]. The developer has a great deal of flexibility in determining the architecture of the network including the ability to select which inputs affect which outputs. 
Finally, an algorithm for specifying bounds on the gain (Jacobian) of the model has recently been implemented [7]. Because of limited plant test data, the dynamic models are restricted to second order models with input time delay, Yt = -alYt-l - a2Yt-2 + b1 Ut-d-l + b2 U t-d-2 (6) Neural Network Based Model Predictive Control 1033 The parameters of (6) are identified by minimizing the squared error between the model and the plant test data. To prevent a biased estimate of the parameters, the identification problem is solved using an optimizer because of the correlation in the model inputs [8]. Tools for selecting the identification regions and viewing the results are provided. 3.2 Combining the nonlinear steady state and dynamic models A variety of techniques exist for combining nonlinear steady state and linear dynamic models. The dynamic models can be used to either preprocess the inputs or postprocess the outputs of the steady state model. These models, referred to as Hammerstein and Weiner models respectively [8], contain a large number of parameters and are computationally expensive in an optimization problem when the model has many inputs and outputs. These models, when based upon neural networks, also extrapolate poorly. Gain scheduling is often used to combine nonlinear steady state models and linear dynamic models. Using a neural network steady state model, the gain at the current operating point, Ui, gi = ayss au I U=Ui (7) is used to update the gain of the linear dynamic model of (6), (8) where = b 19i b 2gi 1 + al + a2 b1 + b2 (9) 1 + al + a2 b1 + b2 (10) The difference equation is linearized about the point Ui and Yi = N N(Ui), thus, ~Y = Y - Yi and ~u = U - Ui? To simplify the equations above, a single-input singleoutput (8180) system is used. Gain scheduling results in a parsimonious model that is efficient to use in the MPC optimization problem, however, because this model does not incorporate information about the gain over the entire trajectory, its use leads to suboptimal performance in the MPC algorithm. Our nonlinear model approach remedies this problem. By solving a steady state optimization problem whenever a setpoint change is made, it is possible to compute the final steady state values of the inputs, U f. Given the final steady state input values, the gain associated with the final steady state can be computed. For a 8180 system, this gain is given by (11) Using the initial and final gain associated with a setpoint change, the gain structure over the entire trajectory can be approximated. This two point gain scheduling overcomes the limitations of regular gain scheduling in MPC algorithms. s. 1034 Piche, J Keeler, G. Martin, G. Boe. D. Johnson and M Gerules Combining the initial and final gain with the linear dynamic model, a quadratic difference equation is derived for the overall nonlinear model, where bi (1 = b2 (1 + al + a2)(9f - 9i) (b 1 + b2)(uf - ud + al + a2)(9f - 9d (b 1 + b2)(uf - ud (13) (14) and VI and V2 are given by (9) and (10). Use of the gain at the final steady state introduces the last two terms of (12). This model allows the incorporation of gain information over the entire trajectory in the MPC algorithm. The gain at of (12) at Ui is 9i while at uf it is 9f. Between the two points, the gain is a linear combination of 9i and 9 f. For processes with large gain changes, such as polymer reactors, this can lead to dramatic improvements in MPC controller performance. 
An additional benefit of using the model of (12) is that we allow the user to bound the initial and final gain and thus control the amount of nonlinearity used in the model. For practitioners who are use to implementing MPC with linear models, using gain bounds allows them to transition from linear to nonlinear models. This ability to control the amount of nonlinearity used in the model has been important for acceptance of this new model in many applications. Finally, bounding the gains can be used to guarantee extrapolation performance of the model. The nonlinear model of (12) fits the criteria needed in order to allow wide spread use of nonlinear models for MPC. The model is based upon readily available data and has a parsimonious representation allowing models with many inputs and outputs to be efficiently used in the optimizer. Furthermore, it addresses the primary nonlinearity found in processes, that being the significant change in gain over the operating region. 4 Polymer Application The nonlinear model described above has been used in a wide-variety of industrial applications including Kamyr digesters (pUlp and paper), milk evaporators and dryers (food processing), toluene diamine purification (chemicals), polyethylene and polypropylene reactors (polymers) and a fluid catalytic cracking unit (refining). Highlights of one such application are given below. A MPC controller that uses the model described above has been applied to a Gas Phase High Density Polyethylene reactor at Chevron Chemical Co. in Cedar Bayou, Texas [9]. The process produces homopolymer and copolymer grades over a wide range of melt indices. It's average production rate per year is 230,000 tons. Optimal control of the process is difficult to achieve because the reactor is a highly coupled nonlinear MIMO system (7 inputs and 5 outputs). For example, a number of input-output pairs exhibit gains that varying by a factor of 10 or more over the operating region. In addition, grade changes are made every few days. During these transitions nonprime polymer is produced. Prior to commissioning these controllers, Neural Network Based Model Predictive Control 1035 these transitions took several hours to complete. Linear and gain scheduling based controller have been tried on similar reactors and have delivered limited success. The nonlinear model was constructed using only historical data. The nonlinear steady state model was trained upon historical data from a two year period. This data contained examples of all the products produced by the reactor. Accurate dynamic models were derived both from historical data and knowledge of the process, thus, no step tests were conducted on the process. Excellent performance of this controller has been reported [9]. A two-fold decrease in the variance of the primary quality variable (melt index) has been achieved. In addition, the average transition time has been decreased by 50%. Unscheduled shutdowns which occurred previously have been eliminated. Finally, the controller, which has been on-line for two years, has gained high operator acceptance. 5 Conclusion A generic and parsimonious nonlinear model which can be used in an MPC algorithm has been presented. The model is created by combining a nonlinear steady state model with a linear dynamic models. They are combined using a two-point gain scheduling technique. This nonlinear model has been used for control of a nonlinear MIMO polyethylene reactor at Chevron Chemical Co. 
The controller has also been used in 50 other applications in the refining, chemicals, food processing and pulp and paper industries. References [1] Qin, S.J. & Badgwell, T.A. (1997) An overview of industrial model predictive control technology. In J. Kantor, C . Garcia and B. Carnahan (eds.), Chemical Process Control AIChE Symposium Series, pp. 232-256. NY: AIChB. [2] Seborg, D.E. (1999) A perspective on advanced strategies for Process Control (Revisited). to appear in Pmc. of European Control Conf. Karlsruhe, Germany. [3] Qin, S.J. & Badgwell, T.A. (1998) An overview of nonlinear model predictive control applications. Pmc. IFAC Workshop on Nonlinear Model Predictive Control - Assessment and Future Directions, Ascona, Switzerland, June 3-5. [4] Meadow, E.S. & Rawlings, J.B . (1997) Model predictive control. In M. Hesnon and D. Seborg (eds.), Nonlinear Model Predictive Control, pp. 233-310. NJ: Prentice Hall. [5] Nash, S. & Sofer, A. (1996) Linear and Nonlinear Programming. NY: McGraw-Hill. [6] Rumelhart D.E, Hinton G.B. & Williams, R.J. (1986) Learning internal representations by error propagation. In D. Rumelhart and J. McClelland (eds.), Parallel Distributed Processing, pp. 318-362. Cambridge, MA: MIT Press. [7] Hartman, E. (2000) Training feedforward neural networks with gain constraints. To appear in Neural Computation. [8] Ljung, L. (1987) System Identification. NJ: Prentice Hall. [9] Goff S., Johnson D. & Gerules, M. (1998) Nonlinear control and optimization of a high density polyethylene reactor. Proc. Chemical Engineering Expo, Houston, June.
Improved Output Coding for Classification Using Continuous Relaxation

Koby Crammer and Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{kobics,singer}@cs.huji.ac.il

Abstract

Output coding is a general method for solving multiclass problems by reducing them to multiple binary classification problems. Previous research on output coding has employed, almost solely, predefined discrete codes. We describe an algorithm that improves the performance of output codes by relaxing them to continuous codes. The relaxation procedure is cast as an optimization problem and is reminiscent of the quadratic program for support vector machines. We describe experiments with the proposed algorithm, comparing it to standard discrete output codes. The experimental results indicate that continuous relaxations of output codes often improve the generalization performance, especially for short codes.

1 Introduction

The problem of multiclass categorization is about assigning labels to instances where the labels are drawn from some finite set. Many machine learning problems include a multiclass categorization component. Examples of such applications are text classification, optical character recognition, medical analysis, and object recognition in machine vision. There are many algorithms for the binary-class problem, where there are only two possible labels, such as SVM [17], CART [4] and C4.5 [14]. Some of them can be extended to handle multiclass problems. An alternative and general approach is to reduce a multiclass problem to multiple binary problems. In [9] Dietterich and Bakiri described a method for reducing multiclass problems to multiple binary problems based on error-correcting output codes (ECOC). Their method consists of two stages. In the training stage, a set of binary classifiers is constructed, where each classifier is trained to distinguish between two disjoint subsets of the labels. In the classification stage, each of the trained binary classifiers is applied to test instances and a voting scheme is used to decide on the label. Experimental work has shown that the output coding approach can improve performance in a wide range of problems such as text classification [3], text-to-speech synthesis [8], cloud classification [1] and others [9, 10, 15]. The performance of output coding was also analyzed in statistical and learning-theoretic contexts [11, 12, 16, 2]. Most previous work on output coding has concentrated on solving multiclass problems using predefined output codes, independently of the specific application and the learning algorithm used to construct the binary classifiers. Furthermore, the "decoding" scheme assigns the same weight to each learned binary classifier, regardless of its performance. Last, the induced binary problems are treated as separate problems and are learned independently. Thus, there might be strong statistical correlations between the resulting classifiers, especially when the induced binary problems are similar. These problems call for an improved output coding scheme. In a recent theoretical work [7] we suggested a relaxation of discrete output codes to continuous codes where each entry of the code matrix is a real number. As in discrete codes, each column of the code matrix defines a partition of the set of the labels into two subsets which are labeled positive (+) and negative (-).
The sign of each entry in the code matrix determines the subset association (+ or -) and the magnitude corresponds to the confidence in this association. In this paper we discuss the usage of continuous codes for multiclass problems using a two-phase approach. First, we create a binary output code matrix that is used to train binary classifiers in the same way suggested by Dietterich and Bakiri. Given the trained classifiers and some training data, we look for a more suitable continuous code by casting the problem as a constrained optimization problem. We then replace the original binary code with the improved continuous code and proceed analogously to classify new test instances. An important property of our algorithm is that the resulting continuous code can be expressed as a linear combination of a subset of the training patterns. Since classification of new instances is performed using scalar products between the prediction vector of the binary classifiers and the rows of the code matrix, we can exploit this particular form of the code matrix and use kernels [17] to construct high-dimensional product spaces. This approach enables an efficient and simple way to take into account correlations between the different binary classifiers. The rest of this paper is organized as follows. In the next section we formally describe the framework that uses output coding for multiclass problems. In Sec. 3 we describe our algorithm for designing a continuous code from a set of binary classifiers. We describe and discuss experiments with the proposed approach in Sec. 4 and conclude in Sec. 5.

2 Multiclass learning using output coding

Let $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ be a set of $m$ training examples where each instance $x_i$ belongs to a domain $\mathcal{X}$. We assume without loss of generality that each label $y_i$ is an integer from the set $\mathcal{Y} = \{1, \ldots, k\}$. A multiclass classifier is a function $H : \mathcal{X} \to \mathcal{Y}$ that maps an instance $x$ into an element $y$ of $\mathcal{Y}$. In this work we focus on a framework that uses output codes to build multiclass classifiers from binary classifiers. A binary output code $M$ is a matrix of size $k \times l$ over $\{-1, +1\}$ where each row of $M$ corresponds to a class $y \in \mathcal{Y}$. Each column of $M$ defines a partition of $\mathcal{Y}$ into two disjoint sets. Binary learning algorithms are used to construct classifiers, one for each column $t$ of $M$. That is, the set of examples induced by column $t$ of $M$ is $(x_1, M_{t,y_1}), \ldots, (x_m, M_{t,y_m})$. This set is fed as training data to a learning algorithm that finds a binary classifier. In this work we assume that each binary classifier $h_t$ is of the form $h_t : \mathcal{X} \to \mathbb{R}$. This reduction yields $l$ different binary classifiers $h_1, \ldots, h_l$. We denote the vector of predictions of these classifiers on an instance $x$ by $\bar{h}(x) = (h_1(x), \ldots, h_l(x))$. We denote the $r$th row of $M$ by $\bar{M}_r$. Given an example $x$ we predict the label $y$ for which the row $\bar{M}_y$ is the "most similar" to $\bar{h}(x)$. We use a general notion of similarity and define it through an inner-product function $K : \mathbb{R}^l \times \mathbb{R}^l \to \mathbb{R}$. The higher the value of $K(\bar{h}(x), \bar{M}_r)$ is, the more confident we are that $r$ is the correct label of $x$ according to the set of classifiers $\bar{h}$. Note that this notion of similarity holds for both discrete and continuous matrices. An example of a simple similarity function is $K(\bar{u}, \bar{v}) = \bar{u} \cdot \bar{v}$. It is easy to verify that when both the output code and the binary classifiers are over $\{-1, +1\}$ this choice of $K$ is equivalent to picking the row of $M$ which attains the minimal Hamming distance to $\bar{h}(x)$.
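To make the decoding step concrete, here is a minimal sketch in Python (the function and variable names are ours, not from the paper) that predicts a label from a code matrix $M$ and the binary predictions $\bar{h}(x)$ using the inner-product similarity above; with a $\pm 1$ code and thresholded classifiers this reduces to Hamming decoding.

```python
import numpy as np

def predict_label(M, h_x):
    """Pick the class whose code-matrix row is most similar to the
    prediction vector h_x, using the standard inner product as K."""
    scores = M @ h_x          # K(h(x), M_r) for every row r at once
    return int(np.argmax(scores))

# Toy example: k = 4 classes, l = 3 binary classifiers.
M = np.array([[+1, -1, -1],
              [-1, +1, -1],
              [-1, -1, +1],
              [+1, +1, +1]])
h_x = np.array([0.9, -0.4, 0.2])   # real-valued outputs of h_1..h_3 on x
print(predict_label(M, h_x))       # -> 0
```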
To summarize, the learning algorithm receives a training set $S$, a discrete output code (matrix) of size $k \times l$, and has access to a binary learning algorithm, denoted $L$. The learning algorithm $L$ is called $l$ times, once for each induced binary problem. The result of this process is a set of binary classifiers $\bar{h}(x) = (h_1(x), \ldots, h_l(x))$. These classifiers are fed, together with the original labels $y_1, \ldots, y_m$, to the second stage of the learning algorithm, which learns a continuous code. This continuous code is then used to classify new instances by choosing the class which corresponds to a row with the largest inner-product. The resulting classifier can be viewed as a two-layer neural network. The first (hidden) layer computes $h_1(x), \ldots, h_l(x)$ and the output unit predicts the final class by choosing the label $r$ which maximizes $K(\bar{h}(x), \bar{M}_r)$.

3 Finding an improved continuous code

We now describe our method for finding a continuous code that improves on a given ensemble of binary classifiers $\bar{h}$. We would like to note that we do not need to know the original code that was used to train the binary classifiers. For simplicity we use the standard scalar product as our similarity function. We discuss at the end of this section more general similarity functions based on kernels which satisfy Mercer's conditions. The approach we take is to cast the code design problem as a constrained optimization problem. The multiclass empirical error is given by
$$\epsilon_S(M, \bar{h}) = \frac{1}{m} \sum_{i=1}^{m} [\![\, H(x_i) \neq y_i \,]\!] \; ,$$
where $[\![\, \pi \,]\!]$ is equal to 1 if the predicate $\pi$ holds and 0 otherwise. Borrowing the idea of soft margins [6] we replace the 0-1 multiclass error with the piecewise-linear bound $\max_r \{\bar{h}(x_i) \cdot \bar{M}_r + b_{y_i,r}\} - \bar{h}(x_i) \cdot \bar{M}_{y_i}$, where $b_{i,j} = 1 - \delta_{i,j}$, i.e., it is equal to 0 if $i = j$ and 1 otherwise. We now get an upper bound on the empirical loss
$$\epsilon_S(M, \bar{h}) \leq \frac{1}{m} \sum_{i=1}^{m} \left[ \max_r \{\bar{h}(x_i) \cdot \bar{M}_r + b_{y_i,r}\} - \bar{h}(x_i) \cdot \bar{M}_{y_i} \right] . \tag{1}$$
Put another way, the correct label should have a confidence value that is larger by at least one than any of the confidences for the rest of the labels. Otherwise, we suffer a loss which is linearly proportional to the difference between the confidence of the correct label and the maximum among the confidences of the other labels. Define the $l_2$-norm of a code $M$ to be the $l_2$-norm of the vector represented by the concatenation of $M$'s rows, $\|M\|_2^2 = \|(\bar{M}_1, \ldots, \bar{M}_k)\|_2^2 = \sum_{i,j} M_{i,j}^2$. We now cast the problem of finding a good code which minimizes the bound of Eq. (1) as a quadratic optimization problem with "soft" constraints,
$$\min_{M,\, \bar{\xi}} \;\; \frac{\beta}{2} \|M\|_2^2 + \sum_{i=1}^{m} \xi_i \quad \text{s.t.} \quad \bar{h}(x_i) \cdot \bar{M}_{y_i} + \delta_{y_i,r} - \bar{h}(x_i) \cdot \bar{M}_r \geq 1 - \xi_i \;\; \forall i, r \, , \tag{2}$$
where $\beta > 0$ is a regularization constant. Solving the above optimization problem is done using its dual problem (details are omitted due to lack of space). The solution of the dual problem results in the following form for $M$:
$$\bar{M}_r = \beta^{-1} \sum_{i=1}^{m} (\delta_{y_i,r} - \eta_{i,r}) \, \bar{h}(x_i) \; , \tag{3}$$
where the $\eta_{i,r}$ are variables of the dual problem which satisfy $\forall i, r: \eta_{i,r} \geq 0$ and $\sum_r \eta_{i,r} = 1$. Eq. (3) implies that when the optimum of the objective function is achieved, each row of the matrix $M$ is a linear combination of the vectors $\bar{h}(x_i)$. We thus say that example $i$ is a support pattern for class $r$ if the coefficient $(\delta_{y_i,r} - \eta_{i,r})$ of $\bar{h}(x_i)$ in Eq. (3) is non-zero. There are two cases in which example $i$ can be a support pattern for class $r$: the first is when $y_i = r$ and $\eta_{i,r} < 1$; the second is when $y_i \neq r$ and $\eta_{i,r} > 0$. Put another way, fixing $i$, we can view $\bar{\eta}_i = (\eta_{i,1}, \ldots, \eta_{i,k})$ as a distribution over the labels $r$. This distribution should give a high probability to the correct label $y_i$. Thus, an example $i$ "participates" in the solution for $M$ (Eq. (3)) if and only if $\bar{\eta}_i$ is not a point distribution concentrating on the correct label $y_i$.
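As an illustration of Eq. (3), the following sketch (our own, with hypothetical variable names) assembles the continuous code $M$ from given dual variables $\eta_{i,r}$ and prediction vectors $\bar{h}(x_i)$; it does not solve the quadratic program itself, which would require a QP solver.

```python
import numpy as np

def code_from_dual(H, y, eta, beta):
    """Build the continuous code M of Eq. (3).

    H    : (m, l) matrix whose rows are the prediction vectors h(x_i)
    y    : (m,)  true labels in {0, ..., k-1}
    eta  : (m, k) dual variables; eta[i] is a distribution over labels
    beta : regularization constant (> 0)
    """
    m, k = eta.shape
    delta = np.zeros((m, k))
    delta[np.arange(m), y] = 1.0          # delta[i, r] = 1 iff y_i = r
    coeff = delta - eta                   # support-pattern coefficients
    return (coeff.T @ H) / beta           # M_r = (1/beta) sum_i coeff[i,r] h(x_i)

# Example i is a support pattern for class r iff coeff[i, r] != 0,
# i.e. iff eta[i] is not the point distribution on the correct label y_i.
```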
Since the continuous output code is constructed from the support patterns, we call our algorithm SPOC, for Support Patterns Output Coding. Denote by $\bar{\tau}_i = \bar{1}_{y_i} - \bar{\eta}_i$, where $\bar{1}_{y_i}$ is the point distribution concentrated on $y_i$. From Eq. (3) we obtain the classifier
$$H(x) = \arg\max_r \left\{ \bar{h}(x) \cdot \bar{M}_r \right\} = \arg\max_r \left\{ \sum_i \tau_{i,r} \left[ \bar{h}(x) \cdot \bar{h}(x_i) \right] \right\} . \tag{4}$$
Note that the solution as defined by Eq. (4) is composed of inner-products of the prediction vector on a new instance with the support patterns. Therefore, we can transform each prediction vector to some high-dimensional inner-product space $\mathcal{Z}$ using a transformation $\phi : \mathbb{R}^l \to \mathcal{Z}$. We thus replace the inner-product in the dual program with a general inner-product kernel $K$ that satisfies Mercer's conditions [17]. From Eq. (4) we obtain the kernel-based classification rule
$$H(x) = \arg\max_r \left\{ \sum_i \tau_{i,r} \, K\!\left(\bar{h}(x), \bar{h}(x_i)\right) \right\} . \tag{5}$$
The ability to use kernels as a means for calculating inner-products enables a simple and efficient way to take into account correlations between the binary classifiers. For instance, a second-order polynomial of the form $(1 + \bar{u} \cdot \bar{v})^2$ corresponds to a transformation to a feature space that includes all the products of pairs of binary classifiers. Therefore, the relaxation of discrete codes to continuous codes offers a partial remedy by assigning a different importance weight to each binary classifier while taking into account the statistical correlations between the binary classifiers.

4 Experiments

In this section we describe experiments we performed comparing discrete and continuous output codes. We selected eight multiclass datasets, seven from the UCI repository (http://www.ics.uci.edu/~mlearn/MLRepository.html) and the mnist dataset available from AT&T (http://www.research.att.com/~yann/ocr/mnist). When a test set was provided we used the original split into training and test sets; otherwise we used 5-fold cross validation for evaluating the test error. Since we ran multiple experiments with 3 different codes, 7 kernels, and two base learners, we used a subset of the training set for mnist, letter, and shuttle. We are in the process of performing experiments with the complete datasets and other datasets using a subset of the kernels. A summary of the datasets is given in Table 1.

Table 1: Description of the datasets used in experiments.

Name     | Training examples | Test examples | Classes | Attributes
-------- | ----------------- | ------------- | ------- | ----------
satimage | 4435              | 2000          | 6       | 36
shuttle  | 5000              | 9000          | 7       | 9
mnist    | 5000              | 10000         | 10      | 784
isolet   | 6238              | 1559          | 26      | 6
letter   | 5000              | 4000          | 26      | 16
vowel    | 528               | 462           | 11      | 10
glass    | 214               | 5-fold eval   | 7       | 10
soybean  | 307               | 376           | 19      | 35

We tested three different types of codes: one-against-all (denoted "id"), BCH (a linear error-correcting code), and random codes. For a classification problem with $k$ classes we set the random code to have about $10 \log_2(k)$ columns. We then set each entry in the matrix defining the code to be $-1$ or $+1$ uniformly at random. We used SVM as the base binary learning algorithm in two different modes. In the first mode we used the margin of the vector machine classifier as its real-valued prediction; that is, each binary classifier $h_t$ is of the form $h_t(x) = \bar{w} \cdot \bar{x} + b$, where $\bar{w}$ and $b$ are the parameters of the separating hyperplane. In the second mode we thresholded the prediction of the classifiers, $h_t(x) = \operatorname{sign}(\bar{w} \cdot \bar{x} + b)$; thus, each binary classifier $h_t$ in this case is of the form $h_t : \mathcal{X} \to \{-1, +1\}$. For brevity, we refer to these classifiers as thresholded-SVMs.
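A minimal sketch of the random-code construction just described (our own helper, not code from the paper): for $k$ classes, draw a $k \times \lceil 10 \log_2(k) \rceil$ matrix with i.i.d. $\pm 1$ entries.

```python
import numpy as np

def random_code(k, rng=None):
    """Random discrete output code: about 10*log2(k) columns of +/-1 entries."""
    rng = rng or np.random.default_rng(0)
    l = int(np.ceil(10 * np.log2(k)))
    return rng.choice([-1, +1], size=(k, l))

M = random_code(26)      # e.g. the letter dataset: 26 classes
print(M.shape)           # -> (26, 48)
```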
We would like to note in passing that the thresholded setting is by no means superficial, as there are learning algorithms, such as RIPPER [5], that build classifiers of this type. We ran SPOC with 7 different kernels: homogeneous and non-homogeneous polynomials of degree 1, 2, and 3, and radial basis functions (RBF). A summary of the results is depicted in Figure 1. The figure contains four plots. Each plot shows the relative test error difference between discrete and continuous codes. Formally, the height of each bar is proportional to $(\epsilon_d - \epsilon_c)/\epsilon_d$, where $\epsilon_d$ ($\epsilon_c$) is the test error when using a discrete (continuous) code. For each problem there are three bars, one for each type of code (one-against-all, BCH, and random). The datasets are plotted left to right in decreasing order with respect to the number of training examples per class. The left plots correspond to the results obtained using thresholded-SVM as the base binary classifier and the right plots show the results using the real-valued predictions. For each mode we show the results of the best performing kernel on each dataset (top plots) and the average (over the 7 different kernels) performance (bottom plots). In general, the continuous output code relaxation indeed results in an improved performance over the original discrete output codes. The most significant improvements are achieved with thresholded-SVM as the base binary classifiers. On most problems all the kernels achieve some improvement. However, the best performing kernel seems to be problem dependent. Impressive improvements are achieved for datasets with a large number of training examples per class, shuttle being a notable example. For this dataset the test error is reduced from an average of over 3% when using a discrete code to an average test error which is significantly lower than 1% for continuous codes. Furthermore, using a non-homogeneous polynomial of degree 3 reduces the test error rate down to 0.48%. In contrast, for the soybean dataset, which contains 307 training examples and 19 classes, none of the kernels achieved any improvement, and they often resulted in an increase in the test error. Examining the training error reveals that the greater the decrease in the training error due to the continuous code relaxation, the worse the increase in the corresponding test error. This behavior indicates that SPOC overfitted the relatively small training set.

Figure 1: Comparison of the performance of discrete and continuous output codes using SVM (right figures) and thresholded-SVM (left figures) as the base learner for three different families of codes. The top figures show the relative change in test error for the best performing kernel and the bottom figures show the relative change in test error averaged across seven different kernels.

To conclude this section we describe an experiment that evaluated the performance of the SPOC algorithm as a function of the length of random codes. Using the same setting described above, we ran SPOC with random codes of lengths 5 through 35 for the vowel dataset and lengths 15 through 50 for the letter dataset. In Figure 2 we show the test error rate as a function of the code length with SVM as the base binary learner. (Similar results were obtained using thresholded-SVM as the base binary classifiers.)
For the letter dataset we see consistent and significant improvements of the continuous codes over the discrete ones, whereas for the vowel dataset there is a major improvement for short codes that decays with the code's length. Therefore, since continuous codes can achieve performance comparable to much longer discrete codes, they may serve as a viable alternative to discrete codes when computational power is limited or for classification tasks on large datasets.

Figure 2: Comparison of the performance of discrete random codes and their continuous relaxation as a function of the code length.

5 Discussion

In this paper we described and experimented with an algorithm for continuous relaxation of output codes for multiclass categorization problems. The algorithm appears to be especially useful when the codes are short. An interesting question is whether the proposed approach can be generalized by calling the algorithm successively on the previous code it improved. Another viable direction is to try to combine the algorithm with other schemes for reducing multiclass problems to multiple binary problems, such as tree-based codes and directed acyclic graphs [13]. We leave this for future research.

References

[1] D. W. Aha and R. L. Bankert. Cloud classification using error-correcting output codes. In Artificial Intelligence Applications: Natural Science, Agriculture, and Environmental Science, volume 11, pages 13-28, 1997.
[2] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. In Machine Learning: Proceedings of the Seventeenth International Conference, 2000.
[3] A. Berger. Error-correcting output coding for text classification. In IJCAI'99: Workshop on machine learning for information filtering, 1999.
[4] Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. Classification and Regression Trees. Wadsworth & Brooks, 1984.
[5] William Cohen. Fast effective rule induction. In Proceedings of the Twelfth International Conference on Machine Learning, pages 115-123, 1995.
[6] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, September 1995.
[7] Koby Crammer and Yoram Singer. On the learnability and design of output codes for multiclass problems. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[8] Thomas G. Dietterich and Ghulum Bakiri. Achieving high-accuracy text-to-speech with machine learning. In Data mining in speech synthesis, 1999.
[9] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, January 1995.
[10] Tom Dietterich and Eun Bae Kong. Machine learning bias, statistical bias, and statistical variance of decision tree algorithms. Technical report, Oregon State University, 1995. Available via the WWW at http://www.cs.orst.edu:801/~tgd/cv/tr.html.
[11] Trevor Hastie and Robert Tibshirani. Classification by pairwise coupling. The Annals of Statistics, 26(1):451-471, 1998.
[12] G. James and T. Hastie. The error coding method and PiCT. Journal of Computational and Graphical Statistics, 7(3):377-387, 1998.
[13] J. C. Platt, N. Cristianini, and J. Shawe-Taylor.
Large margin DAGs for multiclass classification. In Advances in Neural Information Processing Systems 12. MIT Press, 2000. (To appear.)
[14] J. Ross Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[15] Robert E. Schapire. Using output codes to boost multiclass learning problems. In Machine Learning: Proceedings of the Fourteenth International Conference, pages 313-321, 1997.
[16] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):1-40, 1999.
[17] Vladimir N. Vapnik. Statistical Learning Theory. Wiley, 1998.
TRAINING A LIMITED-INTERCONNECT, SYNTHETIC NEURAL IC

M.R. Walker, S. Haghighi, A. Afghan, and L.A. Akers
Center for Solid State Electronics Research
Arizona State University
Tempe, AZ 85287-6206
mwalker@enuxha.eas.asu.edu

ABSTRACT

Hardware implementation of neuromorphic algorithms is hampered by high degrees of connectivity. Functionally equivalent feedforward networks may be formed by using limited fan-in nodes and additional layers, but this complicates procedures for determining weight magnitudes. No direct mapping of weights exists between fully and limited-interconnect nets. Low-level nonlinearities prevent the formation of internal representations of widely separated spatial features, and the use of gradient descent methods to minimize output error is hampered by error magnitude dissipation. The judicious use of linear summations or collection units is proposed as a solution.

HARDWARE IMPLEMENTATIONS OF FEEDFORWARD, SYNTHETIC NEURAL SYSTEMS

The pursuit of hardware implementations of artificial neural network models is motivated by the need to develop systems which are capable of executing neuromorphic algorithms in real time. The most significant barrier is the high degree of connectivity required between the processing elements. Current interconnect technology does not support the direct implementation of large-scale arrays of this type. In particular, the high fan-ins/fan-outs of biology impose connectivity requirements such that the electronic implementation of a highly interconnected biological neural network of just a few thousand neurons would require a level of connectivity which exceeds the current or even projected interconnection density of ULSI systems (Akers et al., 1988). Highly layered, limited-interconnect architectures are, however, especially well suited for VLSI implementations. In previous works, we analyzed the generalization and fault-tolerance characteristics of a limited-interconnect perceptron architecture applied in three simple mappings between binary input space and binary output space, and proposed a CMOS architecture (Akers and Walker, 1988). This paper concentrates on developing an understanding of the limitations on layered neural network architectures imposed by hardware implementation, and a proposed solution.

TRAINING CONSIDERATIONS FOR LIMITED-INTERCONNECT FEEDFORWARD NETWORKS

The symbolic layout of the limited fan-in network is shown in Fig. 1. Re-arranging of the individual input components is done to eliminate edge effects. Greater detail on the actual hardware architecture may be found in (Akers and Walker, 1988). As in linear filters, the total number of connections which fan in to a given processing element determines the degrees of freedom available for forming a hypersurface which implements the desired node output function (Widrow and Stearns, 1985). When processing elements with fixed, low fan-in are employed, the effects of reduced degrees of freedom must be considered in order to develop workable training methods which permit generalization to novel inputs. First, no direct or indirect relation exists between weight magnitudes obtained for a limited-interconnect, multilayered perceptron and those obtained for the fully connected case. Networks of these types adapted with identical exemplar sets must therefore form completely different functions on the input space. Second, low-level nonlinearities prevent direct internal coding of widely separated spatial features in the input set.
A related problem arises when hyperplane nonlinearities are used. Multiple hyperplanes required on a subset of input space are impossible when no two second-level nodes address identical positions in the input space. Finally, adaptation methods like backpropagation which minimize output error with gradient descent are hindered, since the magnitude of the error is dissipated as it back-propagates through large numbers of hidden layers. The appropriate placement of linear summation elements or collection units is a proposed solution.

Figure 1. Symbolic Layout of Limited-Interconnect Feedforward Architecture

COMPARISON OF WEIGHT VALUES IN FULLY CONNECTED AND LIMITED-INTERCONNECT NETWORKS

Fully connected and limited-interconnect feedforward structures may be functionally equivalent by virtue of identical training sets, but nonlinear node discriminant functions in a fully-connected perceptron network are generally not equivalent to those in a limited-interconnect, multilayered network. This may be shown by comparing the Taylor series expansions of the discriminant functions in the vicinity of the threshold for both types and then equating terms of equivalent order. A simple limited-interconnect network is shown in Fig. 2.

Figure 2. Limited-Interconnect Feedforward Network

A discriminant function with a fan-in of two may be represented with the functional form $y = f(w_1 x_1 + w_2 x_2 - \theta)$, where $\theta$ is the threshold and the function is assumed to be continuously differentiable. The Taylor series expansion of the discriminant in the vicinity of the threshold is
$$y = f(\theta) + f'(\theta)(w_1 x_1 + w_2 x_2) + \frac{f''(\theta)}{2}(w_1 x_1 + w_2 x_2)^2 + \cdots$$
Expanding output node three in Fig. 2 to second order, where $f(\theta)$, $f'(\theta)$ and $f''(\theta)$ are constant terms, and substituting similar expansions for $y_1$ and $y_2$ into $y_3$ yields an expression for the limited-interconnect output. The output node in the fully-connected case (Fig. 3) may also be expanded to second order.

Figure 3. Fully Connected Network

We seek the necessary and sufficient conditions for the two nonlinear discriminant functions to be analytically equivalent. This is accomplished by comparing terms of equal order in the expansions of each output node in the two nets. Equating the constant terms yields $w_5 = -w_6$. Equating the first-order terms yields $w_5 = w_6 = 1/f'(\theta)$. Equating the second-order terms yields a third condition on $w_5$ and $w_6$. The first two conditions are obviously contradictory. In addition, solving for $w_5$ or $w_6$ using the first and second constraints, or the first and third constraints, yields the trivial result $w_5 = w_6 = 0$. Thus, no relation exists between discriminant functions occurring in the limited and fully connected feedforward networks. This eliminates the possibility that weights obtained for a fully connected network could be transformed and used in a limited-interconnect structure. More significant is the fact that full and limited interconnect nets which are adapted with identical sets of exemplars must form completely different functions on the input space, even though they exhibit identical output behavior. For this reason, it is anticipated that the two network types could produce different responses to a novel input.

NON-OVERLAPPING INPUT SUBSETS

Signal routing becomes important for networks in which hidden units do not address identical subsets in the preceding layer. Figure 4 shows an odd-parity algorithm implemented with a limited-interconnect architecture. Large weight magnitudes are indicated by darker lines.
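As a concrete illustration of the kind of network Figure 4 describes, the sketch below (entirely our own construction, not the trained network from the paper) computes six-input odd parity with a tree of fan-in-2 stages; note that each XOR stage would itself require two threshold units in a perceptron realization, which is one reason parity forces encoding high in a limited fan-in net.

```python
import numpy as np

def xor_pair(a, b):
    # One fan-in-2 XOR stage; a single threshold unit cannot realize XOR,
    # so a hardware realization needs two limited fan-in perceptron units.
    return (a + b) % 2

def odd_parity_tree(bits):
    """Six-input odd parity via a binary tree of fan-in-2 XOR stages."""
    layer = list(bits)
    while len(layer) > 1:
        layer = [xor_pair(layer[i], layer[i + 1])
                 for i in range(0, len(layer) - 1, 2)] + \
                (layer[-1:] if len(layer) % 2 else [])
    return layer[0]

x = np.array([1, 0, 1, 1, 0, 0])
print(odd_parity_tree(x), x.sum() % 2)   # -> 1 1
```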
Many nodes act as "pass-through" elements in that they have few dominant input and output connections. These node types are necessary to pass lower-level signals to common aggregation points. In general, the use of limited fan-in processing elements implementing a nonlinear discriminant function decreases the probability that a given correlation within the input data will be encoded, especially if the "width" of the feature set is greater than the fan-in, requiring encoding at a high level within the net. In addition, since lower-level connections determine the magnitudes of upper-level connections in any layered net when backpropagation is used, the set of points in weight space available to a limited-interconnect net for realizing a given function is further reduced by the greater number of weight dependencies occurring in limited-interconnect networks, all of which must be satisfied during training. Finally, since gradient descent is basically a shortcut through an NP-complete search in weight space, reduced redundancy and overlapping of internal representations reduce the probability of convergence to a near-optimal solution on the training set.

Figure 4. Six-input odd parity function implemented with limited-interconnect

DISSIPATION OF ERROR MAGNITUDE WITH INCREASING NUMBERS OF LAYERS

Following the derivation of backpropagation in (Plaut, 1986), the magnitude change for a weight connecting a processing element in the m-layer with a processing element in the l-layer is given by a sum, over all forward paths from the l-layer element to the output, of terms of the form
$$\Delta w_{l \leftarrow m} \propto \sum_{\text{paths}} (y_a - d_a)\,\frac{dy_a}{dx_a}\, w_{a \leftarrow b}\,\frac{dy_b}{dx_b} \cdots w_{k \leftarrow l}\,\frac{dy_l}{dx_l}\, y_m \; ,$$
where $y$ is the output of the discriminant function, $x$ is the activation level, $w$ is a connection magnitude, $d$ is the desired output, and $f$ is the fan-in for each processing element. If $N$ layers of elements intervene between the m-layer and the output layer, then each of the $f^{(N-1)}$ terms in the above summation consists of a product of $N$ factors of the form
$$w_{b \leftarrow a}\,\frac{dy_j}{dx_j} \; .$$
If we replace the weight magnitudes and the derivatives in each term with their mean values, each term scales as $(\bar{w}\,\overline{f'})^{N}$. The value of the first derivative of the sigmoid discriminant function is distributed between 0.0 and 0.5. The weight values are typically initially distributed evenly between small positive and negative values. Thus, with more layers, the product of the derivatives occurring in each term approaches zero. The use of large numbers of perceptron layers therefore has the effect of dissipating the magnitude of the error. This is exacerbated by the low fan-in, which reduces the total number of terms in the summation. The use of linear collection units (McClelland, 1986), discussed in the following section, is a proposed solution to this problem.

LINEAR COLLECTION UNITS

As shown in Fig. 5, the output of the limited-interconnect net employing collection units is given by a nonlinear discriminant applied to linear summations of the inputs, where $c_1$ and $c_2$ are the constant gains of the linear summation elements.

Figure 5. Limited-interconnect network employing linear summations

The position of the summations may be determined by using Euclidean k-means clustering on the exemplar set to a priori locate cluster centers and determine their widths (Duda and Hart, 1973). The cluster members would be combined using linear elements until they reach a nonlinear discriminant, located higher in the net and at the cluster center.
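A minimal sketch of the proposed placement procedure, assuming scikit-learn's k-means (our choice of library; the paper of course predates it): cluster the exemplars, then use each cluster center as the aggregation point for a collection-unit subtree and the cluster spread as its width.

```python
import numpy as np
from sklearn.cluster import KMeans

def place_collection_units(exemplars, n_units, seed=0):
    """Place collection units a priori via k-means (Duda & Hart, 1973).

    Returns the cluster centers (aggregation points for the linear
    subtrees) and a width for each cluster (mean distance of its
    members to the center)."""
    km = KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit(exemplars)
    centers = km.cluster_centers_
    widths = np.array([
        np.linalg.norm(exemplars[km.labels_ == j] - centers[j], axis=1).mean()
        for j in range(n_units)
    ])
    return centers, widths

X = np.random.default_rng(0).normal(size=(200, 12))   # toy exemplar set
centers, widths = place_collection_units(X, n_units=4)
```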
With this arrangement, weights obtained for a fully-connected net could be mapped using a linear transformation into the limited-interconnect network. Alternatively, backpropagation could be used, since error dissipation would be reduced by setting the linear constant c of the summation elements to arbitrarily large values.

CONCLUSIONS

No direct transformation of weights exists between fully and limited interconnect nets which employ nonlinear discriminant functions. The use of gradient descent methods to minimize output error is hampered by error magnitude dissipation. In addition, low-level nonlinearities prevent the formation of internal representations of widely separated spatial features. The use of strategically placed linear summations or collection units is proposed as a means of overcoming difficulties in determining weight values in limited-interconnect perceptron architectures. K-means clustering is proposed as the method for determining placement.

References

L.A. Akers, M.R. Walker, D.K. Ferry & R.O. Grondin, "Limited Interconnectivity in Synthetic Neural Systems," in R. Eckmiller and C. v.d. Malsburg, eds., Neural Computers. Springer-Verlag, 1988.
L.A. Akers & M.R. Walker, "A Limited-Interconnect Synthetic Neural IC," Proceedings of the IEEE International Conference on Neural Networks, p. II-151, 1988.
B. Widrow & S.D. Stearns, Adaptive Signal Processing. Prentice-Hall, 1985.
D.C. Plaut, S.J. Nowlan & G.E. Hinton, "Experiments on Learning by Back Propagation," Carnegie-Mellon University, Dept. of Computer Science Technical Report, June 1986.
J.L. McClelland, "Resource Requirements of Standard and Programmable Nets," in D.E. Rumelhart and J.L. McClelland, eds., Parallel Distributed Processing, Volume 1: Foundations. MIT Press, 1986.
R.O. Duda & P.E. Hart, Pattern Classification and Scene Analysis. Wiley, 1973.
Regularization with Dot-Product Kernels

Alex J. Smola, Zoltan L. Ovari, and Robert C. Williamson
Department of Engineering
Australian National University
Canberra, ACT, 0200

Abstract

In this paper we give necessary and sufficient conditions under which kernels of dot product type $k(x, y) = k(x \cdot y)$ satisfy Mercer's condition and thus may be used in Support Vector Machines (SVM), Regularization Networks (RN) or Gaussian Processes (GP). In particular, we show that if the kernel is analytic (i.e. can be expanded in a Taylor series), all expansion coefficients have to be nonnegative. We give an explicit functional form for the feature map by calculating its eigenfunctions and eigenvalues.

1 Introduction

Kernel functions are widely used in learning algorithms such as Support Vector Machines, Gaussian Processes, or Regularization Networks. A possible interpretation of their effect is that they represent dot products in some feature space $\mathcal{F}$, i.e.
$$k(x, y) = \Phi(x) \cdot \Phi(y) \tag{1}$$
where $\Phi$ is a map from input (data) space $\mathcal{X}$ into $\mathcal{F}$. Another interpretation is to connect $\Phi$ with the regularization properties of the corresponding learning algorithm [8]. Most popular kernels fall into three main categories: translation invariant kernels [9]
$$k(x, y) = k(x - y), \tag{2}$$
kernels originating from generative models (e.g. those of Jaakkola and Haussler, or Watkins), and thirdly, dot-product kernels
$$k(x, y) = k(x \cdot y). \tag{3}$$
Since $k$ influences the properties of the estimates generated by any of the algorithms above, it is natural to ask which regularization properties are associated with $k$. In [8, 10, 9] the general connections between kernels and regularization properties are pointed out, containing details on the connection between the Fourier spectrum of translation invariant kernels and the smoothness properties of the estimates. In a nutshell, the necessary and sufficient condition for $k(x - y)$ to be a Mercer kernel (i.e. be admissible for any of the aforementioned kernel methods) is that its Fourier transform be nonnegative. This also allows for an easy-to-check criterion for new kernel functions. Moreover, [5] gave a similar analysis for kernels derived from generative models. Dot product kernels $k(x \cdot y)$, on the other hand, have eluded further theoretical analysis, and only a necessary condition [1] was found, based on geometrical considerations. Unfortunately, it does not provide much insight into the smoothness properties of the corresponding estimate. Our aim in the present paper is to shed some light on the properties of dot product kernels, give an explicit equation for how their eigenvalues can be determined, and, finally, show that for analytic kernels that can be expanded in terms of monomials $\xi^n$ or associated Legendre polynomials $P_n^d(\xi)$ [4], i.e. $k(x, y) = k(x \cdot y)$ with
$$k(\xi) = \sum_{n=0}^{\infty} a_n \xi^n \quad \text{or} \quad k(\xi) = \sum_{n=0}^{\infty} b_n P_n^d(\xi), \tag{4}$$
a necessary and sufficient condition is $a_n \geq 0$ for all $n \in \mathbb{N}$ if no assumption about the dimensionality of the input space is made (for finite-dimensional spaces of dimension $d$, the condition is that $b_n \geq 0$). In other words, the polynomial series expansion in dot product kernels plays the role of the Fourier transform in translation invariant kernels.

2 Regularization, Kernels, and Integral Operators

Let us briefly review some results from regularization theory, needed for the further understanding of the paper. Many algorithms (SVM, GP, RN, etc.)
can be understood as minimizing a regularized risk functional
$$R_{\mathrm{reg}}[f] := R_{\mathrm{emp}}[f] + \lambda \Omega[f] \tag{5}$$
where $R_{\mathrm{emp}}$ is the training error of the function $f$ on the given data, $\lambda > 0$, and $\Omega[f]$ is the so-called regularization term. The first term depends on the specific problem at hand (classification, regression, large margin algorithms, etc.), $\lambda$ is generally adjusted by some model selection criterion, and $\Omega[f]$ is a nonnegative functional of $f$ which models our belief as to which functions should be considered simple (a prior in the Bayesian sense, or a structure in a Structural Risk Minimization sense).

2.1 Regularization Operators

One possible interpretation of $k$ is [8] that it leads to regularized risk functionals where
$$\Omega[f] = \tfrac{1}{2}\|Pf\|^2 \quad \text{or equivalently} \quad \langle Pk(x, \cdot), Pk(y, \cdot) \rangle = k(x, y). \tag{6}$$
Here $P$ is a regularization operator mapping functions $f$ on $\mathcal{X}$ into a dot product space (we choose $L_2(\mathcal{X})$). The following theorem allows us to construct explicit operators $P$, and it provides a criterion for whether a symmetric function $k(x, y)$ is suitable.

Theorem 1 (Mercer [3]) Suppose $k \in L_\infty(\mathcal{X}^2)$ is such that the integral operator $T_k : L_2(\mathcal{X}) \to L_2(\mathcal{X})$,
$$T_k f(\cdot) := \int_{\mathcal{X}} k(\cdot, x) f(x) \, d\mu(x), \tag{7}$$
is positive. Let $\psi_j \in L_2(\mathcal{X})$ be the eigenfunctions of $T_k$ with eigenvalues $\lambda_j \neq 0$, normalized such that $\|\psi_j\|_{L_2} = 1$, and let $\overline{\psi_j}$ denote the complex conjugate of $\psi_j$. Then
1. $(\lambda_j(T))_j \in \ell_1$.
2. $\psi_j \in L_\infty(\mathcal{X})$ and $\sup_j \|\psi_j\|_{L_\infty} < \infty$.
3. $k(x, x') = \sum_{j \in \mathbb{N}} \lambda_j \psi_j(x) \overline{\psi_j(x')}$ holds for almost all $(x, x')$, where the series converges absolutely and uniformly for almost all $(x, x')$.

This means that by finding the eigensystem $(\lambda_i, \psi_i)$ of $T_k$ we can also determine the regularization operator $P$ via [8]
$$Pf = \sum_j \lambda_j^{-1/2} \langle \psi_j, f \rangle \, \psi_j \, . \tag{8}$$
The eigensystem $(\lambda_i, \psi_i)$ tells us which functions are considered "simple" in terms of the operator $P$. Consequently, in order to determine the regularization properties of dot product kernels we have to find their eigenfunctions and eigenvalues.

2.2 Specific Assumptions

Before we diagonalize $T_k$ for a given kernel we have yet to specify the assumptions we make about the measure $\mu$ and the domain of integration $\mathcal{X}$. Since a suitable choice can drastically simplify the problem, we try to keep as many of the symmetries imposed by $k(x \cdot y)$ as possible. The predominant symmetry in dot product kernels is rotation invariance. Therefore we choose the unit ball in $\mathbb{R}^d$,
$$\mathcal{X} := U_d := \{x \mid x \in \mathbb{R}^d \text{ and } \|x\|_2 \leq 1\}. \tag{9}$$
This is a benign assumption, since the radius can always be adjusted by rescaling $k(x \cdot y) \to k((\mathcal{O}x) \cdot (\mathcal{O}y))$. Similar considerations apply to translation. In some cases the unit sphere in $\mathbb{R}^d$ is more amenable to our analysis. There we choose
$$\mathcal{X} := S_{d-1} := \{x \mid x \in \mathbb{R}^d \text{ and } \|x\|_2 = 1\}. \tag{10}$$
The latter is a good approximation of the situation where dot product kernels perform best: when the training data have approximately equal Euclidean norm (e.g. images or handwritten digits). For the sake of simplicity we will limit ourselves to (10) in most of the cases. Secondly, we choose $\mu$ to be the uniform measure on $\mathcal{X}$. This means that we have to solve the following integral equation: find functions $\psi_i \in L_2(\mathcal{X})$ together with coefficients $\lambda_i$ such that
$$T_k \psi_i(x) := \int_{\mathcal{X}} k(x \cdot y) \, \psi_i(y) \, dy = \lambda_i \psi_i(x).$$
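Before turning to the analytic machinery, here is a minimal numerical sketch (our own, under the stated assumption of a uniform measure) that approximates the eigensystem of $T_k$ on the circle $S_1$ by discretizing the integral: the eigenvalues of the sampled, weighted kernel matrix approximate the $\lambda_i$.

```python
import numpy as np

def dot_kernel_eigs(k, n=400):
    """Approximate eigenvalues of T_k for a dot-product kernel on S_1,
    using n equally spaced sample points and uniform quadrature weights."""
    t = 2 * np.pi * np.arange(n) / n
    X = np.stack([np.cos(t), np.sin(t)], axis=1)   # points on the circle S_1
    G = k(X @ X.T)                                 # G[i, j] = k(x_i . x_j)
    # T_k f(x_i) ~ (1/n) sum_j k(x_i . x_j) f(x_j) for the uniform
    # probability measure, so eigenvalues of G/n approximate those of T_k.
    return np.sort(np.linalg.eigvalsh(G / n))[::-1]

# Example: the polynomial kernel k(xi) = 1 + xi + xi**2.
lams = dot_kernel_eigs(lambda xi: 1 + xi + xi**2)
print(lams[:4])   # -> approx [1.5, 0.5, 0.5, 0.25]; remaining eigenvalues ~ 0
```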
3 Orthogonal Polynomials and Spherical Harmonics

Before we can give eigenfunctions or state necessary and sufficient conditions, we need some basic relations for Legendre polynomials and spherical harmonics. Denote by $P_n(\xi)$ the Legendre polynomials and by $P_n^d(\xi)$ the associated Legendre polynomials (see e.g. [4] for details). They have the following properties:

- The polynomials $P_n(\xi)$ and $P_n^d(\xi)$ are of degree $n$, and moreover $P_n = P_n^3$.
- The (associated) Legendre polynomials form an orthogonal basis with
$$\int_{-1}^{1} P_n^d(\xi) P_m^d(\xi) (1 - \xi^2)^{\frac{d-3}{2}} \, d\xi = \frac{|S_{d-1}|}{|S_{d-2}|} \frac{\delta_{m,n}}{N(d,n)}. \tag{11}$$
Here $|S_{d-1}| = \frac{2\pi^{d/2}}{\Gamma(d/2)}$ denotes the surface of $S_{d-1}$, and $N(d,n)$ denotes the multiplicity of spherical harmonics of order $n$ on $S_{d-1}$, i.e.
$$N(d,n) = \frac{2n + d - 2}{n} \binom{n + d - 3}{n - 1}. \tag{12}$$
- This admits the orthogonal expansion of any analytic function $k(\xi)$ on $[-1, 1]$ into the $P_n^d$.
- Moreover, the Legendre polynomials may be expanded into an orthonormal basis of spherical harmonics $Y_{n,j}^d$ by the Funk-Hecke equation (cf. e.g. [4]) to obtain
$$P_n^d(x \cdot y) = \frac{|S_{d-1}|}{N(d,n)} \sum_{j=1}^{N(d,n)} Y_{n,j}^d(x) \, Y_{n,j}^d(y) \tag{13}$$
where $\|x\| = \|y\| = 1$, and moreover
$$\int_{S_{d-1}} Y_{n,j}^d(x) \, Y_{n',j'}^d(x) \, dx = \delta_{n,n'} \, \delta_{j,j'}. \tag{14}$$

4 Conditions and Eigensystems on $S_{d-1}$

Schoenberg [7] gives necessary and sufficient conditions under which a function $k(x \cdot y)$ defined on $S_{d-1}$ satisfies Mercer's condition. In particular he proves the following two theorems:

Theorem 2 (Dot Product Kernels in Finite Dimensions) A kernel $k(x \cdot y)$ defined on $S_{d-1} \times S_{d-1}$ satisfies Mercer's condition if and only if its expansion into Legendre polynomials $P_n^d$ has only nonnegative coefficients, i.e.
$$k(\xi) = \sum_{n=0}^{\infty} b_n P_n^d(\xi) \quad \text{with} \quad b_n \geq 0. \tag{15}$$

Theorem 3 (Dot Product Kernels in Infinite Dimensions) A kernel $k(x \cdot y)$ defined on the unit sphere in a Hilbert space satisfies Mercer's condition if and only if its Taylor series expansion has only nonnegative coefficients:
$$k(\xi) = \sum_{n=0}^{\infty} a_n \xi^n \quad \text{with} \quad a_n \geq 0. \tag{16}$$

Therefore, all we have to do in order to check whether a particular kernel may be used in an SV machine or a Gaussian Process is to look at its polynomial series expansion and check the coefficients. This will be done in Section 5. Before doing so, note that (16) is a more stringent condition than (15). In other words, in order to prove Mercer's condition for arbitrary dimensions it suffices to show that the Taylor expansion contains only nonnegative coefficients. On the other hand, in order to prove that a candidate kernel function will never satisfy Mercer's condition, it is sufficient to show this for (15) with $P_n^3 = P_n$, i.e. for the Legendre polynomials. We conclude this section with an explicit representation of the eigensystem of $k(x \cdot y)$. It is given by the following lemma:

Lemma 4 (Eigensystem of Dot Product Kernels) Denote by $k(x \cdot y)$ a kernel on $S_{d-1} \times S_{d-1}$ satisfying condition (15) of Theorem 2. Then the eigensystem of $k$ is given by
$$\Psi_{n,j} = Y_{n,j}^d \quad \text{with eigenvalues} \quad \lambda_{n,j} = b_n \frac{|S_{d-1}|}{N(d,n)}, \quad \text{of multiplicity } N(d,n). \tag{17}$$
In other words, $N(d,n)$ determines the regularization properties of $k(x \cdot y)$.

Proof Using the Funk-Hecke formula (13) we may expand (15) further into spherical harmonics $Y_{n,j}^d$. The latter, however, are orthonormal, hence computing the dot product of the resulting expansion with $Y_{n,j}^d(y)$ over $S_{d-1}$ leaves only the coefficient $Y_{n,j}^d(x) \, b_n \frac{|S_{d-1}|}{N(d,n)}$, which proves that the $Y_{n,j}^d$ are eigenfunctions of the integral operator $T_k$. □

In order to obtain the eigensystem of $k(x \cdot y)$ on $U_d$ we have to expand $k$ into terms of the form $(\|x\| \|y\|)^n P_n^d\!\left(\frac{x}{\|x\|} \cdot \frac{y}{\|y\|}\right)$ and expand the eigenfunctions $\Psi$ into radial and angular parts. The latter is very technical and is thus omitted; see [6] for details.

5 Examples and Applications

In the following we will analyze a few kernels and state under which conditions they may be used as SV kernels.

Example 1 (Homogeneous Polynomial Kernels $k(x, y) = (x \cdot y)^p$) It is well known that this kernel satisfies Mercer's condition for $p \in \mathbb{N}$. We will show that for $p \notin \mathbb{N}$ this is never the case.
Thus we have to show that (15) cannot hold for an expansion in terms of Legendre polynomials ($d = 3$). From [2, 7.126.1] we obtain for $k(x, y) = |\xi|^p$ (we need $|\xi|$ to make $k$ well-defined)
$$\int_{-1}^{1} P_n(\xi) \, |\xi|^p \, d\xi = \frac{\sqrt{\pi} \, \Gamma(p + 1)}{2^p \, \Gamma\!\left(1 + \frac{p}{2} - \frac{n}{2}\right) \Gamma\!\left(\frac{3}{2} + \frac{p}{2} + \frac{n}{2}\right)} \quad \text{for } n \text{ even}. \tag{18}$$
For odd $n$ the integral vanishes since $P_n(-\xi) = (-1)^n P_n(\xi)$. In order to satisfy (15), the integral has to be nonnegative for all $n$. One can see that $\Gamma\!\left(1 + \frac{p}{2} - \frac{n}{2}\right)$ is the only term in (18) that may change its sign. Since the sign of the $\Gamma$ function alternates with period 1 for negative arguments (and it has poles at negative integer arguments), we cannot find any $p$ for which both $n = 2\lfloor \frac{p}{2} + 1 \rfloor$ and $n = 2\lceil \frac{p}{2} + 1 \rceil$ correspond to positive values of the integral.

Example 2 (Inhomogeneous Polynomial Kernels $k(x, y) = (x \cdot y + 1)^p$) Likewise, we might conjecture that $k(\xi) = (1 + \xi)^p$ is an admissible kernel for all $p > 0$. Again, we expand $k$ in a series of Legendre polynomials to obtain [2, 7.127]
$$\int_{-1}^{1} P_n(\xi) (\xi + 1)^p \, d\xi = \frac{2^{p+1} \, \Gamma^2(p + 1)}{\Gamma(p + 2 + n) \, \Gamma(p + 1 - n)}. \tag{19}$$
For $p \in \mathbb{N}$ all terms with $n > p$ vanish and the remainder is positive. For non-integer $p$, however, (19) may change its sign. This is due to $\Gamma(p + 1 - n)$. In particular, for any $p \notin \mathbb{N}$ (with $p > 0$) we have $\Gamma(p + 1 - n) < 0$ for $n = \lceil p \rceil + 1$. This violates condition (15), hence such kernels cannot be used in SV machines either.

Example 3 (Vovk's Real Polynomial $k(x, y) = \frac{1 - (x \cdot y)^p}{1 - (x \cdot y)}$ with $p \in \mathbb{N}$) This kernel can be written as $k(\xi) = \sum_{n=0}^{p-1} \xi^n$, hence all the coefficients $a_i = 1$, which means that this kernel can be used regardless of the dimensionality of the input space. Likewise, we can analyze an infinite power series:

Example 4 (Vovk's Infinite Polynomial $k(x, y) = (1 - (x \cdot y))^{-1}$) This kernel can be written as $k(\xi) = \sum_{n=0}^{\infty} \xi^n$, hence all the coefficients $a_i = 1$. This suggests poor generalization properties of that kernel.

Example 5 (Neural Network Kernels $k(x, y) = \tanh(a + (x \cdot y))$) It is a longstanding open question whether kernels $k(\xi) = \tanh(a + \xi)$ may be used as SV kernels, or for which sets of parameters this might be possible. We show that it is impossible for any set of parameters. The technique is identical to the one of Examples 1 and 2: we have to show that $k$ fails the conditions of Theorem 2. Since this is very technical (and is best done by using computer algebra programs, e.g. Maple), we refer the reader to [6] for details and explain, for the simpler case of Theorem 3, how the method works. Expanding $\tanh(a + \xi)$ into a Taylor series yields
$$\tanh a + \frac{\xi}{\cosh^2 a} - \xi^2 \, \frac{\tanh a}{\cosh^2 a} - \frac{\xi^3}{3} (1 - \tanh^2 a)(1 - 3\tanh^2 a) + O(\xi^4). \tag{20}$$
Now we analyze (20) coefficient-wise. Since all coefficients have to be nonnegative, we obtain from the first term $a \in [0, \infty)$, from the third term $a \in (-\infty, 0]$, and finally from the fourth term $|a| \in [\operatorname{arctanh} 3^{-1/2}, \operatorname{arctanh} 1]$. This leaves us with $a \in \emptyset$, hence under no conditions on its parameters does the kernel above satisfy Mercer's condition.

6 Eigensystems on $U_d$

In order to find the eigensystem of $T_k$ on $U_d$ we have to find a different representation of $k$ where the radial part $\|x\| \|y\|$ and the angular part $\xi = \frac{x}{\|x\|} \cdot \frac{y}{\|y\|}$ are factored out separately. We assume that $k(x \cdot y)$ can be written as
$$k(x \cdot y) = \sum_{n=0}^{\infty} K_n(\|x\|, \|y\|) \, P_n^d\!\left(\frac{x}{\|x\|} \cdot \frac{y}{\|y\|}\right) \tag{21}$$
where the $K_n$ are polynomials. To see that we can always find such an expansion for analytic functions, first expand $k$ in a Taylor series and then expand each term $(\|x\| \|y\| \xi)^n$ into $(\|x\| \|y\|)^n \sum_{j=0}^{n} c_j(d, n) P_j^d(\xi)$. Rearranging terms into a series of $P_j^d$ gives expansion (21). This allows us to factorize the integral operator into its radial and its angular part.
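The coefficient test of Theorem 3 is easy to automate. A small sympy sketch (our own; the authors used Maple) expands a candidate dot-product kernel in a Taylor series and reports the first few coefficients, reproducing the sign pattern that rules out the tanh kernel:

```python
import sympy as sp

xi, a = sp.symbols('xi a', real=True)

def taylor_coeffs(k_expr, order=4):
    """First `order` Taylor coefficients of k(xi) around xi = 0.
    By Theorem 3, all of them must be nonnegative for a Mercer kernel
    on the unit sphere of a Hilbert space."""
    series = sp.series(k_expr, xi, 0, order).removeO()
    return [sp.simplify(series.coeff(xi, n)) for n in range(order)]

# tanh kernel: the xi^0 coefficient is tanh(a) (needs a >= 0), while the
# xi^2 coefficient equals -tanh(a)/cosh(a)**2 (needs a <= 0) -- contradictory.
for n, c in enumerate(taylor_coeffs(sp.tanh(a + xi))):
    print(n, c)
```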
6 Eigensystems on U_d

In order to find the eigensystem of T_k on U_d we have to find a different representation of k in which the radial part ‖x‖‖y‖ and the angular part ξ̂ = (x/‖x‖)·(y/‖y‖) are factored out separately. We assume that k(x·y) can be written as

k(x·y) = \sum_{n=0}^{∞} K_n(‖x‖, ‖y‖)\, P_n^d(ξ̂),   (21)

where the K_n are polynomials. To see that we can always find such an expansion for analytic functions, first expand k in a Taylor series and then expand each coefficient (‖x‖‖y‖ξ̂)^n into (‖x‖‖y‖)^n \sum_{j=0}^{n} c_j(d,n) P_j^d(ξ̂). Rearranging terms into a series of the P_j^d gives expansion (21). This allows us to factorize the integral operator into its radial and its angular part. We obtain the following theorem:

Theorem 5 (Eigenfunctions of T_k on U_d) For any kernel k with expansion (21), the eigensystem of the integral operator T_k on U_d is given by

φ_{n,j,l}(x) = Y_{n,j}(x/‖x‖)\, ϕ_{n,l}(‖x‖)   (22)

with eigenvalues λ_{n,j,l} = \frac{|S^{d-1}|}{N(d,n)} λ_{n,l} and multiplicity N(d,n), where (ϕ_{n,l}, λ_{n,l}) is the eigensystem of the integral operator

\int_{0}^{1} r_x^{d-1} K_n(r_x, r_y)\, ϕ_{n,l}(r_x)\, dr_x = λ_{n,l}\, ϕ_{n,l}(r_y).   (23)

In general, (23) cannot be solved analytically. However, the accuracy of numerically solving (23) (a finite integral in one dimension) is much higher than when diagonalizing T_k directly.

Proof. All we have to do is split the integral \int_{U_d} dx into \int_0^1 r^{d-1} dr \int_{S^{d-1}} dΩ. Moreover, note that since T_k commutes with the group of rotations, it follows from group theory [4] that we may separate the angular and the radial part in the eigenfunctions, hence use the ansatz φ(x) = φ_0(x/‖x‖) ϕ(‖x‖). Next apply the Funk-Hecke equation (13) to expand the associated Legendre polynomials P_n^d into the spherical harmonics Y_{n,j}. As in Lemma 4, this leads to the spherical harmonics as the angular part of the eigensystem. The remaining radial part is then (23). See [6] for more details.

This leads to the eigensystem of the homogeneous polynomial kernel k(x,y) = (x·y)^p: if we use (18) in conjunction with (12) to expand ξ^p into a series of P_n^d(ξ), we obtain an expansion of type (21) in which K_n(r_x, r_y) ∝ (r_x r_y)^p for n ≤ p and K_n(r_x, r_y) = 0 otherwise. Hence the only solution to (23) is ϕ_n(r) ∝ r^p, and thus φ_{n,j}(x) = ‖x‖^p Y_{n,j}(x/‖x‖). Eigenvalues can be obtained in a similar way.
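Since (23) is a one-dimensional integral equation, a Nystrom-type quadrature discretization is a natural way to solve it numerically, in the spirit of the remark after Theorem 5. The sketch below is an illustrative implementation under that reading; the radial kernel K_n, the number of quadrature nodes, and the rank-one test case are assumptions made for the example.

# Nystrom-style sketch for the radial eigenproblem (23):
#   int_0^1 r_x^{d-1} K_n(r_x, r_y) phi(r_x) dr_x = lambda * phi(r_y).
# Gauss-Legendre nodes are mapped from [-1, 1] to [0, 1].
import numpy as np

def radial_eigensystem(K_n, d=3, m=60):
    nodes, weights = np.polynomial.legendre.leggauss(m)
    r = 0.5 * (nodes + 1.0)               # quadrature nodes on [0, 1]
    w = 0.5 * weights
    # Discretized operator: A[i, j] = w_j * r_j^(d-1) * K_n(r_j, r_i).
    A = K_n(r[None, :], r[:, None]) * (w * r ** (d - 1))[None, :]
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    return vals.real[order], vecs.real[:, order], r

# Rank-one case from the text: K_n(rx, ry) = (rx * ry)^p has one nonzero
# eigenvalue, 1/(d + 2p), with eigenfunction proportional to r^p.
p = 2
lam, phi, r = radial_eigensystem(lambda rx, ry: (rx * ry) ** p, d=3)
print(lam[:3])                                        # ~[1/7, 0, 0]
print(np.allclose(phi[:, 0] / phi[-1, 0], (r / r[-1]) ** p))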
7 Discussion

In this paper we gave conditions on the properties of dot product kernels under which the latter satisfy Mercer's condition. While the requirements are relatively easy to check in the case where data is restricted to spheres (which allowed us to prove that several kernels may never be suitable SV kernels) and led to explicit formulations for eigenvalues and eigenfunctions, the corresponding calculations on balls are more intricate and mainly amenable to numerical analysis.

Acknowledgments: AS was supported by the DFG (Sm 62-1). The authors thank Bernhard Scholkopf for helpful discussions.

References

[1] C. J. C. Burges. Geometry and invariance in kernel based methods. In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 89-116, Cambridge, MA, 1999. MIT Press.
[2] I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series, and Products. Academic Press, New York, 1981.
[3] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London, A 209:415-446, 1909.
[4] C. Muller. Analysis of Spherical Symmetries in Euclidean Spaces, volume 129 of Applied Mathematical Sciences. Springer, New York, 1997.
[5] N. Oliver, B. Scholkopf, and A. J. Smola. Natural regularization in SVMs. In A. J. Smola, P. L. Bartlett, B. Scholkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 51-60, Cambridge, MA, 2000. MIT Press.
[6] Z. Ovari. Kernels, eigenvalues and support vector machines. Honours thesis, Australian National University, Canberra, 2000.
[7] I. Schoenberg. Positive definite functions on spheres. Duke Math. J., 9:96-108, 1942.
[8] A. Smola, B. Scholkopf, and K.-R. Muller. The connection between regularization operators and support vector kernels. Neural Networks, 11:637-649, 1998.
[9] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[10] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models. Kluwer, 1998.
Sparse Kernel Principal Component Analysis

Michael E. Tipping
Microsoft Research
St George House, 1 Guildhall St
Cambridge CB2 3NH, U.K.
mtipping@microsoft.com

Abstract

'Kernel' principal component analysis (PCA) is an elegant nonlinear generalisation of the popular linear data analysis method, where a kernel function implicitly defines a nonlinear transformation into a feature space wherein standard PCA is performed. Unfortunately, the technique is not 'sparse', since the components thus obtained are expressed in terms of kernels associated with every training vector. This paper shows that by approximating the covariance matrix in feature space by a reduced number of example vectors, using a maximum-likelihood approach, we may obtain a highly sparse form of kernel PCA without loss of effectiveness.

1 Introduction

Principal component analysis (PCA) is a well-established technique for dimensionality reduction, and examples of its many applications include data compression, image processing, visualisation, exploratory data analysis, pattern recognition and time series prediction. Given a set of N d-dimensional data vectors x_n, which we take to have zero mean, the principal components are the linear projections onto the 'principal axes', defined as the leading eigenvectors of the sample covariance matrix S = N^{-1} \sum_{n=1}^{N} x_n x_n^T = N^{-1} X^T X, where X = (x_1, x_2, ..., x_N)^T is the conventionally defined 'design' matrix. These projections are of interest as they retain maximum variance and minimise the error of subsequent linear reconstruction. However, because PCA only defines a linear projection of the data, the scope of its application is necessarily somewhat limited.

This has naturally motivated various developments of nonlinear 'principal component analysis' in an effort to model non-trivial data structures more faithfully, and a particularly interesting recent innovation has been 'kernel PCA' [4]. Kernel PCA, summarised in Section 2, makes use of the 'kernel trick', so effectively exploited by the 'support vector machine', in that a kernel function k(·,·) may be considered to represent a dot (inner) product in some transformed space if it satisfies Mercer's condition, i.e. if it is the continuous symmetric kernel of a positive integral operator. This can be an elegant way to 'non-linearise' linear procedures which depend only on inner products of the examples.

Applications utilising kernel PCA are emerging [2], but in practice the approach suffers from one important disadvantage in that it is not a sparse method. Computation of principal component projections for a given input x requires evaluation of the kernel function k(x, x_n) in respect of all N 'training' examples x_n. This is an unfortunate limitation as in practice, to obtain the best model, we would like to estimate the kernel principal components from as much data as possible. Here we tackle this problem by first approximating the covariance matrix in feature space by a subset of outer products of feature vectors, using a maximum-likelihood criterion based on a 'probabilistic PCA' model detailed in Section 3. Subsequently applying (kernel) PCA defines sparse projections. Importantly, the approximation we adopt is principled and controllable, and is related to the choice of the number of components to 'discard' in the conventional approach. We demonstrate its efficacy in Section 4 and illustrate how it can offer similar performance to a full non-sparse kernel PCA implementation while offering much reduced computational overheads.
2 Kernel PCA

Although PCA is conventionally defined (as above) in terms of the covariance, or outer-product, matrix, it is well established that the eigenvectors of X^T X can be obtained from those of the inner-product matrix X X^T. If V is an orthogonal matrix of column eigenvectors of X X^T with corresponding eigenvalues in the diagonal matrix Λ, then by definition (X X^T) V = V Λ. Pre-multiplying by X^T gives:

(X^T X)(X^T V) = (X^T V) Λ.   (1)

From inspection, it can be seen that the eigenvectors of X^T X are X^T V, with eigenvalues Λ. Note, however, that the column vectors X^T V are not normalised, since for column i, v_i^T X X^T v_i = λ_i v_i^T v_i = λ_i, so the correctly normalised eigenvectors of X^T X, and thus the principal axes of the data, are given by V_pca = X^T V Λ^{-1/2}. This derivation is useful if d > N, when the dimensionality of x is greater than the number of examples, but it is also fundamental for implementing kernel PCA.

In kernel PCA, the data vectors x_n are implicitly mapped into a feature space by a set of functions {φ}: x_n → φ(x_n). Although the vectors φ_n = φ(x_n) in the feature space are generally not known explicitly, their inner products are defined by the kernel: φ_m^T φ_n = k(x_m, x_n). Defining Φ as the (notional) design matrix in feature space, and exploiting the above inner-product PCA formulation, allows the eigenvectors of the covariance matrix in feature space¹, S_φ = N^{-1} Σ_n φ_n φ_n^T, to be specified as:

V_kpca = Φ^T V Λ^{-1/2},   (2)

where V, Λ are the eigenvectors/values of the kernel matrix K, with (K)_{mn} = k(x_m, x_n). Although we can't compute V_kpca, since we don't know Φ explicitly, we can compute projections of arbitrary test vectors x_* → φ_* onto V_kpca in feature space:

φ_*^T V_kpca = φ_*^T Φ^T V Λ^{-1/2} = k_*^T V Λ^{-1/2},   (3)

where k_* is the N-vector of inner products of x_* with the data in kernel space: (k_*)_n = k(x_*, x_n). We can thus compute, and plot, these projections; Figure 1 gives an example for some synthetic 3-cluster data in two dimensions.

¹Here, and in the rest of the paper, we do not 'centre' the data in feature space, although this may be achieved if desired (see [4]). In fact, we would argue that when using a Gaussian kernel, it does not necessarily make sense to do so.

[Figure 1: nine contour panels with eigenvalues 0.218, 0.203, 0.191, 0.057, 0.053, 0.051, 0.047, 0.043, and 0.036 given above the projections.]

Figure 1: Contour plots of the first nine principal component projections evaluated over a region of input space for data from 3 Gaussian clusters (standard deviation 0.1; axis scales are shown in Figure 3), each comprising 30 vectors. A Gaussian kernel, exp(-‖x - x'‖²/r²), with width r = 0.25, was used. The corresponding eigenvalues are given above each projection. Note how the first three components 'pick out' the individual clusters [4].

3 Probabilistic Feature-Space PCA

Our approach to sparsifying kernel PCA is to approximate a priori the feature-space sample covariance matrix S_φ with a sum of weighted outer products of a reduced number of feature vectors. (The basis of this technique is thus general, and its application is not necessarily limited to kernel PCA.) This is achieved probabilistically, by maximising the likelihood of the feature vectors under a Gaussian density model φ ~ N(0, C), where we specify the covariance C by:

C = σ²I + \sum_{i=1}^{N} w_i φ_i φ_i^T = σ²I + Φ^T W Φ,   (4)

where w_1, ..., w_N are the adjustable weights, W is a matrix with those weights on the diagonal, and σ² is an isotropic 'noise' component common to all dimensions of feature space.
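As a concrete illustration of (2)-(3), the following sketch (not from the paper) eigendecomposes the kernel matrix and projects a test point. The Gaussian kernel, its width r = 0.25, and the synthetic three-cluster data mimic the setup described for Figure 1; the cluster centres are illustrative, and no feature-space centering is applied, in line with footnote 1.

# Minimal sketch of kernel PCA projections, eqs. (2)-(3): eigendecompose the
# kernel matrix K and project a test point via k_*^T V Lambda^{-1/2}.
import numpy as np

def gaussian_kernel(A, B, r=0.25):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / r ** 2)

def kpca_fit(X, r=0.25):
    K = gaussian_kernel(X, X, r)
    lam, V = np.linalg.eigh(K)              # ascending eigenvalues
    return lam[::-1], V[:, ::-1]            # leading components first

def kpca_project(X, x_star, lam, V, n_components, r=0.25):
    k_star = gaussian_kernel(np.atleast_2d(x_star), X, r)[0]
    return k_star @ V[:, :n_components] / np.sqrt(lam[:n_components])

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.1, size=(30, 2))
                    for c in ((0, 0), (1, 0), (0.5, 1))])  # 3 clusters
lam, V = kpca_fit(X)
print(kpca_project(X, [0.5, 0.5], lam, V, n_components=9))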
Of course, a naive maximum of the likelihood under this model is obtained with σ² = 0 and all w_i = 1/N. However, if we fix σ², and optimise only the weighting factors w_i, we will find that the maximum-likelihood estimates of many w_i are zero, thus realising a sparse representation of the covariance matrix. This probabilistic approach is motivated by the fact that if we relax the form of the model, by defining it in terms of outer products of N arbitrary vectors v_i (rather than the fixed training vectors), i.e. C = σ²I + \sum_{i=1}^{N} w_i v_i v_i^T, then we realise a form of 'probabilistic PCA' [6]. That is, if {u_i, λ_i} are the set of eigenvectors/values of S_φ, then the likelihood under this model is maximised by v_i = u_i and w_i = λ_i - σ², for those i for which λ_i > σ². For λ_i ≤ σ², the most likely weights w_i are zero.

3.1 Computations in feature space

We wish to maximise the likelihood under a Gaussian model with covariance given by (4). Ignoring terms independent of the weighting parameters, its log is given by:

L = -\frac{1}{2} \sum_{n=1}^{N} \left[ \log|C| + φ_n^T C^{-1} φ_n \right].   (5)

Computing (5) requires the quantities |C| and φ_n^T C^{-1} φ_n, which for infinite-dimensional feature spaces might appear problematic. However, by judiciously rewriting the terms of interest, we can both compute the log-likelihood (to within a constant) and optimise it with respect to the weights. First, we can write:

\log|σ²I + Φ^T W Φ| = D \log σ² + \log|W^{-1} + σ^{-2} Φ Φ^T| + \log|W|.   (6)

The potential problem of infinite dimensionality, D, of the feature space now enters only in the first term, which is constant if σ² is fixed and so does not affect maximisation. The term in |W| is straightforward, and the remaining term can be expressed in terms of the inner-product (kernel) matrix:

W^{-1} + σ^{-2} Φ Φ^T = W^{-1} + σ^{-2} K,   (7)

where K is the kernel matrix such that (K)_{mn} = k(x_m, x_n). For the data-dependent term in the likelihood, we can use the Woodbury matrix-inversion identity to compute the quantities φ_n^T C^{-1} φ_n:

φ_n^T (σ²I + Φ^T W Φ)^{-1} φ_n = σ^{-2} k(x_n, x_n) - σ^{-4} k_n^T (W^{-1} + σ^{-2} K)^{-1} k_n,   (8)

with k_n = [k(x_n, x_1), k(x_n, x_2), ..., k(x_n, x_N)]^T.

3.2 Optimising the weights

To maximise the log-likelihood with respect to the w_i, differentiating (5) gives us:

\frac{∂L}{∂w_i} = \frac{1}{2} \left( φ_i^T C^{-1} Φ^T Φ C^{-1} φ_i - N φ_i^T C^{-1} φ_i \right)   (9)

= \frac{1}{2σ²} \left( \sum_{n=1}^{N} μ_{ni}² + N Σ_{ii} - N w_i \right),   (10)

where Σ and μ_n are defined respectively by

Σ = (W^{-1} + σ^{-2} K)^{-1},   (11)
μ_n = σ^{-2} Σ k_n.   (12)

Setting (10) to zero gives re-estimation equations for the weights:

w_i^{new} = N^{-1} \sum_{n=1}^{N} μ_{ni}² + Σ_{ii}.   (13)

The re-estimates (13) are equivalent to expectation-maximisation updates, which would be obtained by adopting a factor-analytic perspective [3] and introducing a set of 'hidden' Gaussian explanatory variables whose conditional means and common covariance, given the feature vectors and the current values of the weights, are given by μ_n and Σ respectively (hence the notation). As such, (13) is guaranteed to increase L unless it is already at a maximum. However, an alternative rearrangement of (10), motivated by [5], leads to a re-estimation update which typically converges significantly more quickly:

w_i^{new} = \frac{\sum_{n=1}^{N} μ_{ni}²}{N (1 - Σ_{ii}/w_i)}.   (14)

Note that these w_i updates (14) are defined in terms of the computable (i.e. not dependent on explicit feature-space vectors) quantities Σ and μ_n.

3.3 Principal component analysis

The principal axes. Sparse kernel PCA proceeds by finding the principal axes of the covariance model C = σ²I + Φ^T W Φ.
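A minimal sketch of the resulting optimisation loop, assuming K and σ² are given: it iterates the faster update (14), using (11) and (12) restricted to the currently active weights, and prunes weights that collapse to zero. The initialisation, iteration count, pruning threshold, and the small guard on the denominator are illustrative choices, not taken from the paper.

# Sketch of the weight re-estimation (11)-(14): iterate
#   w_i <- (sum_n mu_ni^2) / (N (1 - Sigma_ii / w_i)),
# pruning weights that collapse to zero.  K is the kernel matrix.
import numpy as np

def sparsify_covariance(K, sigma2, n_iter=200, prune=1e-9):
    N = K.shape[0]
    w = np.full(N, 1.0 / N)                       # naive starting point
    for _ in range(n_iter):
        active = w > prune
        Ka = K[np.ix_(active, active)]
        Sigma = np.linalg.inv(np.diag(1.0 / w[active]) + Ka / sigma2)  # (11)
        Mu = (K[:, active] @ Sigma) / sigma2      # row n is mu_n^T,   (12)
        mu2 = (Mu ** 2).sum(axis=0)               # sum_n mu_ni^2
        gamma = np.maximum(1.0 - np.diag(Sigma) / w[active], 1e-12)
        w_new = np.zeros(N)
        w_new[active] = mu2 / (N * gamma)         # faster update,     (14)
        w = np.clip(w_new, 0.0, None)
    return w                                      # most entries end up zero

On convergence, the nonzero entries of w mark the representing vectors, and the final Σ feeds directly into the sparse eigendecomposition of the next subsection.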
These are identical to those of Φ^T W Φ, but with eigenvalues all σ² larger. Letting Φ̃ = W^{1/2} Φ, we then need the eigenvectors of Φ̃^T Φ̃. Using the technique of Section 2, if the eigenvectors of Φ̃ Φ̃^T = W^{1/2} Φ Φ^T W^{1/2} = W^{1/2} K W^{1/2} are Ũ, with corresponding eigenvalues Λ̃, then the eigenvectors/values {U, Λ} of C that we desire are given by:

U = Φ^T W^{1/2} Ũ Λ̃^{-1/2},   (15)
Λ = Λ̃ + σ² I.   (16)

Computing projections. Again, we can't compute the eigenvectors U explicitly in (15), but we can compute the projections of a general feature vector φ_* onto the principal axes:

φ_*^T U = k̃_*^T W^{1/2} Ũ Λ̃^{-1/2},   (17)

where k̃_* is the sparse vector containing the non-zero-weighted elements of k_*, defined earlier. The corresponding rows of W^{1/2} Ũ Λ̃^{-1/2} are combined into a single projecting matrix P, each column of which gives the coefficients of the kernel functions for the evaluation of each principal component.

3.4 Computing reconstruction error

The squared reconstruction error in kernel space for a test vector φ_* is given by:

‖φ_* - φ̂_*‖² = k(x_*, x_*) - 2 k̃_*^T P P^T k̃_* + k̃_*^T P P^T K̃ P P^T k̃_*,   (18)

with K̃ the kernel matrix evaluated only for the representing vectors.

4 Examples

To obtain sparse kernel PCA projections, we first specify the noise variance σ², which is the amount of variance per coordinate that we are prepared to allow to be explained by the (structure-free) isotropic noise rather than by the principal axes (this choice is a surrogate for deciding how many principal axes to retain in the conventional approach). Unfortunately, the measure is in feature space, which makes it rather more difficult to interpret than if it were in data space (equally so, of course, for interpretation of the eigenvalue spectrum in the non-sparse case).

We apply sparse kernel PCA to the Gaussian data of Figure 1 earlier, with the same kernel function and specifying σ = 0.25, deliberately chosen to give nine representing kernels so as to facilitate comparison. Figure 2 shows the nine principal component projections based on the approximated covariance matrix, and gives qualitatively equivalent results to Figure 1 while utilising only 10% of the kernels. Figure 3 shows the data and highlights those examples corresponding to the nine kernels with nonzero weights. Note, although we do not consider this aspect further here, that these representing vectors are themselves highly informative of the structure of the data (i.e. with a Gaussian kernel, for example, they tend to represent distinguishable clusters). Also in Figure 3, contours of reconstruction error, based only on those nine kernels, are plotted and indicate that the nonlinear model has more faithfully captured the structure of the data than would standard linear PCA.

Figure 2: The nine principal component projections obtained by sparse kernel PCA (eigenvalues 0.199, 0.184, 0.161, 0.082, 0.074, 0.074, 0.074, 0.072, 0.071).

To further illustrate the fidelity of the sparse approximation, we analyse the 200 training examples of the 7-dimensional 'Pima Indians diabetes' database [1]. Figure 4 (left) shows a plot of reconstruction error against the number of principal components utilised by both conventional kernel PCA and its sparse counterpart, with σ² chosen so as to utilise 20% of the kernels (40). An expected small reduction in accuracy is evident in the sparse case. Figure 4 (right) shows the error on the associated test set when using a linear support vector machine to classify the data based on those numbers of principal components.
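The projection machinery of (15)-(17) reduces to a few lines once the nonzero weights are known. The sketch below is an illustrative reading, not the paper's code; it returns the projecting matrix P whose columns give the kernel coefficients of each sparse principal component.

# Sketch of eqs. (15)-(17): with the nonzero weights w_rep and the kernel
# matrix K_rep over the representing vectors, the projecting matrix is
# P = W^{1/2} U~ Lambda~^{-1/2}; a test point is projected as k_rep_star @ P.
import numpy as np

def sparse_kpca_projector(K_rep, w_rep, n_components):
    Wh = np.sqrt(w_rep)
    M = Wh[:, None] * K_rep * Wh[None, :]        # W^{1/2} K W^{1/2}
    lam, U = np.linalg.eigh(M)                   # ascending order
    lam = lam[::-1][:n_components]
    U = U[:, ::-1][:, :n_components]
    return Wh[:, None] * U / np.sqrt(lam)[None, :]

# Projection of a test point x_*: evaluate the kernel against the
# representing vectors only, giving k_rep_star, then take k_rep_star @ P.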
Here the sparse projections actually perform marginally better on average, a consequence of both randomness and, we note with interest, presumably some inherent complexity control implied by the use of a sparse approximation.

Figure 3: The data with the nine representing kernels circled, and contours of reconstruction error (computed in feature space, although displayed as a function of x) overlaid.

[Figure 4: two panels; left, RMS reconstruction error (roughly 0.05-0.15) against 1-25 retained components; right, test-set misclassifications (roughly 60-100) over the same range.]

Figure 4: RMS reconstruction error (left) and test-set misclassifications (right) for numbers of retained principal components ranging from 1 to 25. For the standard case, this was based on all 200 training examples; for the sparse form, a subset of 40. A Gaussian kernel of width 10 was utilised, which gives near-optimal results if used in an SVM classification.

References

[1] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.
[2] S. Romdhani, S. Gong, and A. Psarrou. A multi-view nonlinear active shape model using kernel PCA. In Proceedings of the 1999 British Machine Vision Conference, pages 483-492, 1999.
[3] D. B. Rubin and D. T. Thayer. EM algorithms for ML factor analysis. Psychometrika, 47(1):69-76, 1982.
[4] B. Scholkopf, A. Smola, and K.-R. Muller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998. Technical Report No. 44, 1996, Max Planck Institut fur biologische Kybernetik, Tubingen.
[5] M. E. Tipping. The Relevance Vector Machine. In S. A. Solla, T. K. Leen, and K.-R. Muller, editors, Advances in Neural Information Processing Systems 12, pages 652-658. Cambridge, MA: MIT Press, 2000.
[6] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622, 1999.
Redundancy and Dimensionality Reduction in Sparse-Distributed Representations of Natural Objects in Terms of Their Local Features

Penio S. Penev*
Laboratory of Computational Neuroscience
The Rockefeller University
1230 York Avenue, New York, NY 10021
penev@rockefeller.edu
http://venezia.rockefeller.edu/

Abstract

Low-dimensional representations are key to solving problems in high-level vision, such as face compression and recognition. Factorial coding strategies for reducing the redundancy present in natural images on the basis of their second-order statistics have been successful in accounting for both psychophysical and neurophysiological properties of early vision. Class-specific representations are presumably formed later, at the higher-level stages of cortical processing. Here we show that when retinotopic factorial codes are derived for ensembles of natural objects, such as human faces, not only redundancy, but also dimensionality is reduced. We also show that objects are built from parts in a non-Gaussian fashion, which allows these local-feature codes to have dimensionalities that are substantially lower than the respective Nyquist sampling rates.

1 Introduction

Sensory systems must take advantage of the statistical structure of their inputs in order to process them efficiently, both to suppress noise and to generate compact representations of seemingly complex data. Redundancy reduction has been proposed as a design principle for such systems (Barlow, 1961); in the context of Information Theory (Shannon, 1948), it leads to factorial codes (Barlow et al., 1989; Linsker, 1988). When only the second-order statistics are available for a given sensory ensemble, the maximum-entropy initial assumption (Jaynes, 1982) leads to a multi-dimensional Gaussian model of the probability density; then, the Karhunen-Loeve Transform (KLT) provides a family of equally efficient factorial codes. In the context of the ensemble of natural images, with a specific model for the noise, these codes have been able to account quantitatively for the contrast sensitivity of human subjects in all signal-to-noise regimes (Atick and Redlich, 1992). Moreover, when the receptive fields are constrained to have retinotopic organization, their circularly symmetric, center-surround opponent structure is recovered (Atick and Redlich, 1992).

Although redundancy can be reduced in the ensemble of natural images, because its spectrum obeys a power law (Ruderman and Bialek, 1994), there is no natural cutoff, and the dimensionality of the "retinal" code is the same as that of the input. This situation is not typical. When KLT representations are derived for ensembles of natural objects, such as human faces (Sirovich and Kirby, 1987), the factorial codes in the resulting families are naturally low-dimensional (Penev, 1998; Penev and Sirovich, 2000). Moreover, when a retinotopic organization is imposed, in a procedure called Local Feature Analysis (LFA), the resulting feed-forward receptive fields are a dense set of detectors for the local features from which the objects are built (Penev and Atick, 1996). LFA has also been used to derive local features for the natural-object ensembles of: 3D surfaces of human heads (Penev and Atick, 1996), and 2D images of pedestrians (Poggio and Girosi, 1998).

*Present address: NEC Research Institute, 4 Independence Way, Princeton, NJ 08550
Parts-based representations of object classes, including faces, have recently been derived by Non-negative Matrix Factorization (NMF) (Lee and Seung, 1999), "biologically" motivated by the hypothesis that neural systems are incapable of representing negative values. As has already been pointed out (Mel, 1999), this hypothesis is incompatible with a wealth of reliably documented neural phenomena, such as center-surround receptive-field organization, excitation and inhibition, and ON/OFF visual-pathway processing, among others. Here we demonstrate that when parts-based representations of natural objects are derived by redundancy reduction constrained by retinotopy (Penev and Atick, 1996), the resulting sparse-distributed, local-feature representations not only are factorial, but also are of dimensionalities substantially lower than the respective Nyquist sampling rates.

2 Compact Global Factorial Codes of Natural Objects

A properly registered and normalized object will be represented by the receptor readout values φ(x), where {x} is a grid that contains V receptors. An ensemble of T objects will be denoted by {φ^t(x)}, t ∈ T.¹ Briefly (see, e.g., Sirovich and Kirby, 1987, for details), when T > V, its Karhunen-Loeve Transform (KLT) representation is given by

φ^t(x) = \sum_{r=1}^{V} a_r^t σ_r ψ_r(x),   (1)

where {σ_r²} (arranged in non-increasing order) is the eigenspectrum of the spatial and temporal correlation matrices, and {ψ_r(x)} and {a_r^t} are their respective orthonormal eigenvectors. The KLT representation of an arbitrary, possibly out-of-sample, object φ(x) is given by the joint activation

a_r = σ_r^{-1} \sum_x ψ_r(x) φ(x)   (2)

of the set of global analysis filters {σ_r^{-1} ψ_r(x)}, which are indexed with r and whose outputs, {a_r}, are decorrelated.² In the context of the ensemble of natural images, the "whitening" by the factor σ_r^{-1} has been found to account for the contrast sensitivity of human subjects (Atick and Redlich, 1992). When the output dimensionality is set to N < V, the reconstruction that is optimal in the amount of preserved signal power, and the respective error, utilize the global synthesis filters {σ_r ψ_r(x)} and are given by

φ_N^{rec} = \sum_{r=1}^{N} a_r σ_r ψ_r \quad \text{and} \quad φ_N^{err} = φ - φ_N^{rec}.   (3)

¹For the illustrations in this study, T = 11254 frontal-pose facial images were registered and normalized to a grid with V = 64 × 60 = 3840 pixels, as previously described (Penev and Sirovich, 2000).

²This is certainly true for in-sample objects, since the {a_r^t} are orthonormal (1). For out-of-sample objects, there is always the issue whether the size of the training sample, T, is sufficient to ensure proper generalization. The current ensemble has been found to generalize well in the regime for r that is explored here (Penev and Sirovich, 2000).

[Figure 1 shows reconstructions for N = 20, 60, 100, 150, 220, 350, 500, 700, and 1000.]

Figure 1: Successive reconstructions, errors, and local entropy densities. For the indicated global dimensionalities, N, the reconstructions φ_N^{rec} (3) of an out-of-sample example are shown in the top row, and the respective residual errors, φ_N^{err}, in the middle row (the first two errors are amplified 5×, and the rest 20×). The respective entropy densities O_N(x) (5) are shown in the bottom row, low-pass filtered with F_{r,N} = σ_r²/(σ_r² + σ_N²) (cf. Fig. 3), and scaled adaptively at each N to fill the available dynamic range.
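The KLT machinery of (1)-(3) can be phrased compactly in terms of an SVD. The sketch below is illustrative rather than the authors' code: random data stands in for the registered face ensemble, which is not available here, and the normalizations are up to the grid-measure conventions of the text.

# Sketch of the KLT of eqs. (1)-(3): the SVD of the data matrix gives the
# eigenmodes psi_r and the spectrum sigma_r; coefficients are whitened as
# in (2), and the N-term reconstruction follows (3).  The ensemble is used
# without mean subtraction, so that the leading eigenvector plays the role
# of the "typical face" of the text.
import numpy as np

rng = np.random.default_rng(0)
T, V = 500, 64 * 60                       # ensemble size, grid size
Phi = rng.standard_normal((T, V))         # stand-in for registered images

U, s, Psi = np.linalg.svd(Phi, full_matrices=False)
sigma = s / np.sqrt(T)                    # sigma_r of the correlations

def klt_coefficients(phi):
    return (Psi @ phi) / sigma            # a_r = sigma_r^{-1} <psi_r, phi>, (2)

def reconstruct(phi, N):
    a = klt_coefficients(phi)
    return (a[:N] * sigma[:N]) @ Psi[:N]  # phi_N^rec, eq. (3)

phi = Phi[0]
for N in (20, 100, 400):
    err = phi - reconstruct(phi, N)
    print(N, np.linalg.norm(err) / np.linalg.norm(phi))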
With the standard multidimensional Gaussian model for the probability density P[φ] (Moghaddam and Pentland, 1997; Penev, 1998), the information content of the reconstruction (3), equal to the optimal code length (Shannon, 1948; Barlow, 1961), is

-\log P[φ_N^{rec}] \propto \sum_{r=1}^{N} |a_r|².   (4)

Because of the normalization by σ_r in (2), all KLT coefficients have unit variance; the model (4) is spherically symmetric, and all filters contribute equally to the entropy of the code. What criterion, then, could guide dimensionality reduction? Following (Atick and Redlich, 1992), when noise is taken into account, N ≈ 400 has been found as an estimate of the global dimensionality for the ensemble of frontal-pose faces (Penev and Sirovich, 2000). This conclusion is reinforced by the perceptual quality of the successive reconstructions and errors, shown in Fig. 1: the face-specific information crosses over from the error to the reconstruction at N ≈ 400, but not much earlier.

3 Representation of Objects in Terms of Local Features

It was shown in Section 2 that when redundancy reduction on the basis of the second-order statistics is applied to ensembles of natural objects, the resulting factorial code is compact (low dimensional), in contrast with the "retinal" code, which preserves the dimensionality of the input (Atick and Redlich, 1992). Also, the filters in the beginning of the hierarchy (Fig. 2) correspond to intuitively understandable sources of variability. Nevertheless, this compact code has some problems. The learned receptive fields, shown in Fig. 2, are global, in contrast with the local, retinotopic organization of sensory processing, found throughout most of the visual system. Moreover, although the eigenmodes in the regime r ∈ [100, 400] are clearly necessary to preserve the object-specific information (Fig. 1), their respective global filters (Fig. 2) are ripply, non-intuitive, and resemble the hierarchy of sine/cosine modes of the translationally invariant ensemble of natural images.

In order to cope with these problems in the context of object ensembles, analogously to the local factorial retinal code (Atick and Redlich, 1992), Local Feature Analysis (LFA) has been developed (Penev and Atick, 1996; Penev, 1998). LFA uses a set of local analysis filters, K(x, y), whose outputs are topographically indexed with the grid variable x (cf. eq. 2),

O(x) ≡ \frac{1}{\sqrt{V}} \sum_y K(x, y) φ(y),   (5)

and are as decorrelated as possible.

Figure 2: The basis-vector hierarchy of the global factorial code. Shown are the first 14 eigenvectors, and the ones with indices: 21, 41; and 60, 94, 155, 250, 500, 1000, 2000, 3840 (bottom row).

Figure 3: Local feature detectors and residual correlations of their outputs. centers: The typical face (ψ_1, Fig. 2) is marked with the central positions of five of the feature detectors. a-e: For those choices of x_m, the local filters K(x_m, y) (6) are shown in the top row, and the residual correlations of their respective outputs with the outputs of all the rest, P(x_m, y) (9), in the bottom. In principle, the cutoff at r = N, which effectively implements a low-pass filter, should not be as sharp as in (6); it has been shown that the human contrast sensitivity is described well by a smooth cutoff of the type F_r = σ_r²/(σ_r² + n²), where n² is a measure of the effective noise power (Atick and Redlich, 1992). For this figure, K(x, y) = \sum_r ψ_r(x) \frac{F_r}{σ_r} ψ_r(y), with N = 400 and n = σ_{400}.
For a given dimensionality, or width of the band, of the compact code, N, maximal decorrelation can be achieved with K(x, y) = K_N^{(1)}(x, y) from the following topographic family of kernels:

K_N^{(n)}(x, y) ≡ \sum_{r=1}^{N} ψ_r(x)\, σ_r^{-n}\, ψ_r(y).   (6)

For the ensemble of natural scenes, which is translationally and rotationally invariant, the local filters (6) are center-surround receptive fields (Atick and Redlich, 1992). For object ensembles, the process of construction (categorization) breaks a number of symmetries and shifts the higher-order statistics into second order, where they are conveniently exposed to robust estimation and, subsequently, to redundancy reduction. The resulting local receptive fields, some of which are shown in the top row of Fig. 3, turn out to be feature detectors that are optimally tuned to the structures that appear at their respective centers.

Although the local factorial code does not exhibit the problems discussed earlier, it has representational properties that are equivalent to those of the global factorial code. The reconstruction and error are identical, but now utilize the local synthesis filters K_N^{(-1)} (6):

φ_N^{rec}(x) = \sum_{r=1}^{N} a_r σ_r ψ_r(x) = \frac{1}{\sqrt{V}} \sum_y K_N^{(-1)}(x, y) O(y),   (7)

and the information (4) is expressed in terms of O(x), which therefore provides the local information density:

-\log P[φ_N^{rec}] \propto \sum_{r=1}^{N} |a_r|² = \frac{1}{V} \sum_x |O_N(x)|².   (8)

4 Greedy Sparsification of the Smooth Local Information Density

In the case of natural images, N = V, and the outputs of the local filters are completely decorrelated (Atick and Redlich, 1992). For natural objects, the code is low-dimensional (N < V), and residual correlations, some shown in the bottom row of Fig. 3, are unavoidable; they are generally given by the projector to the subband,

P_N(x, y) ≡ \frac{1}{T} \sum_t O_N^t(x) O_N^t(y) ≡ K_N^{(0)}(x, y),   (9)

and are as close to δ(x, y) as possible (Penev and Atick, 1996). The smoothness of the local information density is controlled by the width of the band, as shown in Fig. 1. Since O(x) is band limited, it can generally be reconstructed exactly from a subsampling over a limited set of grid points M ≡ {x_m}, from the |M| variables {O_m ≡ O(x_m)}, x_m ∈ M, as long as this density is critically sampled (|M| = N). When |M| < N, the maximum-likelihood interpolation in the context of the probability model (8) is given by

O^{rec}(x) = \sum_{m=1}^{|M|} O_m a_m(x) \quad \text{with} \quad a_m(x) = \sum_{n=1}^{|M|} Q^{-1}_{mn} P_n(x),   (10)

where P_m(x) ≡ P(x_m, x) and Q ≡ P|_M is the restriction of P on the set of reference points, with Q_{nm} = P_n(x_m) (Penev, 1998). When O(x) is critically sampled (|M| = N) on a regular grid, V → ∞, and the eigenmodes (1) are sines and cosines, then (10) is the familiar Nyquist interpolation formula.

In order to improve numerical stability, irregular subsampling has been proposed (Penev and Atick, 1996), by a data-driven greedy algorithm that successively enlarges the support of the subsampling at the n-th step, M^{(n)}, by optimizing for the residual entropy error, ‖O_N^{err}(x)‖² = ‖O(x) - O_N^{rec}(x)‖².

The LFA code is sparse. In a recurrent neural-network implementation (Penev, 1998), the dense output O(x) of the feed-forward receptive fields, K(x, y), has been interpreted as sub-threshold activation, which is predictively suppressed through lateral inhibition with weights P_m(x) by the set of active units at {x_m}.³

³This type of sparseness is not to be confused with "high kurtosis of the output distribution"; in LFA, the non-active units are completely shut down, rather than "only weakly activated."
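As a concrete illustration of (5)-(10), the following sketch builds the kernel family of (6), the output (5), the projector (9), and the greedy interpolation (10). It assumes the Psi, sigma, and phi arrays of the KLT sketch above, and it picks the largest residual at each step as a simple proxy for the residual-entropy criterion; neither choice is taken from the paper.

# Sketch of Local Feature Analysis, eqs. (5)-(10): the kernel family K^(n)
# of eq. (6), the topographic output O(x) of eq. (5), the projector
# P = K^(0) of eq. (9), and a greedy sparsification that grows the support
# M while re-fitting the interpolation of eq. (10).
import numpy as np

def lfa_kernel(Psi, sigma, N, n=1):
    PsiN = Psi[:N]                              # top N eigenmodes, (N, V)
    return PsiN.T @ (PsiN / sigma[:N, None] ** n)   # K_N^(n)(x, y), eq. (6)

def lfa_output(K, phi):
    V = phi.shape[0]
    return (K @ phi) / np.sqrt(V)               # O(x), eq. (5)

def greedy_support(O, P, n_points):
    M, O_rec = [], np.zeros_like(O)
    for _ in range(n_points):
        M.append(int(np.argmax(np.abs(O - O_rec))))     # largest residual
        Q = P[np.ix_(M, M)]                             # Q = P restricted to M
        O_rec = P[:, M] @ np.linalg.solve(Q, O[M])      # eq. (10)
    return np.array(M), O_rec

# Usage with the earlier KLT sketch:
#   K = lfa_kernel(Psi, sigma, N=400)           # analysis filters, n = 1
#   P = lfa_kernel(Psi, sigma, N=400, n=0)      # projector, eq. (9)
#   O = lfa_output(K, phi)
#   M, O_rec = greedy_support(O, P, n_points=64)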
5 Dimensionality Reduction Beyond the Nyquist Sampling Rate

The efficient allocation of resources by the greedy sparsification is evident in Fig. 4A-B: the most prominent features are picked up first (Fig. 4A), and only a handful of active units are used to describe each individual local feature (Fig. 4B). Moreover, when the dimensionality of the representation is constrained, as is evident from Fig. 4C-F, the sparse local code has a much better perceptual quality of reconstruction than the compact global one.

Figure 4: Efficiency of the sparse allocation of resources. (A): the locations of the first 25 active units, M^{(25)}, of the sparsification with N = 220, n = σ_{400} (see Fig. 3), of the example in Fig. 1 and in (C), are overlaid on φ(x) and numbered sequentially. (B): the locations of the active units in M^{(64)} are overlaid on O(x). For φ(x) in (C) (cf. Fig. 1), reconstructions with a fixed dimensionality, 64, of its deviation from the typical face (ψ_1 in Fig. 2) are shown in the top row of (D, E, F), and the respective errors in the bottom row. (D): reconstruction from the sparsification {O(x_m)}, x_m ∈ M (10), with M = M^{(64)} from (B). (E): reconstruction from the first 64 global coefficients (3), N = 64. (F): reconstruction from a subsampling of φ(x) on a regular 8 × 8 grid (64 samples). The errors in (D) and (E) are magnified 5×; in (F), 1×.

[Figure 5: panel A plots entropy against the number of reconstruction terms (0-500); panel B plots the residual-information ratio on a logarithmic scale against the ratio of sparse and global dimensionalities (0-0.8).]

Figure 5: The relationship between the dimensionalities of the global and the local factorial codes. The entropy of the KLT reconstruction (8) for the out-of-sample example (cf. Fig. 1) is plotted in (A) with a solid line as a function of the global dimensionality, N. The entropies of the LFA reconstructions (10) are shown with dashed lines, parametrically in the number of active units |M|, for N ∈ {600, 450, 300, 220, 110, 64, 32}, from top to bottom, respectively. The ratios of the residual, ‖O_N^{err}‖², and the total, ‖O‖² (8), information are plotted in (B) with dashed lines, parametrically in |M|/N, for the same values of N; a true exponential dependence is plotted with a solid line.

This is an interesting observation. Although the global code is optimal in the amount of captured energy, the greedy sparsification optimizes the amount of captured information, which has been shown to be the biologically relevant measure, at least in the retinal case (Atick and Redlich, 1992). In order to quantify the relationship between the local dimensionality of the representation and the amount of information it captures, rate-distortion curves are shown in Fig. 5. As expected from (4), each degree of freedom in the global code contributes approximately equally to the information content. On the other hand, the first few local terms in (10) pull off a sizeable fraction of the total information, with only a modest increase thereafter (Fig. 5A). In all regimes for N, the residual information decreases approximately exponentially with increasing dimensionality ratio |M|/N (Fig. 5B); 90% of the information is contained in a representation with local dimensionality 25%-30% of the respective global one, and 99% with 45%-50%. This exponential decrease has been shown to be incompatible with the expectation based on the Gaussian (4), or any other spherical, assumption (Penev, 1999).
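The rate-distortion behaviour of Fig. 5(B) can be traced with the same ingredients. The sketch below is illustrative only; it re-implements the greedy loop of the previous sketch so that the residual-information ratio can be recorded at every step.

# Tracing the curves of Fig. 5(B): grow the support M greedily and record
# the residual-information ratio ||O - O_rec||^2 / ||O||^2 at each |M|.
import numpy as np

def residual_info_ratio(O, P, max_points):
    ratios, M, O_rec = [], [], np.zeros_like(O)
    total = (O ** 2).sum()
    for _ in range(max_points):
        M.append(int(np.argmax(np.abs(O - O_rec))))
        Q = P[np.ix_(M, M)]
        O_rec = P[:, M] @ np.linalg.solve(Q, O[M])
        ratios.append(((O - O_rec) ** 2).sum() / total)
    return np.array(ratios)   # entry m-1 corresponds to |M| = m

# Plotting np.log(ratios) against np.arange(1, max_points + 1) / N should
# come out roughly linear, i.e. the exponential decrease reported above.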
Hence, the LFA representation, by learning the building blocks of natural objects (the local features), reduces not only redundancy but also dimensionality. Because LFA captures aspects of the sparse, non-Gaussian structure of natural-object ensembles, it preserves practically all of the information while allocating resources substantially below the Nyquist sampling rate.

6 Discussion

Here we have shown that, for ensembles of natural objects with low-dimensional global factorial representations, sparsification of the local information density allows undersampling, which results in a substantial additional dimensionality reduction. Although more general ensembles, such as those of natural scenes and natural sound, have full-dimensional global representations, the sensory processing of both visual and auditory signals happens in a multi-scale, bandpass fashion. Preliminary results (Penev and Iordanov, 1999) suggest that sparsification within the subbands is possible beyond the respective Nyquist rate; hence, when the sparse dimensionalities of the subbands are added together, the result is aggregate dimensionality reduction already at the initial stages of sensory processing.

Acknowledgments

The major part of this research was made possible by the William O. Baker Fellowship, so generously extended to, and gratefully accepted by, the author. He is also indebted to M. J. Feigenbaum for his hospitality and support; to MJF and A. J. Libchaber, for the encouraging and enlightening discussions, scientific and otherwise; to R. M. Shapley, for asking the questions that led to Fig. 5; and to J. J. Atick, B. W. Knight, A. G. Dimitrov, L. Sirovich, J. D. Victor, E. Kaplan, L. G. Iordanov, E. P. Simoncelli, G. N. Reeke, J. E. Cohen, B. Klejn, A. Oppenheim, and A. P. Blicher for fruitful discussions.

References

Atick, J. J. and A. N. Redlich (1992). What does the retina know about natural scenes? Neural Comput. 4(2), 196-210.
Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. In W. Rosenblith (Ed.), Sensory Communication, pp. 217-234. Cambridge, MA: M.I.T. Press.
Barlow, H. B., T. P. Kaushal, and G. J. Mitchison (1989). Finding minimum entropy codes. Neural Computation 1(3), 412-423.
Jaynes, E. T. (1982). On the rationale of maximum-entropy methods. Proc. IEEE 70, 939-952.
Lee, D. D. and H. S. Seung (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788-791.
Linsker, R. (1988). Self-organization in a perceptual network. Computer 21, 105-117.
Mel, B. W. (1999). Think positive to find parts. Nature 401(6755), 759-760.
Moghaddam, B. and A. Pentland (1997). Probabilistic visual learning for object representation. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7), 669-710.
Penev, P. S. (1998). Local Feature Analysis: A Statistical Theory for Information Representation and Transmission. Ph.D. thesis, The Rockefeller University, New York, NY. Available at http://venezia.rockefeller.edu/penev/thesis/.
Penev, P. S. (1999). Dimensionality reduction by sparsification in a local-features representation of human faces. Technical report, The Rockefeller University. ftp://venezia.rockefeller.edu/pubs/PenevPS-NIPS99-reduce.ps.
Penev, P. S. and J. J. Atick (1996). Local Feature Analysis: A general statistical theory for object representation. Network: Comput. Neural Syst. 7(3), 477-500.
Penev, P. S. and L. G. Iordanov (1999). Local Feature Analysis: A flexible statistical framework for dimensionality reduction by sparsification of naturalistic sound. Technical report, The Rockefeller University. ftp://venezia.rockefeller.edu/pubs/PenevPS-ICASSP2000-sparse.ps.
Penev, P. S. and L. Sirovich (2000). The global dimensionality of face space. In Proc. 4th Int'l Conf. Automatic Face and Gesture Recognition, Grenoble, France, pp. 264-270. IEEE CS.
Poggio, T. and F. Girosi (1998). A sparse representation for function approximation. Neural Comput. 10(6), 1445-1454.
Ruderman, D. L. and W. Bialek (1994). Statistics of natural images: Scaling in the woods. Phys. Rev. Lett. 73(6), 814-817.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Tech. J. 27, 379-423, 623-656.
Sirovich, L. and M. Kirby (1987). Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A 4, 519-524.
Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks

Richard H. R. Hahnloser and H. Sebastian Seung
Dept. of Brain & Cog. Sci., MIT
Cambridge, MA 02139 USA
rh@ai.mit.edu, seung@mit.edu

Abstract

Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks.

A Lyapunov function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]).

We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady-state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are "forbidden" by the recurrent synaptic connections from being coactivated at a stable steady state, no matter what input is applied. Other sets are "permitted," in the sense that they can be coactivated for some input. The same conditions on the synaptic weight matrix also lead to conditional multistability, meaning that there exists an input for which there is more than one stable steady state. In other words, forbidden sets and conditional multistability are inseparable concepts.
The existence of permitted and forbidden sets suggests a new way of thinking about memory in neural networks. When an input is applied, the network must select a set of active neurons, and this selection is constrained to be one of the permitted sets. Therefore the permitted sets can be regarded as memories stored in the synaptic connections. Our third theorem states that there are constraints on the groups of permitted and forbidden sets that can be stored by a network. No matter which learning algorithm is used to store memories, active neurons cannot arbitrarily be divided into permitted and forbidden sets, because subsets of permitted sets have to be permitted and supersets of forbidden sets have to be forbidden.

1 Basic definitions

Our theory is applicable to the network dynamics

  dx_i/dt + x_i = [b_i + Σ_j W_ij x_j]^+,   (1)

where [u]^+ = max{u, 0} is a rectification nonlinearity and the synaptic weight matrix is symmetric, W_ij = W_ji. The dynamics can also be written in a more compact matrix-vector form as ẋ + x = [b + Wx]^+. The state of the network is x. An input to the network is an arbitrary vector b. An output of the network is a steady state x̄ in response to b. The existence of outputs and their relationship to the input are determined by the synaptic weight matrix W.

A vector v is said to be nonnegative, v ≥ 0, if all of its components are nonnegative. The nonnegative orthant {v : v ≥ 0} is the set of all nonnegative vectors. It can be shown that any trajectory starting in the nonnegative orthant remains in the nonnegative orthant. Therefore, for simplicity we will consider initial conditions that are confined to the nonnegative orthant, x ≥ 0.

2 Global asymptotic stability

Definition 1 A steady state x̄ is stable if for all initial conditions sufficiently close to x̄, the state trajectory remains close to x̄ for all later times. A steady state is asymptotically stable if for all initial conditions sufficiently close to x̄, the state trajectory converges to x̄. A set of steady states is globally asymptotically stable if state trajectories converge to one of the steady states from almost all initial conditions; exceptions are of measure zero.

Definition 2 A principal submatrix A of a square matrix B is a square matrix that is constructed by deleting a certain set of rows and the corresponding columns of B.

The following theorem establishes necessary and sufficient conditions on W for global asymptotic stability.

Theorem 1 If W is symmetric, then the following conditions are equivalent:
1. All nonnegative eigenvectors of all principal submatrices of I − W have positive eigenvalues.
2. The matrix I − W is copositive. That is, x^T (I − W) x > 0 for all nonnegative x, except x = 0.
3. For all b, the network has a nonempty set of steady states that are globally asymptotically stable.

Proof sketch:
(1) ⇒ (2). Let v* be the minimum of v^T (I − W) v over nonnegative v on the unit sphere. If (2) is false, the minimum value is less than or equal to zero. It follows from Lagrange multiplier methods that the nonzero elements of v* comprise a nonnegative eigenvector of the corresponding principal submatrix of W with eigenvalue greater than or equal to unity.
(2) ⇒ (3). By the copositivity of I − W, the function L = ½ x^T (I − W) x − b^T x is lower bounded and radially unbounded. It is also nonincreasing under the network dynamics in the nonnegative orthant, and constant only at steady states.
By the Lyapunov stability theorem, the stable steady states are globally asymptotically stable. In the language of optimization theory, the network dynamics converges to a local minimum of L subject to the nonnegativity constraint x ≥ 0.
(3) ⇒ (1). Suppose that (1) is false. Then there exists a nonnegative eigenvector of a principal submatrix of W with eigenvalue greater than or equal to unity. This can be used to construct an unbounded trajectory of the dynamics.

The meaning of these stability conditions is best appreciated by comparing with the analogous conditions for the purely linear network obtained by dropping the rectification from (1). In a linear network, all eigenvalues of W would have to be smaller than unity to ensure asymptotic stability. Here only nonnegative eigenvectors are able to grow without bound, due to the rectification, so only their eigenvalues must be less than unity. All principal submatrices of W must be considered, because different sets of feedback connections are active, depending on the set of neurons that are above threshold. In a linear network, I − W would have to be positive definite to ensure asymptotic stability, but because of the rectification, here this condition is replaced by the weaker condition of copositivity.

The conditions of Theorem 1 for global asymptotic stability depend only on W, not on b. On the other hand, steady states do depend on b. The next lemma says that the mapping from input to output is surjective.

Lemma 1 For any nonnegative vector v ≥ 0 there exists an input b such that v is a steady state of equation (1) with input b.

Proof: Define c = v − ΣWΣv, where Σ = diag(σ_1, ..., σ_N) and σ_i = 1 if v_i > 0 and σ_i = 0 if v_i = 0. Choose b_i = c_i for v_i > 0 and b_i = −1 − (ΣWΣv)_i for v_i = 0.

This lemma states that any nonnegative vector can be realized as a fixed point. Sometimes this fixed point is stable, such as in networks subject to Theorem 1 in which only a single neuron is active. Indeed, the principal submatrix of I − W corresponding to a single active neuron is a diagonal element, which according to condition (1) must be positive. Hence it is always possible to activate only a single neuron at an asymptotically stable fixed point. However, as will become clear from the following theorem, not all nonnegative vectors can be realized as asymptotically stable fixed points.

3 Forbidden and permitted sets

The following characterizations of stable steady states are based on the interlacing theorem [4]. This theorem says that if A is an (n−1) by (n−1) principal submatrix of an n by n symmetric matrix B, then the eigenvalues of A fall in between the eigenvalues of B. In particular, the largest eigenvalue of A is always smaller than the largest eigenvalue of B.

Definition 3 A set of neurons is permitted if the neurons can be coactivated at an asymptotically stable steady state for some input b. On the other hand, a set of neurons is forbidden if they cannot be coactivated at an asymptotically stable steady state no matter what the input b.

Alternatively, we might have defined a permitted set as a set for which the corresponding square submatrix of I − W has only positive eigenvalues, and a forbidden set as one for which there is at least one non-positive eigenvalue. It follows from Theorem 1 that if the matrix I − W is copositive, then the eigenvectors corresponding to non-positive eigenvalues of forbidden sets must have both positive and non-positive components.
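As an illustration of this eigenvalue characterization, the following sketch (Python with NumPy; the weight matrix is an arbitrary assumption for the example, not taken from any network in this paper) classifies a candidate set of coactive neurons by checking the spectrum of the corresponding principal submatrix of I − W:

import numpy as np

def classify_set(W, support):
    # A set is permitted iff the principal submatrix of I - W restricted
    # to the set has only positive eigenvalues; otherwise it is forbidden.
    idx = sorted(support)
    A = (np.eye(W.shape[0]) - W)[np.ix_(idx, idx)]
    return "permitted" if np.linalg.eigvalsh(A).min() > 0 else "forbidden"

# Arbitrary symmetric example: strong mutual excitation between two neurons.
W = np.array([[0.0, 1.2],
              [1.2, 0.0]])
print(classify_set(W, [0]))     # permitted: 1 - W_00 = 1 > 0
print(classify_set(W, [0, 1]))  # forbidden: I - W has eigenvalue -0.2

Note that single neurons are always permitted whenever the diagonal elements of I − W are positive, in line with the remark after Lemma 1.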
Theorem 2 If the matrix I − W is copositive, then the following statements are equivalent:
1. The matrix I − W is not positive definite.
2. There exists a forbidden set.
3. The network is conditionally multistable. That is, there exists an input b such that there is more than one stable steady state.

Proof sketch:
(1) ⇒ (2). I − W is not positive definite, and so there can be no asymptotically stable steady state in which all neurons are active; e.g., the set of all neurons is forbidden.
(2) ⇒ (3). Denote the forbidden set with k active neurons by Σ. Without loss of generality, assume that the principal submatrix of I − W corresponding to Σ has k − 1 positive eigenvalues and only one non-positive eigenvalue (by virtue of the interlacing theorem and the fact that the diagonal elements of I − W must be positive, there is always a subset of Σ for which this is true). By choosing b_i > 0 for neurons i belonging to Σ and b_j ≪ 0 for neurons j not belonging to Σ, the quadratic Lyapunov function L defined in Theorem 1 forms a saddle in the nonnegative quadrant defined by Σ. The saddle point is the point where L, restricted to the hyperplane defined by the k − 1 positive eigenvalues, reaches its minimum. But because neurons can be initialized to lower values of L on either side of the hyperplane, and because L is non-increasing along trajectories, there is no way trajectories can cross the hyperplane. In conclusion, we have constructed an input b for which the network is multistable.
(3) ⇒ (1). Suppose that (1) is false. Then for all b the Lyapunov function L is convex and so has only a single local minimum in the convex domain x ≥ 0. This local minimum is also the global minimum. The dynamics must converge to this minimum.

If I − W is positive definite, then a symmetric threshold-linear network has a unique steady state. This has been shown previously [5]. The next theorem is an expansion of this result, stating an equivalent condition using the concept of permitted sets.

Theorem 3 If W is symmetric, then the following conditions are equivalent:
1. The matrix I − W is positive definite.
2. All sets are permitted.
3. For all b there is a unique steady state, and it is stable.

Proof:
(1) ⇒ (2). If I − W is positive definite, then it is copositive. Hence condition (1) of Theorem 2 is false, and so condition (2) of Theorem 2 is false; i.e., all sets are permitted.
(2) ⇒ (1). Suppose (1) is false. Then the set of all neurons must be forbidden, so not all sets are permitted.
(1) ⇔ (3). See [5].

The following theorem characterizes the forbidden and the permitted sets.

Theorem 4 Any subset of a permitted set is permitted. Any superset of a forbidden set is forbidden.

Proof: According to the interlacing theorem, if the smallest eigenvalue of a symmetric matrix is positive, then so are the smallest eigenvalues of all its principal submatrices. And if the smallest eigenvalue of a principal submatrix is negative, then so is the smallest eigenvalue of the original matrix.

4 An example: the ring network

A symmetric threshold-linear network with local excitation and longer-range inhibition has been studied in the past as a model for how simple cells in primary visual cortex obtain their orientation tuning to visual stimulation [6, 7]. Inspired by these results, we have recently built an electronic circuit containing a ring network, using analog VLSI technology [3].
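Such networks are also easy to explore in simulation; a minimal sketch of forward-Euler integration of the dynamics (1) follows (Python with NumPy; the step size, duration, weights, and input are illustrative assumptions, not values from the circuit):

import numpy as np

def simulate(W, b, x0, dt=0.01, steps=5000):
    # Forward-Euler integration of dx/dt = -x + [b + W x]^+ .
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, b + W @ x))
    return x  # approximates a steady state when the dynamics converge

# Two neurons with mutual inhibition; here I - W is positive definite,
# so Theorem 3 guarantees a unique stable steady state for any input b.
W = np.array([[0.2, -0.5],
              [-0.5, 0.2]])
b = np.array([1.0, 0.5])
print(simulate(W, b, x0=np.zeros(2)))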
We have argued that the fixed tuning width of the neurons in the network arises because active sets consisting of more than a fixed number of contiguous neurons are forbidden. Here we give a more detailed account of this fact and provide a surprising result about the existence of some spurious permitted sets.

Let the synaptic matrix of a 10-neuron ring network be translationally invariant. The connection between neurons i and j is given by

  W_ij = −β + α_0 δ_ij + α_1 (δ_{i,j+1} + δ_{i+1,j}) + α_2 (δ_{i,j+2} + δ_{i+2,j}),

where β quantifies global inhibition, α_0 self-excitation, α_1 first-neighbor lateral excitation, and α_2 second-neighbor lateral excitation. In Figure 1 we have numerically computed the permitted sets of this network, with the parameters taken from [3]: α_0 = 0, α_1 = 1.1, α_2 = 1, β = 0.55. The permitted sets were determined by diagonalizing the 2^10 square submatrices of I − W and by classifying the eigenvalues corresponding to nonnegative eigenvectors. Figure 1 shows the resulting parent permitted sets (those that have no permitted supersets). Consistent with the finding that such ring networks can explain contrast-invariant tuning of V1 cells and multiplicative response modulation of parietal cells, we found that there are no permitted sets that consist of more than 5 contiguous active neurons. However, as can be seen, there are many non-contiguous permitted sets that could in principle be activated by exciting the neurons shown in white and strongly inhibiting those shown in black. Because the activation of the spurious permitted sets requires highly specific input (inhibition of high spatial frequency), it can be argued that the presence of the spurious permitted sets is not relevant for the normal operation of the ring network, where inputs are typically tuned and excitatory (such as inputs from the LGN to primary visual cortex).

Figure 1: Left: Output of a ring network of 10 neurons to uniform input (random initial condition). Right: The 9 parent permitted sets (x-axis: neuron number, y-axis: set number). White means that a neuron belongs to a set and black means that it does not. Left-right and translation-symmetric parent permitted sets of the ones shown have been excluded. The first parent permitted set (first row from the bottom) corresponds to the output on the left.

5 Discussion

We have shown that pattern memorization in threshold-linear networks can be viewed in terms of permitted sets of neurons, i.e., sets of neurons that can be coactive at a steady state. According to this definition, the memories are stored by the synaptic weights, independently of the inputs. Hence this concept of memory does not suffer from input-dependence, as would be the case for a definition of memory based on the fixed points of the dynamics. Pattern retrieval is strongly constrained by the input: a typical input will not allow for the retrieval of arbitrary stored permitted sets. This comes from the fact that multistability depends not just on the existence of forbidden sets, but also on the input (Theorem 2). For example, in the ring network, positive input will always retrieve permitted sets consisting of a group of contiguous neurons, but not any of the spurious permitted sets (Figure 1). Generally, multistability in the ring network is only possible when more than a single neuron is excited.
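As an aside, the parent permitted sets of Figure 1 can be reproduced by brute-force enumeration; a minimal sketch follows (Python with NumPy; it exhaustively tests the 2^10 subsets and, unlike Figure 1, does not discard translation- or reflection-symmetric duplicates):

import numpy as np
from itertools import combinations

N = 10
beta, a0, a1, a2 = 0.55, 0.0, 1.1, 1.0
W = np.full((N, N), -beta)          # global inhibition
for i in range(N):
    W[i, i] += a0                   # self-excitation
    for d, a in ((1, a1), (2, a2)): # first- and second-neighbor excitation
        W[i, (i + d) % N] += a
        W[i, (i - d) % N] += a

def permitted(subset):
    idx = list(subset)
    A = (np.eye(N) - W)[np.ix_(idx, idx)]
    return np.linalg.eigvalsh(A).min() > 0

perm = [set(s) for k in range(1, N + 1)
        for s in combinations(range(N), k) if permitted(s)]
# Parent permitted sets: permitted sets with no permitted strict superset.
parents = [s for s in perm if not any(s < t for t in perm)]
print(len(parents), "parent permitted sets (including symmetric copies)")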
Notice that threshold-linear networks can behave as traditional attractor networks when the inputs are represented as initial conditions of the dynamics. For example, by fixing b = 1 and initializing a copositive network with some input, the permitted sets unequivocally determine the stable fixed points. Thus, in this case, the notion of permitted sets is no different from fixed-point attractors. However, the hierarchical grouping of permitted sets (Theorem 4) becomes irrelevant, since there can be only one attractive fixed point per hierarchical group defined by a parent permitted set.

The fact that no permitted set can have a forbidden subset represents a constraint on the possible computations of symmetric networks. However, this constraint does not have to be viewed as an undesirable limitation. On the contrary, being aware of this constraint may lead to a deeper understanding of learning algorithms and representations for constraint-satisfaction problems. We are reminded of the history of perceptrons, where the insight that they can only solve linearly separable classification problems led to the invention of multilayer perceptrons and backpropagation. In a similar way, grouping problems that do not obey the natural hierarchy inherent in symmetric networks might necessitate the introduction of hidden neurons to realize the right geometry. For the interested reader, see also [8] for a simple procedure for storing a given family of possibly overlapping patterns as permitted sets.

References

[1] J.J. Hopfield. Neurons with graded response have collective properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088-3092, 1984.
[2] M.A. Cohen and S. Grossberg. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man and Cybernetics, 13:288-307, 1983.
[3] Richard H.R. Hahnloser, Rahul Sarpeshkar, Misha Mahowald, Rodney J. Douglas, and Sebastian Seung. Digital selection and analog amplification coexist in a silicon circuit inspired by cortex. Nature, 405:947-951, 2000.
[4] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[5] J. Feng and K.P. Hadeler. Qualitative behaviour of some simple networks. J. Phys. A, 29:5019-5033, 1996.
[6] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA, 92:3844-3848, 1995.
[7] R.J. Douglas, C. Koch, M.A. Mahowald, K.A.C. Martin, and H. Suarez. Recurrent excitation in neocortical circuits. Science, 269:981-985, 1995.
[8] Xie Xiaohui, Richard H.R. Hahnloser, and Sebastian Seung. Learning winner-take-all competition between groups of neurons in lateral inhibitory networks. In Proceedings of NIPS 2001 (Neural Information Processing Systems: Natural and Synthetic), 2001.
Minimum Bayes Error Feature Selection for Continuous Speech Recognition

George Saon and Mukund Padmanabhan
IBM T. J. Watson Research Center, Yorktown Heights, NY 10598
E-mail: {saon,mukund}@watson.ibm.com. Phone: (914)-945-2985

Abstract

We consider the problem of designing a linear transformation θ ∈ ℝ^{p×n}, of rank p ≤ n, which projects the features of a classifier x ∈ ℝ^n onto y = θx ∈ ℝ^p such as to achieve minimum Bayes error (or probability of misclassification). Two avenues will be explored: the first is to maximize the θ-average divergence between the class densities and the second is to minimize the union Bhattacharyya bound in the range of θ. While both approaches yield similar performance in practice, they outperform standard LDA features and show a 10% relative improvement in the word error rate over state-of-the-art cepstral features on a large-vocabulary telephony speech recognition task.

1 Introduction

Modern speech recognition systems use cepstral features characterizing the short-term spectrum of the speech signal for classifying frames into phonetic classes. These features are augmented with dynamic information from the adjacent frames to capture transient spectral events in the signal. What is commonly referred to as MFCC+Δ+ΔΔ features consist in "static" mel-frequency cepstral coefficients (usually 13) plus their first- and second-order derivatives computed over a sliding window of typically 9 consecutive frames, yielding 39-dimensional feature vectors every 10 ms. One major drawback of this front-end scheme is that the same computation is performed regardless of the application, channel conditions, speaker variability, etc.

In recent years, an alternative feature extraction procedure based on discriminant techniques has emerged: the consecutive cepstral frames are spliced together forming a supervector which is then projected down to a manageable dimension. One of the most popular objective functions for designing the feature space projection is linear discriminant analysis. LDA [2, 3] is a standard technique in statistical pattern classification for dimensionality reduction with a minimal loss in discrimination. Its application to speech recognition has shown consistent gains for small-vocabulary tasks and mixed results for large-vocabulary applications [4, 6]. Recently, there has been an interest in extending LDA to heteroscedastic discriminant analysis (HDA) by incorporating the individual class covariances in the objective function [6, 8]. Indeed, the equal class covariance assumption made by LDA does not always hold true in practice, making the LDA solution highly suboptimal for specific cases [8]. However, since both LDA and HDA are heuristics, they do not guarantee an optimal projection in the sense of a minimum Bayes classification error.

The aim of this paper is to study feature space projections according to objective functions which are more intimately linked to the probability of misclassification. More specifically, we will define the probability of misclassification in the original space, E, and in the projected space, E_θ, and give conditions under which E_θ = E. Since discrimination information is usually lost after a projection y = θx, the Bayes error in the projected space can only increase, that is, E_θ ≥ E; therefore minimizing E_θ amounts to finding θ for which the equality case holds. An alternative approach is to define an upper bound on E_θ and to directly minimize this bound.
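For concreteness, the MFCC+Δ+ΔΔ front end described above can be sketched schematically (Python with NumPy; real front ends fit the derivatives by linear regression over the 9-frame window, so the simple differencing here is an illustrative assumption):

import numpy as np

def add_deltas(cepstra):
    # cepstra: (num_frames, 13) static mel-frequency cepstral coefficients.
    # Append first- and second-order time derivatives to get 39 dimensions.
    d1 = np.gradient(cepstra, axis=0)   # crude per-frame first derivative
    d2 = np.gradient(d1, axis=0)        # second derivative
    return np.hstack([cepstra, d1, d2])

frames = np.random.randn(100, 13)       # placeholder static features
print(add_deltas(frames).shape)         # (100, 39)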
The paper is organized as follows: in Section 2 we recall the definition of the Bayes error rate and its link to the divergence and the Bhattacharyya bound, Section 3 deals with the experiments and results, and Section 4 provides a final discussion.

2 Bayes error, divergence and Bhattacharyya bound

2.1 Bayes error

Consider the general problem of classifying an n-dimensional vector x into one of C distinct classes. Let each class i be characterized by its own prior λ_i and probability density function p_i, i = 1, ..., C. Suppose x is classified as belonging to class j through the Bayes assignment j = argmax_{1≤i≤C} λ_i p_i(x). The expected error rate for this classifier is called the Bayes error [3] or probability of misclassification and is defined as

  E = 1 − ∫_{ℝ^n} max_{1≤i≤C} λ_i p_i(x) dx.   (1)

Suppose next that we wish to perform the linear transformation f : ℝ^n → ℝ^p, y = f(x) = θx, with θ a p × n matrix of rank p ≤ n. Moreover, let us denote by p_i^θ the transformed density for class i. The Bayes error in the range of θ now becomes

  E_θ = 1 − ∫_{ℝ^p} max_{1≤i≤C} λ_i p_i^θ(y) dy.   (2)

Since the transformation y = θx produces a vector whose coefficients are linear combinations of the input vector x, it can be shown [1] that, in general, information is lost and E_θ ≥ E. For a fixed p, the feature selection problem can be stated as finding θ̂ such that

  θ̂ = argmin_{θ ∈ ℝ^{p×n}, rank(θ)=p} E_θ.   (3)

We will take however an indirect approach to (3): by maximizing the average pairwise divergence and relating it to E_θ (subsection 2.2) and by minimizing the union Bhattacharyya bound on E_θ (subsection 2.3).

2.2 Interclass divergence

Since Kullback [5], the symmetric divergence between class i and j is given by

  D(i, j) = ∫_{ℝ^n} [ p_i(x) log(p_i(x)/p_j(x)) + p_j(x) log(p_j(x)/p_i(x)) ] dx.   (4)

D(i, j) represents a measure of the degree of difficulty of discriminating between the classes (the larger the divergence, the greater the separability between the classes). Similarly, one can define D_θ(i, j), the pairwise divergence in the range of θ. Kullback [5] showed that D_θ(i, j) ≤ D(i, j). If the equality case holds, θ is called a sufficient statistic for discrimination. The average pairwise divergence is defined as D = 2/(C(C−1)) Σ_{1≤i<j≤C} D(i, j) and, respectively, D_θ = 2/(C(C−1)) Σ_{1≤i<j≤C} D_θ(i, j). It follows that D_θ ≤ D. The next theorem, due to Decell [1], provides a link between Bayes error and divergence for classes with uniform priors λ_1 = ... = λ_C (= 1/C).

Theorem [Decell '72] If D_θ = D then E_θ = E.

The main idea of the proof is to show that if the divergences are the same then the Bayes assignment is preserved, because the likelihood ratios are preserved almost everywhere: p_i(x)/p_j(x) = p_i^θ(θx)/p_j^θ(θx), i ≠ j. The result follows by noting that for any measurable set A ⊂ ℝ^p

  ∫_A p_i^θ(y) dy = ∫_{θ^{-1}(A)} p_i(x) dx,   (5)

where θ^{-1}(A) = {x ∈ ℝ^n : θx ∈ A}. The previous theorem provides a basis for selecting θ such as to maximize D_θ. Let us make next the assumption that each class i is normally distributed with mean μ_i and covariance Σ_i, that is, p_i(x) = N(x; μ_i, Σ_i) and p_i^θ(y) = N(y; θμ_i, θΣ_iθ^T), i = 1, ..., C. It is straightforward to show that in this case the divergence is given by

  D(i, j) = ½ trace{ Σ_i^{-1} [Σ_j + (μ_i − μ_j)(μ_i − μ_j)^T] + Σ_j^{-1} [Σ_i + (μ_i − μ_j)(μ_i − μ_j)^T] } − n.   (6)

Thus, the objective function to be maximized becomes

  D_θ = 1/(C(C−1)) Σ_i trace{ (θΣ_iθ^T)^{-1} θS_iθ^T } − p,  where S_i = Σ_{j≠i} [Σ_j + (μ_i − μ_j)(μ_i − μ_j)^T], i = 1, ..., C.   (7)
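Before turning to the gradient, note that the objective (7) is straightforward to evaluate numerically; a sketch follows (Python with NumPy; the means, covariances, and θ are random placeholders rather than quantities estimated from speech data):

import numpy as np

def divergence_objective(theta, mus, sigmas):
    # Average pairwise divergence in the range of theta, Eq. (7).
    # theta: p x n projection; mus, sigmas: per-class means and covariances.
    C, p = len(mus), theta.shape[0]
    total = 0.0
    for i in range(C):
        S_i = sum(sigmas[j] + np.outer(mus[i] - mus[j], mus[i] - mus[j])
                  for j in range(C) if j != i)
        proj_cov = theta @ sigmas[i] @ theta.T        # theta Sigma_i theta^T
        total += np.trace(np.linalg.solve(proj_cov, theta @ S_i @ theta.T))
    return total / (C * (C - 1)) - p

rng = np.random.default_rng(0)
n, p, C = 6, 2, 3                       # toy sizes, chosen arbitrarily
theta = rng.standard_normal((p, n))
mus = [rng.standard_normal(n) for _ in range(C)]
sigmas = [np.eye(n) for _ in range(C)]
print(divergence_objective(theta, mus, sigmas))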
Following matrix differentiation results from [9], the gradient of D_θ with respect to θ has the expression

  ∂D_θ/∂θ = 2/(C(C−1)) Σ_i (θΣ_iθ^T)^{-1} [ θS_i − θS_iθ^T (θΣ_iθ^T)^{-1} θΣ_i ].   (8)

Unfortunately, it turns out that ∂D_θ/∂θ = 0 has no analytical solutions for the stationary points. Instead, one has to use numerical optimization routines for the maximization of D_θ.

2.3 Bhattacharyya bound

An alternative way of minimizing the Bayes error is to minimize an upper bound on this quantity. We will first prove the following statement:

  E ≤ Σ_{1≤i<j≤C} √(λ_i λ_j) ∫_{ℝ^n} √(p_i(x) p_j(x)) dx.   (9)

Indeed, from (1), the Bayes error can be rewritten as

  E = ∫_{ℝ^n} [ Σ_{i=1}^{C} λ_i p_i(x) − max_{1≤i≤C} λ_i p_i(x) ] dx,   (10)

and for every x there exists a permutation of the indices u_x : {1, ..., C} → {1, ..., C} such that the terms λ_1 p_1(x), ..., λ_C p_C(x) are sorted in increasing order, i.e., λ_{u_x(1)} p_{u_x(1)}(x) ≤ ... ≤ λ_{u_x(C)} p_{u_x(C)}(x). Moreover, for 1 ≤ k ≤ C − 1,

  λ_{u_x(k)} p_{u_x(k)}(x) ≤ √( λ_{u_x(k)} p_{u_x(k)}(x) · λ_{u_x(k+1)} p_{u_x(k+1)}(x) ),   (11)

from which it follows that

  Σ_{j≠i} λ_j p_j(x) = Σ_{k=1}^{C−1} λ_{u_x(k)} p_{u_x(k)}(x) ≤ Σ_{k=1}^{C−1} √( λ_{u_x(k)} p_{u_x(k)}(x) · λ_{u_x(k+1)} p_{u_x(k+1)}(x) ) ≤ Σ_{1≤i<j≤C} √( λ_i p_i(x) λ_j p_j(x) ),   (12)

which, when integrated over ℝ^n, leads to (9). As previously, if we assume that the p_i's are normal distributions with means μ_i and covariances Σ_i, the bound given by the right-hand side of (9) has the closed-form expression

  Σ_{1≤i<j≤C} √(λ_i λ_j) e^{−ρ(i,j)},   (13)

where

  ρ(i, j) = ⅛ (μ_i − μ_j)^T [ (Σ_i + Σ_j)/2 ]^{-1} (μ_i − μ_j) + ½ log( |(Σ_i + Σ_j)/2| / √(|Σ_i| |Σ_j|) )   (14)

is called the Bhattacharyya distance between the normal distributions p_i and p_j [3]. Similarly, one can define ρ_θ(i, j), the Bhattacharyya distance between the projected densities p_i^θ and p_j^θ. Combining (9) and (13), one obtains the following inequality involving the Bayes error rate in the projected space:

  E_θ ≤ Σ_{1≤i<j≤C} √(λ_i λ_j) e^{−ρ_θ(i,j)} (= B_θ).   (15)

It is necessary at this point to introduce the following simplifying notations: B_ij = ¼ (μ_i − μ_j)(μ_i − μ_j)^T and W_ij = ½ (Σ_i + Σ_j), 1 ≤ i < j ≤ C. From (14), it follows that

  ρ_θ(i, j) = ½ trace{ (θW_ijθ^T)^{-1} θB_ijθ^T } + ½ log( |θW_ijθ^T| / √(|θΣ_iθ^T| |θΣ_jθ^T|) ),   (16)

and the gradient of B_θ with respect to θ is

  ∂B_θ/∂θ = −Σ_{1≤i<j≤C} √(λ_i λ_j) e^{−ρ_θ(i,j)} ∂ρ_θ(i,j)/∂θ,   (17)

with, again by making use of differentiation results from [9],

  ∂ρ_θ(i,j)/∂θ = ½ (θW_ijθ^T)^{-1} [ θB_ijθ^T (θW_ijθ^T)^{-1} θW_ij − θB_ij ] + (θW_ijθ^T)^{-1} θW_ij − ½ [ (θΣ_iθ^T)^{-1} θΣ_i + (θΣ_jθ^T)^{-1} θΣ_j ].   (18)

3 Experiments and results

The speech recognition experiments were conducted on a voicemail transcription task [7]. The baseline system has 2.3K context-dependent HMM states and 134K diagonal-covariance Gaussian mixture components, and was trained on approximately 70 hours of data. The test set consists of 86 messages (approximately 7000 words). The baseline system uses 39-dimensional frames (13 cepstral coefficients plus deltas and double deltas computed from 9 consecutive frames). For the divergence and Bhattacharyya projections, every 9 consecutive 24-dimensional cepstral vectors were spliced together forming 216-dimensional feature vectors, which were then clustered to estimate one full-covariance Gaussian density for each state. Subsequently, a 39×216 transformation θ was computed using the objective functions for the divergence (7) and the Bhattacharyya bound (15), which projected the models and feature space down to 39 dimensions. As mentioned in [4], it is not clear what the most appropriate class definition for the projections should be. The best results were obtained by considering each individual HMM state as a separate class, with the priors of the Gaussians summing up to one across states. Both optimizations were initialized with the LDA matrix and carried out using a conjugate gradient descent routine with user-supplied analytic gradient from the NAG(1) Fortran library.
The routine performs an iterative update of the inverse of the Hessian of the objective function by accumulating curvature information during the optimization. Figure 1 shows the evolution of the objective functions for the divergence and the Bhattacharyya bound.

Figure 1: Evolution of the objective functions during optimization (top: interclass divergence, "dlvg.dat"; bottom: Bhattacharyya bound, "bhatta.dat"; x-axis: iteration number).

(1) Numerical Algebra Group.

The parameters of the baseline system (with 134K Gaussians) were then re-estimated in the transformed spaces using the EM algorithm. Table 1 summarizes the improvements in the word error rates for the different systems.

  System                  Word error rate
  Baseline (MFCC+Δ+ΔΔ)    39.61%
  LDA                     37.39%
  Interclass divergence   36.32%
  Bhattacharyya bound     35.73%

Table 1: Word error rates for the different systems.

4 Summary

Two methods for performing discriminant feature space projections have been presented. Unlike LDA, they both aim to minimize the probability of misclassification in the projected space, either by maximizing the interclass divergence and relating it to the Bayes error or by directly minimizing an upper bound on the classification error. Both methods lead to smooth objective functions which take projection matrices as arguments and which can be numerically optimized. Experimental results on large-vocabulary continuous speech recognition over the telephone show the superiority of the resulting features over their LDA or cepstral counterparts.

References

[1] H.P. Decell and J.A. Quirein. An iterative approach to the feature selection problem. Proc. Purdue Univ. Conf. on Machine Processing of Remotely Sensed Data, 3B13-3B12, 1972.
[2] R.O. Duda and P.B. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[3] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New York, 1990.
[4] R. Haeb-Umbach and H. Ney. Linear discriminant analysis for improved large vocabulary continuous speech recognition. Proceedings of ICASSP'92, volume 1, pages 13-16, 1992.
[5] S. Kullback. Information Theory and Statistics. Wiley, New York, 1968.
[6] N. Kumar and A.G. Andreou. Heteroscedastic discriminant analysis and reduced rank HMMs for improved speech recognition. Speech Communication, 26:283-297, 1998.
[7] M. Padmanabhan, G. Saon, S. Basu, J. Huang and G. Zweig. Recent improvements in voicemail transcription. Proceedings of EUROSPEECH'99, Budapest, Hungary, 1999.
[8] G. Saon, M. Padmanabhan, R. Gopinath and S. Chen. Maximum likelihood discriminant feature spaces. Proceedings of ICASSP'2000, Istanbul, Turkey, 2000.
[9] S.R. Searle. Matrix Algebra Useful for Statistics. Wiley Series in Probability and Mathematical Statistics, New York, 1982.
Active Learning for Parameter Estimation in Bayesian Networks

Simon Tong
Computer Science Department, Stanford University
simon.tong@cs.stanford.edu

Daphne Koller
Computer Science Department, Stanford University
koller@cs.stanford.edu

Abstract

Bayesian networks are graphical representations of probability distributions. In virtually all of the work on learning these networks, the assumption is that we are presented with a data set consisting of randomly generated instances from the underlying distribution. In many situations, however, we also have the option of active learning, where we have the possibility of guiding the sampling process by querying for certain types of samples. This paper addresses the problem of estimating the parameters of Bayesian networks in an active learning setting. We provide a theoretical framework for this problem, and an algorithm that chooses which active learning queries to generate based on the model learned so far. We present experimental results showing that our active learning algorithm can significantly reduce the need for training data in many situations.

1 Introduction

In many machine learning applications, the most time-consuming and costly task is the collection of a sufficiently large data set. Thus, it is important to find ways to minimize the number of instances required. One possible method for reducing the number of instances is to choose better instances from which to learn. Almost universally, the machine learning literature assumes that we are given a set of instances chosen randomly from the underlying distribution. In this paper, we assume that the learner has the ability to guide the instances it gets, selecting instances that are more likely to lead to more accurate models. This approach is called active learning.

The possibility of active learning can arise naturally in a variety of domains, in several variants. In selective active learning, we have the ability of explicitly asking for an example of a certain "type"; i.e., we can ask for a full instance where some of the attributes take on requested values. For example, if our domain involves webpages, the learner might be able to ask a human teacher for examples of homepages of graduate students in a Computer Science department. A variant of selective active learning is pool-based active learning, where the learner has access to a large pool of instances, about which it knows only the value of certain attributes. It can then ask for instances in this pool for which these known attributes take on certain values. For example, one could redesign the U.S. census to have everyone fill out only the short form; the active learner could then select among the respondents those that should fill out the more detailed long form. Another example is a cancer study in which we have a list of people's ages and whether they smoke, and we can ask a subset of these people to undergo a thorough examination.

In such active learning settings, we need a mechanism that tells us which instances to select. This problem has been explored in the context of supervised learning [1, 2, 7, 9]. In this paper, we consider its application in the unsupervised learning task of density estimation. We present a formal framework for active learning in Bayesian networks (BNs). We assume that the graphical structure of the BN is fixed, and focus on the task of parameter estimation.
We define a notion of model accuracy, and provide an algorithm that selects queries in a greedy way, designed to improve model accuracy as much as possible. At first sight, the applicability of active learning to density estimation is unclear. Given that we are not simply sampling, it is initially not clear that an active learning algorithm even learns the correct density. In fact, we can show that our algorithm is consistent, i.e., it converges to the right density in the limit. Furthermore, it is not clear that active learning is necessarily beneficial in this setting. After all, if we are trying to estimate a distribution, then random samples from that distribution would seem the best source. Surprisingly, we provide empirical evidence showing that, in a range of interesting circumstances, our approach learns from significantly fewer instances than random sampling.

2 Learning Bayesian Networks

Let X = {X_1, ..., X_n} be a set of random variables, with each variable X_i taking values in some finite domain Dom[X_i]. A Bayesian network over X is a pair (G, θ) that represents a distribution over the joint space of X. G is a directed acyclic graph whose nodes correspond to the random variables in X and whose structure encodes conditional independence properties of the joint distribution. We use U_i to denote the set of parents of X_i. θ is a set of parameters which quantify the network by specifying the conditional probability distributions (CPDs) P(X_i | U_i). We assume that the CPD of each node consists of a separate multinomial distribution over Dom[X_i] for each instantiation u of the parents U_i. Hence, we have a parameter θ_{x_ij|u} for each x_ij ∈ Dom[X_i]; we use θ_{X_i|u} to represent the vector of parameters associated with the multinomial P(X_i | u).

Our focus is on the parameter estimation task: we are given the network structure G, and our goal is to use data to estimate the network parameters θ. We will use Bayesian parameter estimation, keeping a density over possible parameter values. As usual [5], we make the assumption of parameter independence, which allows us to represent the joint distribution p(θ) as a set of independent distributions, one for each multinomial θ_{X_i|u}. For multinomials, the conjugate prior is a Dirichlet distribution [4], which is parameterized by hyperparameters α_j ∈ ℝ^+, with α_* = Σ_j α_j. Intuitively, α_j represents the number of "imaginary samples" observed prior to observing any data. In particular, if X is distributed multinomial with parameters θ = (θ_1, ..., θ_r), and p(θ) is Dirichlet, then the probability that our next observation is x_j is α_j/α_*. If we obtain a new instance X = x_j sampled from this distribution, then our posterior distribution p(θ) is also Dirichlet, with hyperparameters (α_1, ..., α_j + 1, ..., α_r). In a BN with the parameter independence assumption, we have a Dirichlet distribution for every multinomial distribution θ_{X_i|u}. Given a distribution p(θ), we use α_{x_ij|u} to denote the hyperparameter corresponding to the parameter θ_{x_ij|u}.

3 Active Learning

Assume we start out with a network structure G and a prior distribution p(θ) over the parameters of G. In a standard machine learning framework, data instances are independently, randomly sampled from some underlying distribution. In an active learning setting, we have the ability to request certain types of instances. We formalize this idea by assuming that some subset C of the variables are controllable.
The learner can select a subset of variables Q ⊆ C and a particular instantiation q to Q. The request Q = q is called a query. The result of such a query is a randomly sampled instance x conditioned on Q = q. A (myopic) active learner is a querying function that takes G and p(θ) and selects a query Q = q. It takes the resulting instance x, and uses it to update its distribution p(θ) to obtain a posterior p'(θ). It then repeats the process, using p' for p. We note that p(θ) summarizes all the relevant aspects of the data seen so far, so that we do not need to maintain the history of previous instances. To fully specify the algorithm, we need to address two issues: we need to describe how our parameter distribution is updated given that x is not a random sample, and we need to construct a mechanism for selecting the next query based on p.

To answer the first issue, assume for simplicity that our query is Q = q for a single node Q. First, it is clear that we cannot use the resulting instance x to update the parameters of the node Q itself. However, we also have a more subtle problem. Consider a parent U of Q. Although x does give us information about the distribution of U, it is not information that we can conveniently use. Intuitively, P(U | Q = q) is sampled from a distribution specified by a complex formula involving multiple parameters. We avoid this problem simply by ignoring the information provided by x on nodes that are "upstream" of Q. More generally, we define a variable Y to be updateable in the context of a selective query Q if it is not in Q or an ancestor of a node in Q. Our update rule is now very simple. Given a prior distribution p(θ) and an instance x from a query Q = q, we do standard Bayesian updating, as in the case of randomly sampled instances, but we update only the Dirichlet distributions of updateable nodes. We use p(θ ← Q = q, x) to denote the distribution p'(θ) obtained from this algorithm; this can be read as "the density of θ after asking query q and obtaining the response x".

Our second task is to construct an algorithm for deciding on our next query given our current distribution p. The key step in our approach is the definition of a measure for the quality of our learned model. This allows us to evaluate the extent to which various instances would improve the quality of our model, thereby providing us with an approach for selecting the next query to perform. Our formulation is based on the framework of Bayesian point estimation. In the Bayesian learning framework, we maintain a distribution p(θ) over all of the model parameters. However, when we are asked to reason using the model, we typically "collapse" this distribution over parameters, generate a single representative model θ̃, and answer questions relative to that. If we choose to use θ̃ whereas the "true" model is θ*, we incur some loss Loss(θ̃ ‖ θ*). Our goal is to minimize this loss. Of course, we do not have access to θ*. However, our posterior distribution p(θ) represents our "optimal" beliefs about the different possible values of θ*, given our prior knowledge and the evidence. Therefore, we can define the risk of a particular θ̃ with respect to p as:

  E_{θ ~ p(θ)} [ Loss(θ ‖ θ̃) ] = ∫ Loss(θ ‖ θ̃) p(θ) dθ.   (1)

We then define the Bayesian point estimate to be the value of θ̃ that minimizes the risk. We shall only be considering the Bayesian point estimate; thus we define the risk of a density p, Risk(p(θ)), to be the risk of the optimal θ̃ with respect to p.
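As a toy illustration of these definitions (a sketch assuming a single multinomial with a Dirichlet density and KL loss; the hyperparameters are arbitrary), the risk can be estimated by Monte Carlo sampling around the Dirichlet-mean point estimate:

import numpy as np

def mc_risk(alpha, n_samples=20000, seed=0):
    # Monte-Carlo estimate of Risk(p(theta)) for a Dirichlet density p
    # under KL loss, using the Bayesian point estimate (the Dirichlet mean).
    rng = np.random.default_rng(seed)
    alpha = np.asarray(alpha, dtype=float)
    point = alpha / alpha.sum()               # the mean minimizes expected KL
    thetas = np.clip(rng.dirichlet(alpha, size=n_samples), 1e-12, None)
    return np.mean(np.sum(thetas * np.log(thetas / point), axis=1))

print(mc_risk([1, 1, 1]))      # uninformed density: high risk
print(mc_risk([50, 30, 20]))   # sharper density: much lower risk

As (real or imaginary) samples accumulate, the density concentrates around its mean and the risk shrinks.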
The risk of our density p(θ) is our measure for the quality of our current state of knowledge, as represented by p(θ). In a greedy scheme, our goal is to obtain an instance x such that the risk of the p' obtained by updating p with x is lowest. Of course, we do not know exactly which x we are going to get. We know only that it will be sampled from a distribution induced by our query. Our expected posterior risk is therefore:

  ExPRisk(p(θ) | Q = q) = E_{θ ~ p(θ)} E_{x ~ P_θ(X | Q = q)} Risk(p(θ ← Q = q, x)).   (2)

This definition leads immediately to the following simple algorithm: for each candidate query Q = q, we evaluate the expected posterior risk, and then select the query for which it is lowest.

4 Active Learning Algorithm

To obtain a concrete algorithm from the active learning framework shown in the previous section, we must pick a loss function. There are many possible choices, but perhaps the best justified is the relative entropy or Kullback-Leibler divergence (KL-divergence) [3]:

  KL(θ ‖ θ̃) = Σ_x P_θ(x) ln ( P_θ(x) / P_θ̃(x) ).

The KL-divergence has several independent justifications, and a variety of properties that make it particularly suitable as a measure of distance between distributions. We therefore proceed in this paper using KL-divergence as our loss function. (An analogous analysis can be carried through for another very natural loss function, the negative log-likelihood of future data; in the case of multinomial CPDs with Dirichlet densities over the parameters, this results in an identical final algorithm.)

We now want to find an efficient approach to computing the risk. Two properties of KL-divergence turn out to be crucial. The first is that the value θ̃ that minimizes the risk relative to p is the mean value of the parameters, E_{θ ~ p(θ)}[θ]. For a Bayesian network with independent Dirichlet distributions over the parameters, this expression reduces to θ̃_{x_ij|u} = α_{x_ij|u} / α_{x_i·|u} (where α_{x_i·|u} = Σ_j α_{x_ij|u}), the standard (Bayesian) approach for collapsing a distribution over BN models into a single model. The second observation is that, for BNs, KL-divergence decomposes with the graphical structure of the network:

  KL(θ ‖ θ̃) = Σ_i KL( P_θ(X_i | U_i) ‖ P_θ̃(X_i | U_i) ),   (3)

where KL(P(X_i | U_i) ‖ P'(X_i | U_i)) is the conditional KL-divergence and is given by Σ_u P(u) KL(P(X_i | u) ‖ P'(X_i | u)). With these two facts, we can prove the following:

Theorem 4.1 Let Γ(α) be the Gamma function, Ψ(α) be the digamma function Γ'(α)/Γ(α), and H be the entropy function. Define

  δ(α_1, ..., α_r) = Σ_{j=1}^{r} (α_j/α_*) ( Ψ(α_j + 1) − Ψ(α_* + 1) ) + H(α_1/α_*, ..., α_r/α_*).

Then the risk decomposes as:

  Risk(p(θ)) = Σ_i Σ_{u ∈ Dom[U_i]} P_θ̃(u) δ(α_{x_i1|u}, ..., α_{x_iri|u}).   (4)

Eq. (4) gives us a concrete expression for evaluating the risk of p(θ). However, to evaluate a potential query, we also need its expected posterior risk. Recall that this is the expectation, over all possible answers to the query, of the risk of the posterior distribution p'. In other words, it is an average over an exponentially large set of possibilities. To understand how we can evaluate this expression efficiently, we first consider a much simpler case. Consider a BN where we have only one child node X and its parents U, i.e., the only edges are from the nodes in U to X. We also restrict attention to queries where we control all and only the parents U. In this case, a query q is an instantiation to U, and the possible outcomes of the query are the possible values of the variable X. The expected posterior risk contains a term for each variable X_i and each instantiation of its parents.
In particular, it contains a term for each of the parent variables U. However, as these variables are not updateable, their hyperparameters remain the same following any query q. Hence, their contribution to the risk is the same in every p(θ ← U = q, x) and in our prior p(θ). Thus, we can ignore the terms corresponding to the parents, and focus on the terms associated with the conditional distribution P(X | U). Hence, we have:

  Risk_X(p(θ)) = Σ_u P_θ̃(u) δ(α_{x_1|u}, ..., α_{x_r|u}),   (5)

  ExPRisk_X(p(θ) | U = q) = Σ_u P_θ̃(u) Σ_j P_θ̃(x_j | q) δ(α^{x_j}_{x_1|u}, ..., α^{x_j}_{x_r|u}),   (6)

where α^{x_j}_{x_i|u} is the hyperparameter in p(θ ← U = q, x_j). Rather than evaluating the expected posterior risk directly, we will evaluate the reduction in risk obtained by asking a query U = q:

  Δ(X | q) = Risk(p(θ)) − ExPRisk(p(θ) | q) = Risk_X(p(θ)) − ExPRisk_X(p(θ) | q).

Our first key observation relies on the fact that the variables U are not updateable for this query, so that their hyperparameters do not change. Hence, P_θ̃(u) and P_θ̃'(u) are the same. The second observation is that the hyperparameters corresponding to an instantiation u are the same in p and p' except for u = q. Hence, terms cancel and the expression simplifies to

  P_θ̃(q) ( δ(α_{x_1|q}, ..., α_{x_r|q}) − Σ_j P_θ̃(x_j | q) δ(α^{x_j}_{x_1|q}, ..., α^{x_j}_{x_r|q}) ).

By taking advantage of certain functional properties of Ψ, we finally obtain:

  Δ(X | q) = P_θ̃(q) ( H( α_{x_1|q}/α_{·|q}, ..., α_{x_r|q}/α_{·|q} ) − Σ_j P_θ̃(x_j | q) H( α^{x_j}_{x_1|q}/α^{x_j}_{·|q}, ..., α^{x_j}_{x_r|q}/α^{x_j}_{·|q} ) ).   (7)

If we now select our query q so as to maximize the difference between our current risk and the expected posterior risk, we get a very natural behavior: we will select the query q that leads to the greatest reduction in the entropy of X given its parents. It is also here that we can gain an insight as to where active learning has an edge over random sampling. Consider a query q_1 which is 100 times less likely than a query q_2; q_1 will lead us to update a parameter whose current density is Dirichlet(1, 1), whereas q_2 will lead us to update a parameter whose current density is Dirichlet(100, 100). However, according to Δ, updating the former is worth more than the latter. In other words, if we are confident about commonly occurring situations, it is worth more to ask about the rare cases.

We now generalize this derivation to the case of an arbitrary BN and an arbitrary query. Here, our average over possible query answers encompasses exponentially many terms. Fortunately, we can utilize the structure of the BN to avoid an exhaustive enumeration.

Theorem 4.2 For an arbitrary BN and an arbitrary query Q = q, the expected KL posterior risk decomposes as:

  ExPRisk(p(θ) | Q = q) = Σ_i Σ_{u ∈ Dom[U_i]} P_θ̃(u | Q = q) ExPRisk_{X_i}(p(θ) | U_i = u).

In other words, the expected posterior risk is a weighted sum of expected posterior risks for conditional distributions of individual nodes X_i, where for each node we consider "queries" that are complete instantiations of its parents U_i. We now have similar decompositions for the risk and the expected posterior risk. The obvious next step is to consider the difference between them, and then simplify it as we did for the case of a single variable. Unfortunately, in the case of general BNs, we can no longer exploit one of our main simplifying assumptions. Recall that, in the expression for the risk (Eq. (5)), the term involving X_i and u is weighted by P_θ̃(u). In the expected posterior risk, the weight is P_θ̃'(u).
In the case of a single node and a full parent query, the hyperparameters of the parents could not change, so these two weights were necessarily the same. In the more general setting, an instantiation x can change hyperparameters all through the network, leading to different weights. However, we believe that a single data instance will not usually lead to a dramatic change in the distributions. Hence, these weights are likely to be quite close. To simplify the formula (and the associated computation), we therefore choose to approximate the posterior probability P_θ̃'(u) using the prior probability P_θ̃(u). Under this assumption, we can use the same simplification as we did in the single-node case. Assuming that this approximation is a good one, we have that:

  Δ(X | q) = Risk(p(θ)) − ExPRisk(p(θ) | q) ≈ Σ_i Σ_{u ∈ Dom[U_i]} P_θ̃(u | q) Δ(X_i | u),   (8)

where Δ(X_i | u) is as defined in Eq. (7). Notice that we actually only need to sum over the updateable X_i's, since Δ(X_i | u) will be zero for all non-updateable X_i's.

The above analysis provides us with an efficient implementation of our general active learning scheme. We simply choose a set of variables in the Bayesian network that we wish to control, and for each instantiation of the controllable variables we compute the expected change in risk given by Eq. (8). We then ask the query with the greatest expected change and update the parameters of the updateable nodes. We now consider the computational complexity of the algorithm. It turns out that, for each potential query, all of the desired quantities can be obtained via two inference passes using a standard join tree algorithm [6]. Thus, the run-time complexity of the algorithm is O(|Q| · cost of BN join tree inference), where Q is the set of candidate queries.

Our algorithm (approximately) finds the query that reduces the expected risk the most. We can show that our specific querying scheme (including the approximation) is consistent. As we mentioned before, this statement is non-trivial and depends heavily on the specific querying algorithm.

Figure 1: (a) Alarm network with three controllable nodes. (b) Asia network with two controllable nodes. (c) Cancer network with one controllable node. The axes are zoomed for resolution.

Theorem 4.3 Let U be the set of nodes which are updateable for at least one candidate query at each querying step. Assuming that the underlying true distribution is not deterministic, our querying algorithm produces consistent estimates for the CPD parameters of every member of U.

5 Experimental Results

We performed experiments on three commonly used networks: Alarm, Asia and Cancer. Alarm has 37 nodes and 518 independent parameters, Asia has eight nodes and 18 independent parameters, and Cancer has five nodes and 11 independent parameters. We first needed to set the priors for each network. We used the standard approach [5] of eliciting a network and an equivalent sample size. In our experiments, we assumed that we had fairly good background knowledge of the domain. To simulate this, we obtained our prior by sampling a few hundred instances from the true network and used the counts (together with smoothing from a uniform prior) as our prior. This is akin to asking for a prior network from a domain expert, or to using an existing set of complete data to find initial settings of the parameters.
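With priors in place, the greedy selection rule of Eqs. (7)-(8) can be sketched compactly for the single-node case (Python; a toy illustration with made-up counts; the full system scores queries over the whole network using join-tree inference):

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def risk_reduction(alpha, p_u):
    # Delta(X | u) of Eq. (7): expected drop in risk from querying U = u,
    # weighted by the probability P(u) of the context u.
    a = np.asarray(alpha, dtype=float)
    a_star = a.sum()
    h_now = entropy(a / a_star)
    h_next = 0.0
    for j in range(len(a)):
        a_post = a.copy()
        a_post[j] += 1.0                       # counts if the answer is x_j
        h_next += (a[j] / a_star) * entropy(a_post / (a_star + 1.0))
    return p_u * (h_now - h_next)

# A confident, common context versus an uncertain, rare one (made-up numbers).
candidates = {"common u": ([100.0, 100.0], 0.99),
              "rare u":   ([1.0, 1.0],     0.01)}
scores = {q: risk_reduction(a, p) for q, (a, p) in candidates.items()}
print(scores)   # the rare context scores higher despite its low probability

Even weighted by P(u) = 0.01, the uncertain context wins, matching the earlier observation that confident knowledge of common cases makes the rare cases the more valuable queries.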
5 Experimental Results

We performed experiments on three commonly used networks: Alarm, Asia, and Cancer. Alarm has 37 nodes and 518 independent parameters, Asia has eight nodes and 18 independent parameters, and Cancer has five nodes and 11 independent parameters. We first needed to set the priors for each network. We used the standard approach [5] of eliciting a network and an equivalent sample size. In our experiments, we assumed that we had fairly good background knowledge of the domain. To simulate this, we obtained our prior by sampling a few hundred instances from the true network and used the counts (together with smoothing from a uniform prior) as our prior. This is akin to asking for a prior network from a domain expert, or to using an existing set of complete data to find initial settings of the parameters.

We then compared refining the parameters either by using active learning or by random sampling. We permitted the active learner to abstain from choosing a value for a controlled node if it did not wish to; that node is then sampled as usual.

[Figure 1: (a) Alarm network with three controllable nodes. (b) Asia network with two controllable nodes. (c) Cancer network with one controllable node. Each panel plots KL-divergence against the number of queries; the axes are zoomed for resolution.]

Figure 1 presents the results for the three networks. The graphs compare the KL-divergence between the learned networks and the true network that is generating the data. We see that active learning provides a substantial improvement in all three networks. The improvement in the Alarm network is particularly striking given that we had control of just three of the 37 nodes. The extent of the improvement depends on the extent to which queries allow us to reach rare events. For example, Smoking is one of the controllable variables in the Asia network. In the original network, P(Smoking) = 0.5. Although there was a significant gain from using active learning in this network, we found that there was a greater increase in performance when we altered the generating network to have P(Smoking) = 0.9; this is the graph that is shown. We also experimented with specifying uniform priors with a small equivalent sample size. Here, we obtained a significant benefit in the Asia network, and some marginal improvement in the other two. One possible reason is that the improvement is "washed out" by randomness, as the active learner and the standard learner are learning from different instances. Another explanation is that the approximation in Eq. (8) may not hold as well when the prior p(θ) is uninformed and thereby easily perturbed even by a single instance. This indicates that our algorithm may perform best when refining an existing domain model. Overall, we found that in almost all situations active learning performed as well as or better than random sampling. The situations where active learning produced the most benefit were, unsurprisingly, those in which the prior was confident and correct about the commonly occurring cases and uncertain and incorrect about the rare ones. Clearly, this is precisely the scenario we are most likely to encounter in practice when the prior is elicited from an expert. By experimenting with forcing different priors, we found that active learning was worse in one type of situation: where the prior was confident yet incorrect about the commonly occurring cases and uncertain but actually correct about the rare ones. This type of scenario is unlikely to occur in practice. Another factor affecting the performance was the degree to which the controllable nodes could influence the updateable nodes.

6 Discussion and Conclusions

We have presented a formal framework and a resulting querying algorithm for parameter estimation in Bayesian networks. To our knowledge, this is one of the first applications of active learning in an unsupervised context. Our algorithm uses parameter distributions to guide the learner to ask the queries that will improve the quality of its estimates the most. BN active learning can also be performed in a causal setting. A query then acts as an experiment: it intervenes in a model and forces variables to take particular values. Using Pearl's intervention theory [8], we can easily extend our analysis to deal with this case. The only difference is that the notion of an updateable node is even simpler: any node that is not part of a query is updateable. Regrettably, space prohibits a more complete exposition.
We have demonstrated that active learning can have significant advantages for the task of parameter estimation in BNs, particularly in the case where our parameter prior is of the type that a human expert is likely to provide. Intuitively, the benefit comes from estimating the parameters associated with rare events. Although it is less important to estimate the probabilities of rare events accurately, the number of relevant instances obtained by randomly sampling from the distribution is still not enough. We note that this advantage arises even when we use a loss function that considers only the accuracy of the distribution. In many practical settings, such as medical or fault diagnosis, the rare cases are even more important, as they are often the ones that it is critical for the system to deal with correctly. A further direction that we are pursuing is active learning for the causal structure of a domain. In other words, we are presented with a domain whose causal structure we wish to understand, and we want to know the best sequence of experiments to perform.

Acknowledgements

The experiments were performed using the PHROG system, developed primarily by Lise Getoor, Uri Lerner, and Ben Taskar. Thanks to Carlos Guestrin and Andrew Ng for helpful discussions. The work was supported by DARPA's Information Assurance program under subcontract to SRI International, and by ARO grant DAAH04-96-1-0341 under the MURI program "Integrated Approach to Intelligent Systems".

References
[1] A.C. Atkinson and A.N. Donev. Optimal Experimental Designs. Oxford University Press, 1992.
[2] D. Cohn, Z. Ghahramani, and M. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4, 1996.
[3] T.M. Cover and J.A. Thomas. Elements of Information Theory. Wiley, 1991.
[4] M.H. DeGroot. Optimal Statistical Decisions. McGraw-Hill, New York, 1970.
[5] D. Heckerman, D. Geiger, and D.M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243, 1995.
[6] S.L. Lauritzen and D.J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Statistical Society, B 50(2), 1988.
[7] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4:590-604, 1992.
[8] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[9] H.S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proc. COLT, pages 287-294, 1992.
The Early Word Catches the Weights

Mark A. Smith    Garrison W. Cottrell    Karen L. Anderson
Department of Computer Science
University of California at San Diego
La Jolla, CA 92093
{masmith,gary,kanders}@cs.ucsd.edu

Abstract

The strong correlation between the frequency of words and their naming latency has been well documented. However, as early as 1973, the Age of Acquisition (AoA) of a word was alleged to be the actual variable of interest, but these studies seem to have been ignored in most of the literature. Recently, there has been a resurgence of interest in AoA. While some studies have shown that frequency has no effect when AoA is controlled for, more recent studies have found independent contributions of frequency and AoA. Connectionist models have repeatedly shown strong effects of frequency, but little attention has been paid to whether they can also show AoA effects. Indeed, several researchers have explicitly claimed that they cannot show AoA effects. In this work, we explore these claims using a simple feed-forward neural network. We find a significant contribution of AoA to naming latency, as well as conditions under which frequency provides an independent contribution.

1 Background

Naming latency is the time between the presentation of a picture or written word and the beginning of the correct utterance of that word. It is undisputed that there are significant differences in the naming latency of many words, even when controlling for word length, syllabic complexity, and other structural variants. The cause of differences in naming latency has been the subject of numerous studies. Earlier studies found that the frequency with which a word appears in spoken English is the best determinant of its naming latency (Oldfield & Wingfield, 1965). More recent psychological studies, however, show that the age at which a word is learned, or its Age of Acquisition (AoA), may be a better predictor of naming latency. Further, in many multiple regression analyses, frequency is not found to be significant when AoA is controlled for (Brown & Watson, 1987; Carroll & White, 1973; Morrison et al., 1992; Morrison & Ellis, 1995). These studies show that frequency and AoA are highly correlated (typically r = −.6), explaining the confound of older studies on frequency. However, still more recent studies question this finding and find that both AoA and frequency are significant and contribute independently to naming latency (Ellis & Morrison, 1998; Gerhand & Barry, 1998, 1999).

Much like their psychological counterparts, connectionist networks also show very strong frequency effects. However, the ability of a connectionist network to show AoA effects has been doubted (Gerhand & Barry, 1998; Morrison & Ellis, 1995). Most of these claims are based on the well-known fact that connectionist networks exhibit "destructive interference," in which later-presented stimuli, in order to be learned, force early-learned inputs to become less well represented, effectively increasing their associated errors. However, these effects only occur when training ceases on the early patterns. Continued training on all the patterns mitigates the effects of interference from later patterns. Recently, Ellis & Lambon-Ralph (in press) have shown that when pattern presentation is staged, with one set of patterns initially trained and a second set added into the training set later, strong AoA effects are found.
They show that this result is due to a loss of plasticity in the network units, which tend to get out of their linear range with more training. While this result is not surprising, it is a good model of the fact that some words may not come into existence until late in life, such as "email" for baby boomers. However, they explicitly claim that it is important to stage the learning in this way, and offer no explanation of what happens during early word acquisition, when the surrounding vocabulary is relatively constant, or of why and when frequency and AoA show independent effects. In this paper, we present an abstract feed-forward computational model of word acquisition that does not stage inputs. We use this model to examine the effects of frequency and AoA on sum squared error, the usual variable used to model reaction time. We find a consistent contribution of AoA to naming latency, as well as the conditions under which there is an independent contribution from frequency in some tasks.

2 Experiment 1: Do networks show AoA effects?

Our first goal was to show that AoA effects could be observed in a connectionist network using the simplest possible model. First, we need to define AoA in a network. We did this in such a way that staging the inputs was not necessary: we defined a threshold for the error, below which we say a pattern has been "acquired." The AoA is defined to be the epoch during which this threshold is first crossed. Since the error for a particular pattern may occasionally go up again during online learning, we also measured the last epoch at which the pattern went below the threshold for the final time. We analyzed our networks using both definitions of acquisition (which we call first acquisition and final acquisition) and have found that the results vary little between these definitions. In what follows, we use first acquisition for simplicity.

2.1 The Model

The simplest possible model is an autoencoder network. Using a network architecture of 20-15-20, we trained the network to autoencode 200 patterns of random bits (each bit had a 50% probability of being on or off). We initialized the weights randomly with a flat distribution of values between −0.1 and 0.1, and used a learning rate of 0.001 and momentum of 0.9. For this experiment, we chose the AoA threshold to be 2, indicating an average squared error of .1 per input bit, which yields outputs much closer to the correct output than to any other. We calculated Euclidean distances between all outputs and patterns to verify that each input was mapped most closely to its correct output. Training on the entire corpus continued until 98% of all patterns fell below this threshold.
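The acquisition bookkeeping in this setup is easy to reproduce. The following is a minimal sketch under stated assumptions: tanh hidden units with linear outputs (the activation we use for the networks in our later experiments), full-batch gradient updates rather than per-pattern online updates, the constant factor of 2 in the squared-error gradient absorbed into the learning rate, and hypothetical variable names.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pat, n_in, n_hid, thresh = 200, 20, 15, 2.0
X = (rng.random((n_pat, n_in)) < 0.5).astype(float)   # 200 random 20-bit patterns
W1 = rng.uniform(-0.1, 0.1, (n_in, n_hid))            # flat init in [-0.1, 0.1]
W2 = rng.uniform(-0.1, 0.1, (n_hid, n_in))
v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
lr, mom = 0.001, 0.9
aoa = np.full(n_pat, -1)                              # first epoch with SSE < thresh

for epoch in range(50000):
    h = np.tanh(X @ W1)                               # hidden layer
    out = h @ W2                                      # linear output layer
    err = out - X                                     # autoencoding error
    sse = (err ** 2).sum(axis=1)                      # per-pattern sum squared error
    aoa[(sse < thresh) & (aoa < 0)] = epoch           # record first acquisition
    if (sse < thresh).mean() >= 0.98:                 # stop once 98% are below threshold
        break
    g2 = h.T @ err                                    # gradient w.r.t. W2
    g1 = X.T @ ((err @ W2.T) * (1.0 - h ** 2))        # backprop through tanh
    v1, v2 = mom * v1 - lr * g1, mom * v2 - lr * g2   # momentum updates
    W1 += v1
    W2 += v2

acquired = aoa >= 0
r = np.corrcoef(aoa[acquired], sse[acquired])[0, 1]   # AoA vs. final SSE
```

The final line computes the AoA-versus-final-SSE correlation plotted in Figure 1.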
2.2 Results

After the network had learned the input corpus, we investigated the relationship between the epoch at which an input vector had been learned and the final sum squared error (equivalent, for us, to "adult" naming latency) for that input vector. These results are presented in Figure 1.

[Figure 1: Exp. 1. Final SSE vs. AoA.]

The relationship between the age of acquisition of an input vector and its final sum squared error is clear: the earlier an input is learned, the lower its final error will be. A more formal analysis of this relationship yields a significant (p ≪ .005) correlation coefficient of r = 0.749, averaged over 10 runs of the network. In order to understand this relationship better, we divided the learned words into five percentile groups depending upon AoA. Figure 2 shows the average SSE for each group plotted over epoch number.

[Figure 2: Exp. 1. Average SSE by AoA percentile group, plotted over epochs of learning.]

The line with the lowest average SSE corresponds to the earliest acquired quintile, while the line with the highest average SSE corresponds to the last acquired quintile. From this graph we can see that the average SSE for earlier-learned patterns stays below the errors for late-learned patterns. This is true from the outset of learning, as well as when the error starts to decrease less rapidly as it asymptotically approaches some lowest error limit. We sloganize this result as "the patterns that get to the weights first, win."

3 Experiment 2: Do AoA effects survive a frequency manipulation?

Having shown that AoA effects are present in connectionist networks, we wanted to investigate the interaction with frequency. We model the frequency distribution of inputs after the known frequency distribution of spoken English words, in which very few words appear very often while a very large portion of words appear very seldom (Zipf's law). The frequency distribution we used (presentation probability = 0.05 + 0.95 · ((1 − input_number/num_inputs) + 0.05)^10) is presented in Figure 3 (a true version of Zipf's law still shows the result). Otherwise, all parameters are the same as in Exp. 1.

[Figure 3: Exp. 2 Frequency Distribution (frequency of appearance vs. pattern number).]
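As a sketch of how such a distribution can drive training (hypothetical names; a sketch, not the exact procedure we used), the presentation probabilities can be precomputed per pattern and used to subsample the corpus each epoch. Note that the formula above exceeds 1 for the most frequent patterns; we clip it here and treat those patterns as presented every epoch, which is an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(0)
num_inputs = 200
idx = np.arange(num_inputs)                    # pattern number; 0 is most frequent
raw = 0.05 + 0.95 * ((1.0 - idx / num_inputs) + 0.05) ** 10
p_present = np.minimum(raw, 1.0)               # clip values above 1 (assumption)

def sample_epoch(X):
    """Patterns presented this epoch, each drawn with its own probability."""
    return X[rng.random(len(X)) < p_present]
```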
3.1 Results

[Figure 4: Exp. 2 Final SSE vs. AoA.]

Results are plotted in Figure 4. Here we again find a very strong and significant (p ≪ 0.005) correlation between the age at which an input is learned and its naming latency. The correlation coefficient averaged over 10 runs is 0.668. This fits very well with the known data. Figure 5 shows how the frequency of presentation of a given stimulus correlates with naming latency.

[Figure 5: Exp. 2 Frequency vs. SSE (naming latency vs. frequency of appearance). Figure 6: Exp. 2 AoA vs. Frequency.]

We find that the best-fitting correlation is an exponential one, in which naming latency correlates most strongly with the log of the frequency. The correlation coefficient averaged over 10 runs is significant (p ≪ 0.005) at −0.730. This is a slightly stronger correlation than is found in the literature. Finally, Figure 6 shows how frequency and AoA are related. Again, we find a significant (p < 0.005) correlation coefficient of −0.283, averaged over 10 runs. However, this is a much weaker correlation than is found in the literature. Performing a multiple regression with SSE as the dependent variable and AoA and log frequency as the explanatory variables, we find that both AoA and log frequency contribute significantly (p ≪ 0.005 for both variables) to the regression equation. Whereas AoA correlates with SSE at 0.668 and log frequency correlates with SSE at −0.730, the multiple correlation coefficient averaged over 10 runs is 0.794. AoA and log frequency each make independent contributions to naming latency.

We were encouraged that we found effects of both frequency and AoA on SSE in our model, but were surprised by the small size of the correlation between the two. The naming literature shows a strong correlation between AoA and frequency. However, pilot work with a smaller network showed no frequency effect, which was due to the autoencoding task in a network where the patterns filled 20% of the input space (200 random patterns in a 10-8-10 network, with 1024 patterns possible). This suggests that autoencoding is not an appropriate task with which to model naming, and would give rise to the low correlation between AoA and frequency. Indeed, English spellings and their corresponding sounds are certainly correlated, but not completely consistent, with many exceptional mappings. Spelling-sound consistency has been shown to have a significant effect on naming latency (Jared, McRae, & Seidenberg, 1990). Object naming, another task in which AoA effects are found, is a completely arbitrary mapping. Our third experiment looks at the effect that the consistency of our mapping task has on AoA and frequency effects.

4 Experiment 3: Consistency effects

Our model in this experiment is identical to the previous model except for two changes. First, to encode mappings with varying degrees of consistency, we needed to increase the number of hidden units to 50, resulting in a 20-50-20 architecture. Second, we found that some patterns would end up with one bit off, leading to a bimodal distribution of SSEs; we therefore used cross-entropy error to ensure that all bits would be learned. Eleven levels of consistency were defined, from 100% consistent, or autoencoding, to 0% consistent, or a mapping from one random 20-bit vector to another random 20-bit vector. Note that in a 0% consistent mapping, since each bit has a 50% chance of being on, about 50% of the bits will be the same by chance. Thus an intermediate level of 50% consistency will have on average 75% of the corresponding bits equal.

[Figure 7: Exp. 3 R-values vs. Consistency (correlation strength vs. mapping consistency, from autoencoding to arbitrary). Figure 8: Exp. 3 P-values vs. Consistency (variable significance vs. consistency).]

4.1 Results

Using this scheme, ten runs at each consistency level were performed. Correlation coefficients between AoA and naming latency (RMSE), between log(frequency) and naming latency, and between AoA and log(frequency) were examined. These results can be found in Figure 7. It is clear that AoA exhibits a strong effect on RMSE at all levels of consistency, peaking at a fully consistent mapping. We believe that this may be due to the weaker effect of frequency when all patterns are consistent and each pattern is supporting the same mapping. Frequency also shows a strong effect on RMSE at all levels of consistency, with its influence being lowest in the autoencoding task, as expected. Most interesting is the correlation strength between AoA and frequency across consistency levels.
While we do not yet have a good explanation for the dip in correlation at the 80-90% consistency level, it provides a possible explanation of the multiple regression data we describe next. Multiple regressions with error as the dependent variable and log(frequency) and AoA as the explanatory variables were performed. In Figure 8, we plot the negative log of the p-values of AoA and log(frequency) in the regression equation over consistency levels. Most notable is the finding that AoA is significant at extreme levels at all levels of consistency; a value of 30 on this plot corresponds to a p-value of 10^−30. The significance of log frequency has a more complex interaction with consistency. Log frequency does not achieve significance in determining SSE until the patterns are almost 40% consistent. For more consistent mappings, however, its significance increases dramatically, to a p-value of less than 10^−10, and then declines toward autoencoding.

The data which may help us to explain what we see in Figure 8 actually lie in Figure 7. There is a relationship between the significance of log frequency and the strength of the correlation between AoA and log frequency. As AoA and frequency become less correlated, the significance of frequency increases, and vice versa. Therefore, as frequency and AoA become less correlated, frequency is able to begin making an independent contribution to the SSE of the network. Such interactions may explain the sometimes inconsistent findings in the literature; depending upon the task and the individual items in the stimuli, different levels of consistency of mapping can affect the results. However, each of these points represents an average over a set of networks with one average consistency value. It is doubtful that any natural mapping, such as spelling to sound, has such a uniform distribution. We rectify this in the next experiment.

5 Experiment 4: Modelling spelling-sound correspondences

Our final experiment is an abstract simulation of learning to read, in terms of both word frequency and spelling-sound consistency. Most English words are considered consistent in their spelling-sound relationship. This depends on whether words in their spelling "neighborhood" agree with them in pronunciation, e.g., "cave," "rave," and "pave." However, a small but important portion of our vocabulary consists of inconsistent words, e.g., "have." The reason that "have" continues to be pronounced inconsistently is that it is a very frequent word. Inconsistent words have the property that they are on average much more frequent than consistent words, although there are far more consistent words by number. To model this we created an input corpus of 170 consistent words and 30 inconsistent words. Inconsistent words were arbitrarily defined as 50% consistent, or an average of 5 bit flips in a 20-bit pattern; consistent words were modeled as 80% consistent, or an average of 2 bit flips per pattern. The 30 inconsistent words were presented with high frequencies, corresponding to the odd-numbered patterns (1..59) in Figure 3. The even-numbered patterns from 2 to 60 were consistent words. The remaining patterns were also consistent. This allowed us to compare consistent and inconsistent words in the same frequency range, controlling for frequency in a much cleaner manner than is possible in human subject experiments. The network is identical to the one in Experiment 3.

[Figure 9: Exp. 4 Consistency vs. frequency. Figure 10: Exp. 4 Consistency by AoA.]
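A sketch of this corpus construction, under the bit-flip reading of consistency given in Section 4 (a p-consistent target leaves each bit unchanged with probability (1 + p)/2); the names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pairs(n_words, n_bits=20, consistency=0.8):
    """Input/target pairs whose targets are consistency-degraded copies.

    With consistency p, each bit is flipped with probability (1 - p)/2,
    so 80% consistency flips ~2 of 20 bits and 50% flips ~5 (cf. Sec. 4).
    """
    x = (rng.random((n_words, n_bits)) < 0.5).astype(float)
    flip = rng.random((n_words, n_bits)) < (1.0 - consistency) / 2.0
    return x, np.abs(x - flip)            # XOR the selected bits

cons_x, cons_y = make_pairs(170, consistency=0.8)      # "consistent" words
incons_x, incons_y = make_pairs(30, consistency=0.5)   # "inconsistent" words
# The 30 inconsistent words then take the high-frequency (odd-numbered)
# slots of the Exp. 2 presentation distribution; the consistent words fill
# the even-numbered slots from 2 to 60 and all remaining slots.
```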
5.1 Results

We first analyzed the data for the standard consistency-by-frequency interaction. We labeled the 15 highest-frequency consistent and inconsistent patterns as "high frequency" and the next 15 of each as the "low frequency" patterns, in order to get the same number of patterns in each cell, by design. The results are shown in Figure 9 and exhibit the standard interaction. More interestingly, we did a post-hoc median split of the patterns based on their AoA, defining them as "early" or "late" in this way, and then divided them based on consistency. This is shown in Figure 10. An ANOVA using unequal-cell-size corrections shows a significant (p < .001) interaction between AoA and consistency.

6 Discussion

Although the possibility of Age of Acquisition effects in connectionist networks has been doubted, we found a very strong, significant, and reproducible effect of AoA on SSE, the variable most often used to model reaction time, in our networks. Patterns which are learned in earlier epochs consistently show lower final error values than their late-acquired counterparts. In this study, we have shown that this effect is present across various learning tasks, network topologies, and frequencies. Informally, we have found AoA effects across more network variants than reported here, including different learning rates, momentum values, stopping criteria, and frequency distributions. In fact, across all runs we conducted for this study, we found strong AoA effects, provided the network was able to learn its task. We believe that this is because AoA is an intrinsic property of connectionist networks.

We have performed some preliminary analyses concerning which patterns are acquired early. Using the setup of Experiment 1, that is, autoencoding of 20-bit patterns, we have found that the patterns that are most correlated with the other patterns in the training set tend to be the earliest acquired, with r² = 0.298. (We should note that inter-pattern correlations are very small, but positive, because no bits are negative.) Thus patterns that are most consistent with the training set are learned earliest. We have yet to investigate how this generalizes to arbitrary mappings, but, given the results of Experiment 4, it makes sense to predict that the most frequent, most consistently mapped patterns (e.g., those in the largest spelling-sound neighborhood) would be the earliest acquired, in the absence of other factors.

7 Future Work

This study used a very general network and learning task to demonstrate AoA effects in connectionist networks. There is therefore no reason to suspect that this effect is limited to words, and indeed, AoA effects have been found in face recognition. Meanwhile, we have not investigated the interaction of our simple model of AoA effects with staged presentation. Presumably words acquired late are fewer in number, and Ellis & Lambon-Ralph (in press) have shown that they must be extremely frequent to overcome their lateness. Our results suggest that patterns that are most consistent with earlier-acquired mappings would also overcome their lateness. We are particularly interested in applying these ideas to a realistic model of English reading acquisition, where actual consistency effects can be measured in the context of friend/enemy ratios in a neighborhood.
Finally, we would like to explore whether the AoA effect is universal in connectionist networks, or whether under some circumstances AoA effects are not observed.

Acknowledgements

We would like to thank Elizabeth Bates for alerting us to the work of Dr. Andrew Ellis, and the latter for providing us with a copy of Ellis & Lambon-Ralph (in press).

References
[1] Brown, G.D.A. & Watson, F.L. (1987). First in, first out: Word learning age and spoken word frequency as predictors of word familiarity and naming latency. Memory & Cognition, 15, 208-216.
[2] Carroll, J.B. & White, M.N. (1973). Word frequency and age of acquisition as determiners of picture-naming latency. Quarterly Journal of Experimental Psychology, 25, 85-95.
[3] Ellis, A.W. & Morrison, C.M. (1998). Real age of acquisition effects in lexical retrieval. Journal of Experimental Psychology: Learning, Memory, & Cognition, 24, 515-523.
[4] Ellis, A.W. & Lambon Ralph, M.A. (in press). Age of acquisition effects in adult lexical processing reflect loss of plasticity in maturing systems: Insights from connectionist networks. JEP: LMC.
[5] Gerhand, S. & Barry, C. (1998). Word frequency effects in oral reading are not merely age-of-acquisition effects in disguise. JEP: LMC, 24, 267-283.
[6] Gerhand, S. & Barry, C. (1999). Age of acquisition and frequency effects in speeded word naming. Cognition, 73, B27-B36.
[7] Jared, D., McRae, K., & Seidenberg, M.S. (1990). The basis of consistency effects in word naming. Journal of Memory and Language, 29, 687-715.
[8] Morrison, C.M., Ellis, A.W. & Quinlan, P.T. (1992). Age of acquisition, not word frequency, affects object naming, not object recognition. Memory & Cognition, 20, 705-714.
[9] Morrison, C.M. & Ellis, A.W. (1995). Roles of word frequency and age of acquisition in word naming and lexical decision. JEP: LMC, 21, 116-133.
[10] Oldfield, R.C. & Wingfield, A. (1965). Response latencies in naming objects. Quarterly Journal of Experimental Psychology, 17, 273-281.
A Gradient-Based Boosting Algorithm for Regression Problems

Richard S. Zemel    Toniann Pitassi
Department of Computer Science
University of Toronto

Abstract

In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a simple boosting algorithm. We prove that this method reduces training error, and compare its performance to other regression methods.

The aim of boosting algorithms is to "boost" the small advantage that a hypothesis produced by a weak learner can achieve over random guessing, by using the weak learning procedure several times on a sequence of carefully constructed distributions. Boosting methods, notably AdaBoost (Freund & Schapire, 1997), are simple yet powerful algorithms that are easy to implement and yield excellent results in practice. Two crucial elements of boosting algorithms are the way in which a new distribution is constructed for the learning procedure to produce the next hypothesis in the sequence, and the way in which hypotheses are combined to produce a highly accurate output. Both of these involve a set of parameters, whose values appeared to be determined in an ad hoc manner. Recently, boosting algorithms have been derived as gradient descent algorithms (Breiman, 1997; Schapire & Singer, 1998; Friedman et al., 1999; Mason et al., 1999). These formulations justify the parameter values as all serving to optimize a single common objective function. These optimization formulations of boosting, originally developed for classification problems, have recently been applied to regression problems. However, key properties of these regression boosting methods deviate significantly from the classification boosting approach. We propose a new boosting algorithm for regression problems, also derived from a central objective function, which retains these properties. In this paper, we describe the original boosting algorithm and summarize boosting methods for regression. We present our method and provide a simple proof that elucidates conditions under which convergence on training error can be guaranteed. We propose a probabilistic framework that clarifies the relationship between various optimization-based boosting methods. Finally, we summarize empirical comparisons between our method and others on some standard problems.

1 A Brief Summary of Boosting Methods

Adaptive boosting methods are simple modular algorithms that operate as follows. Let g : X → Y be the function to be learned, where the label set Y is finite, typically binary-valued. The algorithm uses a learning procedure, which has access to n training examples, {(x_1, y_1), …, (x_n, y_n)}, drawn randomly from X × Y according to distribution D; it outputs a hypothesis f : X → Y, whose error is the expected value of a loss function on f(x) and g(x), where x is chosen according to D. Given ε, δ > 0 and access to random examples, a strong learning procedure outputs, with probability 1 − δ, a hypothesis with error at most ε, with running time polynomial in 1/ε, 1/δ, and the number of examples.
A weak learning procedure satisfies the same conditions, but where ε need only be better than random guessing. Schapire (1990) showed that any weak learning procedure, denoted WeakLearn, can be efficiently transformed ("boosted") into a strong learning procedure. The AdaBoost algorithm achieves this by calling WeakLearn multiple times, in a sequence of T stages, each time presenting it with a different distribution over a fixed training set, and finally combining all of the hypotheses. The algorithm maintains a weight w_t^i for each training example i at stage t, and a distribution D_t is computed by normalizing these weights. The algorithm loops through these steps:

1. At stage t, the distribution D_t is given to WeakLearn, which generates a hypothesis f_t. The error rate ε_t of f_t w.r.t. D_t is:

ε_t = Σ_{i: f_t(x_i) ≠ y_i} w_t^i / Σ_{i=1}^n w_t^i

2. The new training distribution is obtained from the new weights:

w_{t+1}^i = w_t^i · (ε_t / (1 − ε_t))^{1 − |f_t(x_i) − y_i|}

After T stages, a test example x will be classified by a combined weighted-majority hypothesis: ŷ = sgn(Σ_{t=1}^T c_t f_t(x)). Each combination coefficient c_t = log((1 − ε_t)/ε_t) takes into account the accuracy of hypothesis f_t with respect to its distribution.

The optimization approach derives these equations as all minimizing a common objective function J, the expected error of the combined hypotheses, estimated from the training set. The new hypothesis is the step in function space in the direction of steepest descent of this objective. For example, if J = Σ_{i=1}^n exp(−Σ_t y_i c_t f_t(x_i)), then the cost after T rounds is the cost after T − 1 rounds times the cost of hypothesis f_T:

J(T) = Σ_{i=1}^n exp(−Σ_{t=1}^{T−1} y_i c_t f_t(x_i)) exp(−y_i c_T f_T(x_i))

so training f_T to minimize J(T) amounts to minimizing the cost on a weighted training distribution. Similarly, the training distribution is formed by normalizing the updated weights: w_{t+1}^i = w_t^i · exp(−y_i c_t f_t(x_i)) = w_t^i · exp(s_t^i c_t), where s_t^i = 1 if f_t(x_i) ≠ y_i, and s_t^i = −1 otherwise. Note that because the objective function J is multiplicative in the costs of the hypotheses, a key property follows: the objective for each hypothesis is formed simply by re-weighting the training distribution.

This boosting algorithm applies to binary classification problems, but it does not readily generalize to regression problems. Intuitively, regression problems present special difficulties because hypotheses may not just be right or wrong, but can be a little wrong or very wrong. Recently a spate of clever optimization-based boosting methods have been proposed for regression (Duffy & Helmbold, 2000; Friedman, 1999; Karakoulas & Shawe-Taylor, 1999; Rätsch et al., 2000). While these methods involve diverse objectives and optimization approaches, they are alike in that new hypotheses are formed not by simply changing the example weights, but instead by modifying the target values. As such they can be viewed as forms of forward stage-wise additive models (Hastie & Tibshirani, 1990), which produce hypotheses sequentially to reduce the residual error. We study a simple example of this approach, in which hypothesis T is trained not to produce the target output y_i on a given case i, but instead to fit the current residual r_T^i, where r_T^i = y_i − Σ_{t=1}^{T−1} c_t f_t(x_i). Note that this approach develops a series of hypotheses all based on optimizing a common objective, but it deviates from standard boosting in that the distribution of examples is not used to control the generation of hypotheses, and each hypothesis is not trained to learn the same function.
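For concreteness, the loop just summarized can be sketched as follows. This is illustrative only: labels are taken in {−1, +1}, weak_learn is a hypothetical stand-in for WeakLearn, and the exponent 1 − |f_t(x_i) − y_i| of the weight update is rendered as 1 minus a mistake indicator (its meaning for {0, 1}-valued outputs).

```python
import numpy as np

def adaboost(X, y, weak_learn, T):
    """Sketch of AdaBoost; y has entries in {-1, +1}.

    weak_learn(X, y, dist) must return a hypothesis h, callable on X,
    with outputs in {-1, +1}.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                           # example weights w_t
    hyps, coeffs = [], []
    for t in range(T):
        dist = w / w.sum()                            # training distribution D_t
        h = weak_learn(X, y, dist)
        wrong = (h(X) != y)
        eps = dist[wrong].sum()                       # weighted error eps_t on D_t
        if eps <= 0.0 or eps >= 0.5:                  # degenerate weak hypothesis: stop
            break
        w = w * (eps / (1.0 - eps)) ** (1.0 - wrong)  # down-weight correct examples
        hyps.append(h)
        coeffs.append(np.log((1.0 - eps) / eps))      # c_t = log((1 - eps_t)/eps_t)
    return lambda Xq: np.sign(sum(c * h(Xq) for c, h in zip(coeffs, hyps)))
```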
2 An Objective Function for Boosting Regression Problems

We derive a boosting algorithm for regression from a different objective function. This algorithm is similar to the original classification boosting method in that the objective is multiplicative in the hypotheses' costs, which means that the target outputs are not altered after each stage; rather, the objective for each hypothesis is formed simply by re-weighting the training distribution. The objective function is:

J_T = (1/n) Σ_{i=1}^n (Π_{t=1}^T c_t^{−1/2}) exp(Σ_{t=1}^T c_t (f_t(x_i) − y_i)^2)    (1)

Here, training hypothesis T to minimize J_T, the cost after T stages, amounts to minimizing the exponentiated squared error on a weighted training distribution:

Σ_{i=1}^n w_T^i (c_T^{−1/2} exp[c_T (f_T(x_i) − y_i)^2])

We update each weight by multiplying by its respective error, and form the training distribution for the next hypothesis by normalizing these updated weights. In the standard AdaBoost algorithm, the combination coefficient c_t can be analytically determined by solving ∂J_t/∂c_t = 0 for c_t. Unfortunately, one cannot analytically determine the combination coefficient c_t in our algorithm, but a simple line search can be used to find the value of c_t that minimizes the cost J_t. We limit c_t to be between 0 and 1. Finally, optimizing J with respect to ŷ produces a simple linear combination rule for the estimate: ŷ = Σ_t c_t f_t(x) / Σ_t c_t. We introduce a constant τ as a threshold used to demarcate correct from incorrect responses. This threshold is the single parameter of the algorithm that must be chosen in a problem-dependent manner. It is used to judge when the performance of a new hypothesis warrants its inclusion: ε_t = Σ_i p_t^i exp[(f_t(x_i) − y_i)^2 − τ] < 1. The algorithm can be summarized as follows:

New Boosting Algorithm

1. Input:
- training set examples (x_1, y_1), …, (x_n, y_n) with y ∈ ℝ;
- WeakLearn: a learning procedure that produces a hypothesis f_t(x) whose accuracy on the training set is judged according to J

2. Choose the initial distribution p_1(x_i) = p_1^i = w_1^i = 1/n

3. Iterate:
- Call WeakLearn — minimize J_t with distribution p_t
- Accept iff ε_t = Σ_i p_t^i exp[(f_t(x_i) − y_i)^2 − τ] < 1
- Set 0 ≤ c_t ≤ 1 to minimize J_t (using line search)
- Update the training distribution: w_{t+1}^i = w_t^i c_t^{−1/2} exp[c_t (f_t(x_i) − y_i)^2]; p_{t+1}^i = w_{t+1}^i / Σ_{j=1}^n w_{t+1}^j

4. Estimate the output y on input x: ŷ = Σ_t c_t f_t(x) / Σ_t c_t
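A minimal sketch of this algorithm follows; it is illustrative rather than definitive: weak_learn is a hypothetical stand-in for WeakLearn, the line search over c_t is a crude grid, and the exponentials assume targets scaled to a small range, as in our experiments.

```python
import numpy as np

def boost_regression(X, y, weak_learn, T, tau=0.1):
    """Sketch of the gradient-based boosting algorithm for regression.

    weak_learn(X, y, dist) returns a real-valued hypothesis f, callable on X.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                             # weights w_1 = 1/n
    fs, cs = [], []
    for t in range(T):
        p = w / w.sum()                                 # distribution p_t
        f = weak_learn(X, y, p)
        sq = (f(X) - y) ** 2
        eps = np.sum(p * np.exp(sq - tau))              # acceptance test
        if eps >= 1.0:
            break                                       # reject the new hypothesis
        grid = np.linspace(1e-3, 1.0, 200)              # line search for c_t in (0, 1]
        costs = [c ** -0.5 * np.sum(w * np.exp(c * sq)) for c in grid]
        c = grid[int(np.argmin(costs))]
        w = w * c ** -0.5 * np.exp(c * sq)              # re-weight the examples
        fs.append(f)
        cs.append(c)
    c_total = sum(cs)                                   # assumes >= 1 accepted hypothesis
    return lambda Xq: sum(c * f(Xq) for c, f in zip(cs, fs)) / c_total
```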
If f is the total error rate of the combination output, then: L w~+1 2:: L k:k T w~+1 2:: f(II C;1/2) exp(T L Ct) error T f:::; T T (L w~+l)(II ci/ 2) exp[T(T - L Ct)] :::; II ft exp[T(T - L Ct)] t=l i t t=l t Note that as in the binary AdaBoost theorem, there are no assumptions made here about ft, the error rate of individual hypotheses. If all ft = ,6. < I, then f < ,6.T exp[T(T - L:t Ct)], which is exponentially decreasing as long as Ct -+ 1. 4 Comparing the Objectives We can compare the objectives by adopting a probabilistic framework. We associate a probability distribution with the output of each hypothesis on input x, and combine them to form a consensus model M by multiplying the distributions: g(y lx, M) == TIt pt(ylx, (1d,where (1t are parameters specific to hypothesis t. If each hypothesis t produces a single output ft (x) and has confidence Ct assigned to it, then pt (y lx, (1t ) can be considered a Gaussian with mean It (x) and variance 1/ Ct 9 (y Ix, M) = k [If ci /2] exp [- ~ Ct (y - ft ( x ) ) 2] Model parameters can be tuned to maximize g(y*lx, M), where y* is the target for x; our objective (Eq. 1) is the expected value ofthe reciprocal of g(y* lx, M). An alternative objective can be derived by first normalizing g(y lx, M): TIt pt(ylx, (1d ( I M) = g(ylx , M) = pyx, - J,y,g(ylx,M) - J,y' TI tPt ( y'lx, (1t)dy' TIUs probability model underlies the product-of-experts model (Hinton, 2000) and the logarithmic opinion pool (Bordley, 1982).If we again assume pt(ylx, (1t) ~ N(ft(x), C;l)), thenp(y lx, M) is a Gaussian, with mean f(x) = L:?tft(X) t Ct and in- verse variance c = L:t Ct. The objective for this model is: JR=-logp(y* lx,M )=c[y*-f(x)f -~logc (2) TIUs objective corresponds to a type of residual-fitting algorithm. If r(x) [y* - f (x) ] , and {Ct} for t < T are assumed frozen, then training J R is achieved by using r (x) as a target. iT to minimize These objectives can be further compared w.r.t. a bias-variance decomposition (Geman et aI., 1992; Heskes, 1998). The main term in our objective can be re-expressed: L Ct [yO t ft(X)]2 =L t Ct [yO - f(x)] 2 + L Ct [ft(x) - f( x) ] 2 = bias+variance t Meanwhile, the main term of JR corresponds to the bias term. Hence a new hypothesis can minimize JR by having low error (ft (x) = y*), or with a deviant (ambiguous) response (ft(x) -=F f(x) (Krogh & Vedelsby, 1995). Thus our objective attempts to minimize the average error of the models, while the residual-fitting objective minimizes the error of the average model. 0 .065 0.35 0.06 0 .055 0.05 0.3 0.045 0.04 0.035 2 4 8 10 o.25 '------~ 2 ---4 c---~---8 ~-----c 10 Figure 1: Generalization results for our gradient-based boosting algorithm, compared to the residual-fitting and mixture-of-experts algorithms. Left: Test problem F1; Right: Boston housing data. Normalized mean-squared error is plotted against the number of stages of boosting (or number of experts for the mixture-of-experts). 5 Empirical Tests of Algorithm We report results comparing the performance of our new algorithm with two other algorithms. The first is a residual-fitting algorithm based on the J R objective (Eq. 2), but the coefficients are not normalized. The second algorithm is a version of the mixture-of-experts algorithm aacobs et al., 1991). Here the hypotheses (or experts) are trained simultaneously. In the standard mixture-of-experts the combination coefficients depend on the input; to make this model comparable to the others, we allowed each expert one input-independent, adaptable coefficient. 
We evaluate these algorithms on two problems. The first is the nonlinear prediction problem F1 (Friedman, 1991), which has 10 independent input variables uniform in [0, 1]:

y = 10 sin(π x1 x2) + 20 (x3 − 0.5)^2 + 10 x4 + 5 x5 + n

where n is a random variable drawn from a mean-zero, unit-variance normal distribution. In this problem, only five input variables (x1 to x5) have predictive value. We rescaled the target values y to be in [0, 3]. We used 400 training examples, and 100 validation and test examples. The second test problem is the standard Boston Housing problem. Here there are 506 examples and twelve continuous input variables. We scaled the input variables to be in [0, 1], and the outputs to be in [0, 5]. We used 400 of the examples for training, 50 for validation, and the remainder to test generalization.

We used neural networks as the hypotheses and back-propagation as the learning procedure to train them. Each network had a layer of tanh units between the input units and a single linear output. For each algorithm, we used early stopping with a validation set in order to reduce over-fitting in the hypotheses. One finding was that the other algorithms out-performed ours when the hypotheses were simple: when the weak learners had only one or two hidden nodes, the residual-fitting algorithm had lower test error. With more hidden nodes, the relative performance of our algorithm improved. Figure 1 shows average results for three-hidden-unit networks over 20 runs of each algorithm on the two problems, with examples randomly assigned to the three sets on each run. The results were consistent for different values of τ in our algorithm; here τ = 0.1. Overall, the residual-fitting algorithm exhibited more over-fitting than our method. Over-fitting in these approaches may be tempered: a regularization technique known as shrinkage, which scales the combination coefficients by a fractional parameter, has been found to improve generalization in gradient boosting applications to classification (Friedman, 1999). Finally, the mixture-of-experts algorithm generally out-performed the sequential training algorithms. A drawback of this method is the need to specify the number of hypotheses in advance; however, given that number, simultaneous training is likely less prone to local minima than the sequential approaches.

6 Conclusion

We have proposed a new boosting algorithm for regression problems. Like several recent boosting methods for regression, its parameters and updates can be derived from a single common objective. Unlike these methods, our algorithm forms new hypotheses by simply modifying the distribution over training examples. Preliminary empirical comparisons suggest that our method will not perform as well as a residual-fitting approach for simple hypotheses, but it works well for more complex ones, and it seems less prone to over-fitting. The lack of over-fitting in our method can be traced to the inherent bias-variance tradeoff, as new hypotheses are forced to resemble existing ones if they cannot improve the combined estimate. We are exploring an extension that brings our method closer to the full mixture-of-experts. The combination coefficients can be made input-dependent: a learner returns not only f_t(x_i) but also k_t(x_i) ∈ [0, 1], a measure of confidence in its prediction.
6 Conclusion

We have proposed a new boosting algorithm for regression problems. Like several recent boosting methods for regression, the parameters and updates can be derived from a single common objective. Unlike these methods, our algorithm forms new hypotheses by simply modifying the distribution over training examples. Preliminary empirical comparisons have suggested that our method will not perform as well as a residual-fitting approach for simple hypotheses, but it works well for more complex ones, and it seems less prone to over-fitting. The lack of over-fitting in our method can be traced to the inherent bias-variance tradeoff, as new hypotheses are forced to resemble existing ones if they cannot improve the combined estimate.

We are exploring an extension that brings our method closer to the full mixture-of-experts. The combination coefficients can be input-dependent: a learner returns not only f_t(x_i) but also k_t(x_i) ∈ [0, 1], a measure of confidence in its prediction. This elaboration makes the weak learning task harder, but may extend the applicability of the algorithm: letting each learner focus on a subset of its weighted training distribution permits a divide-and-conquer approach to function approximation.

References

[1] Bordley, R. (1982). A multiplicative formula for aggregating probability assessments. Management Science, 28, 1137-1148.
[2] Breiman, L. (1997). Prediction games and arcing classifiers. TR 504, Statistics Dept., UC Berkeley.
[3] Duffy, N. & Helmbold, D. (2000). Leveraging for regression. In Proceedings of COLT, 13.
[4] Freund, Y. & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119-139.
[5] Friedman, J. H. (1999). Greedy function approximation: A gradient boosting machine. TR, Dept. of Statistics, Stanford University.
[6] Friedman, J. H., Hastie, T., & Tibshirani, R. (1999). Additive logistic regression: A statistical view of boosting. Annals of Statistics, to appear.
[7] Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1-58.
[8] Hastie, T. & Tibshirani, R. (1990). Generalized Additive Models. Chapman and Hall.
[9] Heskes, T. (1998). Bias-variance decompositions for likelihood-based estimators. Neural Computation, 10, 1425-1433.
[10] Hinton, G. E. (2000). Training products of experts by minimizing contrastive divergence. GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, University College London.
[11] Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation, 3, 79-87.
[12] Karakoulas, G., & Shawe-Taylor, J. (1999). Towards a strategy for boosting regressors. In Advances in Large Margin Classifiers, Smola, Bartlett, Schölkopf & Schuurmans (Eds.).
[13] Krogh, A. & Vedelsby, J. (1995). Neural network ensembles, cross-validation, and active learning. In NIPS 7.
[14] Mason, L., Baxter, J., Bartlett, P., & Frean, M. (1999). Boosting algorithms as gradient descent in function space. In NIPS 11.
[15] Rätsch, G., Mika, S., Onoda, T., Lemm, S. & Müller, K.-R. (2000). Barrier boosting. In Proceedings of COLT, 13.
[16] Schapire, R. E. (1990). The strength of weak learnability. Machine Learning, 5, 197-227.
[17] Schapire, R. E. & Singer, Y. (1998). Improved boosting algorithms using confidence-rated predictions. In Proceedings of COLT, 11.
Bayes Networks on Ice: Robotic Search for Antarctic Meteorites

Liam Pedersen*, Dimi Apostolopoulos, Red Whittaker
Robotics Institute, Carnegie Mellon University
Pittsburgh, PA 15213
{pedersen+, dalv, red}@ri.cmu.edu

* http://www.cs.cmu.edu/~pedersen

Abstract

A Bayes network based classifier for distinguishing terrestrial rocks from meteorites is implemented onboard the Nomad robot. Equipped with a camera, spectrometer and eddy current sensor, this robot searched the ice sheets of Antarctica and autonomously made the first robotic identification of a meteorite, in January 2000 at the Elephant Moraine. This paper discusses rock classification from a robotic platform, and describes the system onboard Nomad.

1 Introduction

[Figure 1: Human meteorite search with snowmobiles on the Antarctic ice sheets, and on foot in the moraines.]

Antarctica contains the most fertile meteorite hunting grounds on Earth. The pristine, dry and cold environment ensures that meteorites deposited there are preserved for long periods. Subsequent glacial flow of the ice sheets where they land concentrates them in particular areas. To date, most of the meteorites recovered throughout history have been collected in Antarctica, within the last 20 years. Furthermore, meteorites found there are less likely to be contaminated by terrestrial compounds.

Meteorites are of interest to space scientists because, with the exception of the Apollo lunar samples, they are the sole source of extra-terrestrial material and a window on the early evolution of the solar system. The identification of Martian and lunar meteorite samples, and the (controversial) evidence of fossil bacteria in the former, underscores the importance of systematically retrieving as many samples as possible.

Currently, Antarctic meteorite samples are collected by human searchers, either on foot or on snowmobiles, who systematically search an area and retrieve samples according to strict protocols. In certain blue ice fields the only rocks visible are meteorites. At other places (moraines, areas where the ice flow brings rocks to the surface) searchers have to contend with many terrestrial rocks (Figure 1).

1.1 Robotic search for Antarctic meteorites

[Figure 2: Nomad robot, equipped with scientific instruments (color camera, reflectance spectrometer), investigates a rock in Antarctica.]

With the goal of autonomously searching for meteorites in Antarctica, Carnegie Mellon University has built and demonstrated [1] a robot, Nomad (Figure 2), capable of long duration missions in harsh environments. Nomad is equipped with a color camera on a pan-tilt platform to survey the ice for rocks and acquire close-up images of any candidate objects, and a manipulator arm to place the fiber optic probe of a specially designed visible light reflectance spectrometer over a sample. The manipulator arm can also place other sensors, such as a metal detector. The eventual goal, beyond Antarctic meteorite search, is to develop technologies for extended robotic exploration of remote areas, including planetary surfaces. One particular technology is the capacity to carry out autonomous science, including autonomous geology and the ability to recognize a broad range of rock types and note exceptions. Identifying meteorites amongst terrestrial rocks is the fundamental engineering problem of robotic meteorite search and is the topic addressed by the rest of this paper.

2 Bayes network rock and meteorite classifier

Classifying rocks from a mobile robotic vehicle entails several unique issues:
- The classifier must learn from examples. Human experts often have trouble explaining how they can identify many rocks, and will refer to an example. In the words of a veteran Antarctic meteorite searcher [2]: "First you find a few meteorites, then you know what to look for". A complication is the difficulty of acquiring large sets of training data under realistic field conditions. To date this has required two earlier expeditions to Antarctica, as well as visits to the Arctic and the Atacama desert in Chile. Therefore, it is necessary to constrain a classifier as much as possible with available prior knowledge, so that training can be accomplished with minimal data.

- The classifier must be able to accept incomplete data, and compound evidence for different hypotheses as more information becomes available. The robot has multiple sensors, and there is a cost associated with using each one. Sensors such as the spectrometer are particularly expensive to use because the robot must be maneuvered to bring the rock sample into the sensor manipulator workspace. Therefore, it is desirable that initial classifications be made using data from cheap long-range sensors, such as a color camera, before final verification using expensive sensors on promising rock samples. A corollary of this is that the classifier should accept prior evidence from other sources, such as an expert's knowledge of what to expect in a particular location.

- Rock classes are often ambiguous, and the distinctions between certain types are fuzzy at best [3]. The classifier must handle this ambiguity, and indicate several likely hypotheses if a definite classification cannot be achieved.

These requirements for a robotic rock classifier argue strongly in favor of a Bayes network based approach, which can satisfy them all. The intuitive graphical structure of a Bayes network makes it easier to encode physical constraints into the network topology, thus reducing the intrinsic dimensionality. Bayesian update is a principled way to compound evidence, and prior information is naturally represented by prior probabilities. Additionally, with a Bayes network it is simple to compute the likelihood of any new data, and thus conceivably recognize bad sensor readings. Furthermore, the network can be queried to estimate the information gain of further sensor readings, enabling active sensor selection.

2.1 Network architecture

The (simplified) network architecture for distinguishing rocks from meteorites, using features from sensor data, is shown in Figure 3. It is a compromise between a fully connected network (no constraints whatsoever, and computationally intractable) and a naive Bayes classifier (which can be efficiently evaluated, but lacks sufficient representational power). Sensor features are only weakly (conditionally) dependent on each other because of a careful choice of suitable features, and because of the intermediate node Rock-type, whose states include all possible rock and meteorite types likely to be encountered by the classifier. A complication is that the sensor features are continuous quantities, yet the Bayes network implementation can only handle discrete variables. Therefore the continuous variables need to be suitably quantized.

[Figure 3: Bayes network for discriminating meteorites and rocks based on features computed from sensor data. A Meteorite? node (true/false) is determined by the Rock/Meteorite type node, whose states include types such as iron meteorite and sandstone.]
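A minimal sketch (Python/NumPy) of how such a network classifies: quantized sensor features are conditionally independent given the intermediate Rock-type node, so the posterior over rock types, and hence P(meteorite), follows from Bayes' rule. The rock types, priors, and conditional probability tables below are illustrative placeholders, not the values used onboard Nomad; note that missing sensors simply contribute no likelihood term, which is how incomplete data is accepted.

```python
import numpy as np

rock_types   = ["iron meteorite", "chondrite", "sandstone", "basalt"]
prior        = np.array([0.05, 0.10, 0.50, 0.35])   # hypothetical priors
is_meteorite = np.array([True, True, False, False])

# cpm[f][t, b] = P(feature f falls in bin b | rock type t); made-up numbers
cpm = {
    "albedo":     np.array([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.4, 0.6]]),
    "peak_900nm": np.array([[0.1, 0.3, 0.6], [0.2, 0.5, 0.3],
                            [0.8, 0.15, 0.05], [0.3, 0.4, 0.3]]),
}

def type_posterior(observed):
    """observed: dict feature -> quantized bin; sensors may be omitted."""
    log_p = np.log(prior)
    for f, b in observed.items():
        log_p += np.log(cpm[f][:, b])
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

p = type_posterior({"albedo": 0, "peak_900nm": 2})  # dark rock, strong 900 nm peak
print(dict(zip(rock_types, p)), "P(meteorite) =", p[is_meteorite].sum())
```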
2.2 Sensors and feature vectors

[Figure 4: Example spectrum, with extracted features (strength of peak (+) or trough (-) at a given wavelength, over 400-1000 nm), and color images of rocks on ice. One of the rocks in the image is a meteorite.]

In Antarctica Nomad acquired reflectance spectra and color images (Figure 4) of sample rocks. Spectra are obtained by shining white light on the sample and analyzing the reflected light to determine the fraction of light reflected at a series of wavelengths. The relevant features in a spectrum, for the purpose of identifying rocks, are the presence, location and size of peaks and troughs in the spectrum (Figure 4), and the average magnitude (albedo) of the spectrum over certain wavelengths. Spectral troughs and peaks are detected by computing the correlation of the spectrum with a set of 10 templates over a finite region of support (50 nm). Restricting the degree of overlap between templates minimizes statistical dependencies between the resulting spectral features (Figure 3). Normalizing the correlation coefficients makes them (conditionally) independent of the average spectral intensity and robust to changes of scale (important because, in practice, when making a field measurement of a spectrum it is difficult to accurately determine the scale). A 13-element real-valued feature vector (each component corresponding to a sensor feature node in Figure 3) is thus obtained from the original 1000+ element spectrum.

Color images are harder to interpret (one of the rocks in Figure 4 is a meteorite). First the rock needs to be segmented from the background of snow and ice in the image, using a partially observable Markov model [4]. Features of interest are the rock cross-sectional area (used as a proxy for size, and requiring that the scaling of the images be known), average color, and simple texture and shape metrics [4]. Meteorites tend to be small and dark compared to terrestrial rocks. An 8-element real-valued feature vector is computed from each image. All real-valued features are quantized prior to being entered into the Bayes network, which cannot handle continuous quantities.

2.3 Network training

The conditional probability matrices (CPMs) describing the probability distributions of the network's sensor feature nodes given Rock type (and other parent nodes) are learned from examples (rock types along with the associated feature vectors derived from sensor readings on rock samples of the given type) using the algorithm in [5]. If X is a node (with N states) with parent Y, and with CPM P_ij = P(X=i | Y=j), then each column is represented by a Dirichlet distribution (initially uniform) and assumed independent of the others. If α_1, ..., α_N are the Dirichlet parameters for P(X | Y=j), then

$$\hat{p}_{ij} = \frac{\alpha_i}{\sum_k \alpha_k} \quad [6].$$

Given a new example {X=i, Y=j} with weight w, the Dirichlet parameters are updated: α_i ← α_i + w. This is a true Bayesian learning algorithm, and is stable. Furthermore, it is possible to weight each training sample to reflect its frequency of occurrence for the rock type that generated it. This is especially important if multiple sensor readings are taken from a single sample.
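A minimal sketch of this learning rule (Python/NumPy; the class and method names are illustrative). Each column of a CPM is a Dirichlet, started uniform and updated by adding the example weight to the count of the observed state:

```python
import numpy as np

class DirichletColumn:
    """One column of a CPM, P(X | Y=j): a Dirichlet over the N states
    of X, initially uniform (all alpha_i = 1)."""
    def __init__(self, n_states):
        self.alpha = np.ones(n_states)

    def update(self, i, w=1.0):
        self.alpha[i] += w        # alpha_i <- alpha_i + w

    def p(self):
        # p_ij = alpha_i / sum_k alpha_k
        return self.alpha / self.alpha.sum()

col = DirichletColumn(n_states=4)
col.update(i=2, w=0.5)   # a reading of state 2, down-weighted to 0.5
print(col.p())
```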
[Figure 5: Classifier rate-of-classification curves using laboratory data for training and testing (25% cross-validation), for different sensors (spectrum, image, and both); correct classification rate is plotted against false positives.]

The training data (gathered from previous Antarctic expeditions, and from US laboratory collections of meteorites and Antarctic rocks at the Johnson Space Center, Houston, and Ohio State University, Columbus) is insufficient to fully populate the (quantized) space on which the CPMs are defined, unless the real-valued feature nodes are very coarsely quantized. To avoid this, more spectral data was generated from each sample spectrum by adding random noise (generated by a non-linear spectrometer noise model) to it. (This is analogous to the approach used by [7] for training neural networks.)

Using meteorite and terrestrial rock data acquired in the lab, partitioned into 75% training and 25% testing cross-validation sets, the rate-of-classification (ROC) curves in Figure 5 are generated. Note the superior classification with spectra versus classification with color images only. In fact, given a spectrum, a color image does not improve classification. However, because it is easier to acquire color images than spectra, they are still useful as a sensor for preliminary screening.

3 Antarctica 2000 field results

In January 2000 the Nomad robot was deployed to the Elephant Moraine in Antarctica for robotic meteorite searching trials. Nomad searched areas known to contain meteorites, autonomously acquiring color images and reflection spectra of both native terrestrial rocks and meteorites, and classifying them. On January 22, 2000 Nomad successfully identified a meteorite amongst terrestrial rocks on the ice sheet (http://www.frc.ri.cmu.edu/projects/meteorobot2000/).

[Figure 6: Rate-of-classification curves for the Nomad robot searching for meteorites in Antarctica, 2000 A.D.; correct classification rate is plotted against false positive rate for the cases (i)-(iii) discussed below.]

Overall performance (using spectra only, due to a problem that developed with camera zoom control) is indicated by the ROC performance curves in Figure 6. These were generated from a test set of rocks and meteorites (40 and 4 samples respectively, with multiple readings of each) in a particular area of the moraine. Figure 6(i) uses the a priori classifier built from the lab data (used to generate Figure 5), acquired prior to arrival in Antarctica. Performance clearly does not match that in Figure 5. There is a notable improvement in (ii), the ROC curve for the same classifier further trained with field data acquired by the robot in the area (from 8 rocks and 2 meteorites not in this test set). Even with retraining, classification is systematically bad for a particular class of rocks (hydro-thermally altered dolerites and basalts) that occurred in the Elephant Moraine. These rocks are stained red with iron oxide (rust), whose spectrum has a very prominent peak at 900 nm, precisely where many meteorite spectra also have a peak.
This is not surprising, given that most meteorites contain metallic iron, and can therefore have rust on their surface. However, these rocks were absent from the initial training set and not initially expected in this area. Performance is much better if these rocks are removed from the test set (iii) and the retrained classifier is used.

4 Conclusions

With the caveat that training be continued using data acquired by the robot in the field, the Bayes network approach to robotic rock classification is a viable approach to this task. Nomad did autonomously identify several meteorites. However, in areas with hydro-thermally altered (iron-oxide stained) rocks, the reflection spectrometer must be supplemented by other sensors, such as metal detectors, magnetometers or more exotic spectrometers (thermal emission or Raman), obviously at greater cost.

Sensor noise and systematic effects, due to autonomous robot placement of sensors on samples in the unstructured and uncontrolled polar environment, are significant. They are hard to know a priori and need to be learned from data acquired by the robot in field conditions, as demonstrated by the significant improvement in classification achieved after field retraining. Further work needs to be done on selective sensor selection, active modeling of the local geographical distribution of rocks, and recognizing bad sensor readings, but indications are that this can be done in a principled way with the Bayes network classifier, and it will be addressed in future papers.

Acknowledgments

The authors gratefully acknowledge the invaluable assistance of Professor William Cassidy of the University of Pittsburgh, Professor Gunter Faure of Ohio State University, Marilyn Lindstrom and the staff at the Antarctic meteorite curation facility of NASA's Johnson Space Center, and Drs. Martial Hebert and Andrew Moore of Carnegie Mellon University. This work was funded by NASA, and supported in Antarctica by the National Science Foundation's Office of Polar Programs.

References

[1] D. Apostolopoulos, M. Wagner, W. Whittaker, "Technology and Field Demonstration Results in the Robotic Search for Antarctic Meteorites", Field and Service Robotics Conference, Pittsburgh, USA, 1999.
[2] W. Cassidy, University of Pittsburgh, Department of Geology, personal communication, 1997.
[3] R. Dietrich and B. Skinner, Rocks and Minerals, Wiley, 1979.
[4] L. Pedersen, D. Apostolopoulos, W. Whittaker, T. Roush, G. Benedix, "Sensing and Data Classification for Robotic Meteorite Search", Proceedings of SPIE Photonics East Conference, Boston, 1998.
[5] D. J. Spiegelhalter, A. P. Dawid, S. L. Lauritzen and R. G. Cowell, "Bayesian analysis in expert systems", Statistical Science, 8(3), pp. 219-283, 1993.
[6] A. Gelman, J. Carlin, H. Stern, D. Rubin, Bayesian Data Analysis, Chapman & Hall, 1995.
[7] D. Pomerleau, "Efficient Training of Artificial Neural Networks for Autonomous Navigation", Neural Computation, vol. 3, no. 1, pp. 88-97, 1991.
Who Does What? A Novel Algorithm to Determine Function Localization

Ranit Aharonov-Barki
Interdisciplinary Center for Neural Computation
The Hebrew University, Jerusalem 91904, Israel
ranit@alice.nc.huji.ac.il

Isaac Meilijson and Eytan Ruppin
School of Mathematical Sciences
Tel-Aviv University, Tel-Aviv, Israel
isaco@math.tau.ac.il, ruppin@math.tau.ac.il

Abstract

We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.

1 Introduction

Even simple nervous systems are capable of performing multiple and unrelated tasks, often in parallel. Each task recruits some elements of the system (be it single neurons or cortical areas), and often the same element participates in several tasks. This poses a difficult challenge when one attempts to identify the roles of the network elements, and to assess their contributions to the different tasks. Assessing the importance of single neurons or cortical areas to specific tasks is usually achieved either by assessing the deficit in performance after a lesion of a specific area, or by recording the activity during behavior, assuming that areas which deviate from baseline activity are more important for the task performed. These classical methods suffer from two fundamental flaws. First, they do not take into account the probable case that there are complex interactions among elements in the system; e.g., if two neurons have a high degree of redundancy, lesioning of either one alone will not reveal its influence. Second, they are qualitative measures, lacking quantitative predictions. Moreover, the very nature of the contribution of a neural element is quite elusive and ill defined. In this paper we propose both a rigorous, operative definition for the neuron's contribution and a novel algorithm to measure it.

Identifying the contributions of elements of a system to varying tasks is often used as a basis for claims concerning the degree of the distribution of computation in that system (e.g. [1]). The distributed representation approach hypothesizes that computation emerges from the interaction between many simple elements, and is supported by evidence that many elements are important in a given task [2, 3, 4]. The local representation hypothesis suggests that activity in single neurons represents specific concepts (the grandmother cell notion) or performs specific computations (see [5]). This question of distributed versus localized computation in nervous systems is fundamental and has attracted ample attention. However, there seems to be a lack of a unifying definition for these terms [5].
The ability of the new algorithm suggested here to quantify the contribution of elements to tasks allows us to deduce both the distribution of the different tasks in the network and the degree of specialization of each neuron.

We applied the Performance Prediction Algorithm (PPA) to two models of recurrent neural networks. The first model is a network hand-crafted to exhibit redundancy, feedback and modulatory effects. The second consists of evolved neurocontrollers for behaving autonomous agents [6]. In both cases the algorithm results in measures which are highly consistent with what is qualitatively known a priori about the models. The fact that these are recurrent networks, and not simple feed-forward ones, suggests that the algorithm can be used in many classes of neural systems which pose a difficult challenge for existing analysis tools. Moreover, the proposed algorithm is scalable and applicable to the analysis of large neural networks. It can thus make a major contribution to studying the organization of tasks in biological nervous systems as well as to the long-debated issue of local versus distributed computation in the brain.

2 Indices of Contribution, Localization and Specialization

2.1 The Contribution Matrix

Assume a network (either natural or artificial) of N neurons performing a set of P different functional tasks. For any given task, we would like to find the contribution vector c = (c_1, ..., c_N), where c_i is the contribution of neuron i to the task in question. We suggest a rigorous and operative definition for this contribution vector, and propose an algorithm for its computation.

Suppose a set of neurons in the network is lesioned and the network then performs the specified task. The result of this experiment is described by the pair <m, p_m>, where m is an incidence vector of length N, such that m(i) = 0 if neuron i was lesioned and 1 if it was intact. p_m is the performance of the network divided by the baseline case of a fully intact network. Let the pair <f, c>, where f is a smooth monotone non-decreasing function and c a normalized column vector such that

$$\sum_{i=1}^{N} |c_i| = 1,$$

be the pair which minimizes the following error function:

$$E = \frac{1}{2^N} \sum_{\{m\}} \big[\, f(m \cdot c) - p_m \,\big]^2 \qquad (1)$$

(It is assumed that as more important elements are lesioned, m·c decreases and the performance p_m decreases; hence the postulated monotonicity of f.) This c will be taken as the contribution vector for the task tested, and the corresponding f will be called its adjoint performance prediction function. Given a configuration m of lesioned and intact neurons, the predicted performance of the network is the sum of the contribution values of the intact neurons (m·c), passed through the performance prediction function f. The contribution vector c accompanied by f is optimal in the sense that this predicted value minimizes the mean square error relative to the real performance, over all possible lesion configurations.

The computation of the contribution vectors is done separately for each task, using some small subset of all the 2^N possible lesioning configurations. The training error E_t is defined as in Equation 1, but averaging only over the configurations present in the training set.
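A minimal sketch of these quantities (Python/NumPy; the function names are illustrative, not from the paper): the predicted performance of a lesion configuration is f(m·c), and the training error averages the squared prediction error over the configurations actually measured.

```python
import numpy as np

def predicted_performance(m, c, f):
    """f(m . c): m is a 0/1 incidence vector (1 = intact neuron),
    c the contribution vector, f the performance prediction function."""
    return f(m @ c)

def training_error(M, p, c, f):
    """E_t of Eq. 1, averaged over the training configurations only:
    rows of M are incidence vectors, p the measured performances."""
    return 0.5 * np.mean((f(M @ c) - p) ** 2)
```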
Step 2: Compute f. Given the current c, perform isotonic regression [7] on the pairs < m . c,Pm > in the training set. Use a smoothing spline [8] on the result of the regression to obtain the new f . ? Step 3: Compute c. Using the current f compute new c values by training a perceptron with input m, weights c and transfer function f. The output of the perceptron is exactly f(m . c), and the target output is Pm. Hence training the perceptron results in finding a new vector c, that given the current function f, minimizes the error E t on the training set. Finally re-normalize c. The output of the algorithm is thus a contribution value for every neuron, accompanied by a function, such that given any configuration of lesioned neurons, one can predict with high confidence the performance of the damaged network. Thus, the algorithm achieves two important goals: a) It identifies automatically the neurons or areas which participate in a cognitive or behavioral task. b) The function f predicts the result of multiple lesions, allowing for non linear combinations of the effects of single lesions 2 . The application of the PPA to all tasks defines a contribution matrix C, whose kth column (k = L.P) is the contribution vector computed using the above algorithm for task k, i.e. Cik is the contribution of neuron i to task k. 2.2 Localization and Specialization Introducing the contribution matrix allows us to approach issues relating to the distribution of computation in a network in a quantitative manner. Here we suggest quantitative measures for localization of function and specialization of neurons. If a task is completely distributed in the network, the contributions of all neurons to that task should be identical (full equipotentiality [2]). Thus, we define the localization Lk of task k as a deviation from equipotentiality. Formally, Lk is the standard deviation of column k of the contribution matrix divided by the maximal possible standard deviation. L _ k - t, std(C*k) J(N _ 1)jN2 (2) 2Tbe computation of involving a simple perceptron-based function approximation, implies the immediate applicability of the PPA for large networks, given weB-behaved performance prediction functions. (a) (b) Neuron 7 Neuron 8 ":0 ":0 01230123 Figure 1: Hand-crafted neural network: a) Architecture of the network. Solid lines are weights, all of strength 1. Dashed lines indicate modulatory effects. Neurons 1 through 6 are spontaneously active (activity equals 1) under normal conditions. The performance of the network is taken to be the activity of neuron 10. b) The activation functions of the nonspontaneous neurons. The x axis is the input field and the y axis is the resulting activity of the neuron. Neuron 8 has two activation functions. If both neurons 2 and 3 are switched on they activate a modulating effect on neuron 8 which switches its activation function from the inactive case to the active case. ? Note that Lk is in the range [0,1] where Lk = indicates full distribution and Lk = 1 indicates localization of the task to one neuron alone. The degree of localization of function in the whole network, L, is the simple average of Lk over all tasks. Similarly, if neuron i is highly specialized for a certain task, Ci * will deviate strongly from a uniform distribution, and thus we define Si, the specialization of neuron i as (3) 3 Results We tested the proposed index on two types of recurrent networks. 
We chose to study recurrent networks because they pose an especially difficult challenge, as the output units also participate in the computation, and in general complex interactions among elements may arise3 . We begin with a hand-crafted example containing redundancy, feedback and modulation, and continue with networks that emerge from an evolutionary process. The evolved networks are not hand-crafted but rather their structure emerges as an outcome of the selection pressure to successfully perform the tasks defined. Thus, we have no prior knowledge about their structure, yet they are tractable models to investigate. 3.1 Hand-Crafted Example Figure 1 depicts a neural network we designed to include potential pitfalls for analysis procedures aimed at identifying important neurons of the system (see details in the caption). Figure 2(a) shows the contribution values computed by three methods applied to this network. The first estimation was computed as the correlation between the activity of the 3In order to single out the role of output units in the computation, lesioning was performed by decoupling their activity from the rest of the network and not by knocking them out completely. (a) _ _ 0.3 o ActiVity Smgle leSions PPA 0.25 0.2 (b) ~:: Correlation: 0.9978 ~' 0.7 ~6 " " If " 0 .5 0.15 0 .1 ,-,,-,'-"'" ~: 0.1 /,/'({ go'! "-"") '* ", o :f;" o x 0.2 0.4 0.6 Actual Performance 0.8 Figure 2: Results of the PPA: a) Contribution values obtained using three methods: The correlation of activity to performance, single neuron lesions, and the PPA. b) Predicted versus actual performance using c and its adjoint performance prediction function f obtained by the PPA. Insert: The shape of f. neuron and the performance of the network4 . To allow for comparison between methods these values were normalized to give a sum of 1. The second estimation was computed as the decrease in performance due to lesioning of single neurons. Finally, we used the PPA, training on a set of 64 examples. Note that as expected the activity correlation method assigns a high contribution value to neuron 9, even though it actually has no significance in determining the performance. Single lesions fail to detect the significance of neurons involved in redundant interactions (neurons 4 - 6). The PPA successfully identifies the underlying importance of all neurons in the network, even the subtle significance of the feedback from neuron 10. We used a small training set (64 out of 210 configurations) containing lesions of either small (up to 20% chance for each neuron to be lesioned) or large (more than 90% chance of lesioning) degree. Convergence was achieved after 10 iterations. As opposed to the two other methods, the PPA not only identifies and quantifies the significance of elements in the network, but also allows for the prediction of performances from multi-element lesions, even if they were absent from the training set. The predicted performance following a given configuration of lesioned neurons is given by f(m . c) as explained in section 2.1. Figure 2(b) depicts the predicted versus actual performances on a test set containing 230 configurations of varying degrees (0 - 100% chance of lesioning). The correlation between the predicted value and the actual one is 0.9978, corresponding to a mean prediction error of only 0.0007. 
In principle, the other methods do not give the possibility to predict the performance in any straightforward way, as is evident from the non-linear form of the performance prediction error (see insert of figure 2(b?. The shape of the performance prediction function depends on the organization of the network, and can vary widely between different models (results not shown here). 3.2 Evolved Neurocontrollers Using evolutionary simulations we developed autonomous agents controlled by fully recurrent artificial neural networks. High performance levels were attained by agents performing simple life-like tasks of foraging and navigation. Using various analysis tools we found a common structure of a command neuron switching the dynamics of the network between 4Neuron 10 was omitted in this method of analysis since it is by definition in full correlation with the performance. radically different behavioral modes [6]. Although the command neuron mechanism was a robust phenomenon, the evolved networks did differ in the role other neurons performed. When only limited sensory information was available, the command neuron relied on feedback from the motor units. In other cases no such feedback was needed, but other neurons performed some auxiliary computation on the sensory input. We applied the PPA to the evolved neurocontrollers in order to test its capabilities in a system on which we have previously obtained qualitative understanding, yet is still relatively complex. Figure 3 depicts the contribution values of the neurons of three successful evolved neurocontrollers obtained using the PPA. Figure 3(a) corresponds to a neurocontroller of an agent equipped with a position sensor (see [6] for details), which does not require any feedback from the motor units. As can be seen these motor units indeed receive contribution values of near zero. Figures 3(b) and 3(c) correspond to neurocontrollers who strongly relied on motor feedback for their memory mechanism to function properly. The algorithm easily identifies their significance. In all three cases the command neuron receives high values as expected. The performance prediction capabilities are extremely high, giving correlations of 0.9999, 0.9922 and 0.9967 for the three neurocontrollers, on a test set containing 100 lesion configurations of mixed degrees (0 - 100% chance of lesioning). We also obtained the degree of localization of each network, as explained in section 2.2. The values are: 0.56, 0.35 and 0.47 for the networks depicted in figures 3(a) 3(b) and 3(c) respectively. These values are in good agreement with the qualitative descriptions of the networks we have obtained using classical neuroscience tools [6]. (a) (b) (c) CN 045 045 D4 04 ~D35 Cl 035 ~ " g 03 ~025 ~ 03 i CN 025 ~ 02 i 8015 ? 015 D1 005 o 1 ~3 4 5 slll!l Neuron Number eN 02 D1 I. MotorUmts Motor Units DDS B 9 10 1 2 :3 4 5 6 7 Neuron Number 8 9 10 1 3 5 .. 7 9 ,_..-. 11 1315 17 19 21 NelronNumber Figure 3: Contribution values of neurons in three evolved neurocontrollers: Neurons 1-4 are motor neurons. eN is the command neuron that emerged spontaneously in all evolutionary runs. 4 Discussion We have introduced a novel algorithm termed PPA (Performance Prediction Algorithm) to measure the contribution of neurons to the tasks that a neural network performs. These contributions allowed us to quantitatively define an index of the degree of localization of function in the network, as well as for task-specialization of the neurons. 
The algorithm uses data from performance measures of the network when different sets of neurons are lesioned. Theoretically, pathological cases can be devised where very large training sets are needed for correct estimation. However it is expected that many cases are well-behaved and will demonstrate behaviors similar to the models we have used as test beds, i.e. that a relatively small subset suffices as a training set. It is predicted that larger training sets containing different degrees of damage will be needed to achieve good results for systems with higher redundancy and complex interactions. We are currently working on studying the nature of the training set needed to achieve satisfying results, as this in itself may reveal information on the types of interactions between elements in the system. We have applied the algorithm to two types of artificial recurrent neural networks, and demonstrated that it results in agreement with our qualitative a-priori notions and with qualitative classical analysis methods. We have shown that estimation of the importance of system elements using simple activity measures and single lesions, may be misleading. The new PPA is more robust as it takes into account interactions of higher degrees. Moreover it serves as a powerful tool for predicting damage caused by multiple lesions, a feat that is difficult even when one can accurately estimate the contributions of single elements. The shape of the performance prediction function itself may also reveal important features of the organization of the network, e.g. its robustness to neuronal death. The prediction capabilities of the algorithm can be used for regularization of recurrent networks. Regularization in feed-forward networks has been shown to improve performance significantly, and algorithms have been suggested for effective pruning [9]. However, networks with feedback (e.g. Elman-like networks) pose a difficult problem, as it is hard to determine which elements should be pruned. As the PPA can be applied on the level of single synapses as well as single neurons, it suggests a natural algorithm for effective regularization, pruning the elements by order of their contribution values. Recently a large variety of reversible inactivation techniques (e.g. cooling) have emerged in neuroscience. These methods alleviate many of the problematic aspects of the classical lesion technique (ablation), enabling the acquisition of reliable data from multiple lesions of different configurations (for a review see [10]). It is most likely that a plethora of data will accumulate in the near future. The sensible integration of such data will require quantitative methods, to complement the available qualitative ones. The promising results achieved with artificial networks and the potential scalability of the PPA lead us to believe that it will prove extremely useful in obtaining insights into the organization of natural nervous systems. Acknowledgments We greatly acknowledge the valuable contributions made by Ehud Lehrer, Hanoch Gutfreund and Tuvik Beker . References [1] J. Wu, L. B. Cohen, and C. X. Falk. Neuronal activity during different behaviors in aplysia: A distributed organiation? Science, 263:820--822, 1994. [2] K. S. Lashley. Brain Mechanisms in Intelligence. University of Chicago Press, Chicago, 1929. [3] J. L. McClelland, elhart D.E. Ru, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2: Psychological and Biological Models. 
MIT Press, Massachusetts, 1986. [4] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79:2554-2558, 1982. [5] S. Thorpe. Localized versus distributed representations. In M. A. Arbib, editor, Handbook of Brain Theory and Neural Networks. MIT Press, Massachusetts, 1995. [6] R. Aharonov-Barki, T. Beker, and E. Ruppin. Emergence of memory-driven command neurons in evolved artificial agents. Neural Computation, In print. [7] R. Barlow, D. Bartholemew, J. Bremner, and H. Brunk. Statistical Inference Under Order Restrictions. John Wiley, New York, 1972. [8] G. D. Knott. Interpolating cubic splines. In National Science Foundation J. C. Chemiavsky, editor, Progress in Computer Science and Applied Logic. Birkhauser, 2000. [9] R. Reed. Prunning algorithms - a survey. IEEE Trans. on Neural Networks , 4(5):740--747, 1993. [10] S. G. Lomber. The advantages and limitations of permanent or reversible deactivation techniques in the assessment of neural function. 1. of Neuroscience Methods, 86: 109- 117, 1999.
PROGRAMMABLE SYNAPTIC CHIP FOR ELECTRONIC NEURAL NETWORKS

A. Moopenn, H. Langenbacher, A.P. Thakoor, and S.K. Khanna
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91009

ABSTRACT

A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of "long channel" NMOSFET binary connection elements implemented in a 3-um bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a "cascadable" building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of a synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

INTRODUCTION

The highly parallel and distributive architecture of neural networks offers potential advantages in fault-tolerant and high speed associative information processing. For the past few years, there has been a growing interest in developing electronic hardware to investigate the computational capabilities and application potential of neural networks as well as their dynamics and collective properties [1-5]. In an electronic hardware implementation of neural networks [6,7], the neurons (analog processing units) are represented by threshold amplifiers and the synapses linking the neurons by a resistive connection network. The synaptic strengths between neurons (the electrical resistance of the connections) represent the stored information or the computing function of the neural network. Because of the massive interconnectivity of the neurons and the large number of interconnects required with the increasing number of neurons, implementation of a synaptic network using current LSI/VLSI technology can become very difficult. A synaptic network based on a multi-chip architecture would lessen this difficulty. We have designed, fabricated, and successfully tested CMOS-based programmable synaptic chips which could serve as basic "cascadable" building blocks for a multi-chip electronic neural network. The synaptic chips feature complete programmability of 1024 (32X32) binary synapses. Since the neurons are kept off-chip, the synaptic chips can be connected in parallel, to obtain multiple grey levels of the connection strengths, as well as "cascaded" to form larger synaptic arrays for an expansion to a 512-neuron system in a feedback or feed-forward architecture. As a research tool, such a system would offer a significant speed improvement over conventional software-based neural network simulations, since convergence times for the parallel hardware system would be significantly smaller. In this paper, we describe the basic design and operation of synaptic CMOS chips incorporating MOSFETs as binary connection elements. The design and fabrication of synaptic test chips with tailored thin film resistors as ballast resistors for controlling power dissipation are also described. Finally, we describe a synaptic chip-based 32-neuron breadboard system in a feedback configuration and discuss its performance in an associative memory test application.
BINARY SYNAPTIC CMOS CHIP WITH MOSFET CONNECTION ELEMENTS

There are two important design requirements for a binary connection element in a high density synaptic chip. The first requirement is that the connection in the ON state should be "weak" to ensure low overall power dissipation. The required degree of "weakness" of the ON connection largely depends on the synapse density of the chip. If, for example, a synapse density larger than 1000 per chip is desired, the dynamic resistance of the ON connection should be greater than about 100 K-ohms. The second requirement is that, to obtain grey scale synapses with up to four bits of precision from binary connections, the consistency of the ON state connection resistance must be better than +/-5 percent, to ensure proper threshold operation of the neurons. Both requirements are generally difficult to satisfy simultaneously in conventional VLSI CMOS technology. For example, doped-polysilicon resistors could be used to provide the weak connections, but they are difficult to fabricate with a resistance uniformity of better than 5 percent. We have used NMOSFETs as connection elements in a multi-chip synaptic network. By designing the NMOSFETs with long channels, both the required high uniformity and high ON state resistance have been obtained.

A block diagram of a binary synaptic test chip incorporating NMOSFETs as programmable connection elements is shown in Fig. 1. A photomicrograph of the chip is shown in Fig. 2. The synaptic chip was fabricated through MOSIS (MOS Implementation Service) in a 3-micron, bulk CMOS, two-level metal, P-well technology. The chip contains 1024 synaptic cells arranged in a 32X32 matrix configuration. Each cell consists of a long channel NMOSFET connected in series with another NMOSFET serving as a simple ON/OFF switch. The state of the FET switch is controlled by the output of a latch which can be externally addressed via the ROW/COL address decoders. The 32 analog input lines (from the neuron outputs) and 32 analog output lines (to the neuron inputs) allow a number of such chips to be connected together to form larger connection matrices with up to 4-bit planes. The long channel NMOSFET can function as either a purely resistive or a constant current source connection element, depending on whether analog or binary output neurons are used.

Figure 1. Block diagram of a 32X32 binary synaptic chip with long channel NMOSFETs as connection elements.
Figure 2. Photomicrographs of a 32X32 binary connection CMOS chip. The blowup on the right shows several synaptic cells; the "S"-shape structures are the long-channel NMOSFETs.

As a resistive connection, the NMOSFET must operate in the linear region of the transistor's drain I-V characteristics. In the linear region, the channel resistance is approximately given by [8] R_ON = (1/K)(L/W)(V_G - V_TH)^(-1). Here, K is a proportionality constant which depends on process parameters, L and W are the channel length and width respectively, V_G is the gate voltage, and V_TH is the threshold voltage. The transistor acts as a linear resistor provided the voltage across the channel is much less than the difference of the gate and threshold voltages, which thus dictates the operating voltage range of the connection.
The NMOSFETs presently used in our synaptic chip design have a channel length of 244 microns and a width of 12 microns. At a gate voltage of 5 volts, a channel resistance of about 200 K-ohms was obtained over an operating voltage range of 1.5 volts. The consistency of the transistor I-V characteristics has been verified to be within +/-3 percent in a single chip and +/-5 percent for chips from different fabrication runs. In the latter case, the transistor characteristics in the linear region can be further matched to within +/-3 percent by fine adjustment of their common gate bias. With two-state neurons, current source connections may be used by operating the transistor in the saturation mode. Provided the voltage across the channel is greater than (V_G - V_TH), the transistor behaves almost as a constant current source with the saturation current given approximately by [8] I_ON = K(W/L)(V_G - V_TH)^2. With the appropriate selection of L, W, and V_G, it is possible to obtain ON-state currents which vary by two orders of magnitude in value. Figure 3 shows a set of measured I-V curves for an NMOSFET with the channel dimensions L = 244 microns and W = 12 microns and applied gate voltages from 2 to 4.5 volts. To ensure constant current source operation, the neuron's ON-state output should be greater than 3.5 volts. A consistency of the ON-state currents to within +/-5 percent has similarly been observed in a set of chip samples. With current source connections, therefore, quantized grey scale synapses with up to 16 grey levels (4 bits) can be realized using a network of binary weighted current sources.

Figure 3. I-V characteristics of an NMOSFET connection element. Channel dimensions: L = 244 um, W = 12 um.

For proper operation of the NMOSFET connections, the analog output lines (to neuron inputs) should always be held close to ground potential. Moreover, the voltages at the analog input lines must be at or above ground potential. Since the current normally flows from the analog input to the output, the NMOSFETs may be used as either all excitatory or all inhibitory type connections. However, the complementary connection function can be realized using long channel PMOSFETs in series with PMOSFET switches. For a PMOSFET connection, the voltage of an analog input line would be at or below ground. Furthermore, due to the difference in the mobilities of electrons and holes in the channel, a PMOSFET used as a resistive connection has a channel resistance about twice as large as an NMOSFET with the same channel dimensions. This fact results in a substantial reduction in the size of the PMOSFET needed.
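As a quick numerical illustration of the two operating regimes above, the following sketch (not from the paper) evaluates R_ON and I_ON for the quoted 244 um x 12 um geometry. The process constant K and the threshold voltage V_TH are assumed values, chosen to be plausible for a 3-um process and to roughly reproduce the reported 200 K-ohm channel resistance; the paper does not list them.

```python
# Hedged sketch: K and V_TH below are assumptions, not values from the paper.
L_over_W = 244.0 / 12.0      # channel length/width ratio
K = 25e-6                    # A/V^2, assumed process transconductance
V_G = 5.0                    # gate voltage (volts)
V_TH = 1.0                   # assumed threshold voltage (volts)

# Linear (triode) region: R_ON = (1/K)(L/W)(V_G - V_TH)^(-1)
R_on = (1.0 / K) * L_over_W / (V_G - V_TH)

# Saturation region: I_ON = K (W/L) (V_G - V_TH)^2
I_on = K * (1.0 / L_over_W) * (V_G - V_TH) ** 2

print(f"R_ON ~ {R_on / 1e3:.0f} K-ohms")   # ~200 K-ohms, as reported
print(f"I_ON ~ {I_on * 1e6:.1f} uA")       # saturation current for these values
```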
THIN FILM RESISTOR CONNECTIONS

The use of MOSFETs as connection elements in a CMOS synaptic matrix chip has the major advantage that the complete device can be readily fabricated in a conventional CMOS production run. However, the main disadvantages are the large area (required for the long channel) of the MOSFET connections and their non-symmetric inhibitory/excitatory functional characteristics. The large overall gate area not only substantially limits the number of synapses that can be fabricated on a single chip, but the transistors are also more susceptible to processing defects, which can lead to excessive gate leakage and thus reduce chip yield considerably. An alternate approach is simply to use resistors in place of MOSFETs. We have investigated one such approach, where thin film resistors are deposited on top of the passivation layer of CMOS-processed chips as an additional special processing step after the normal CMOS fabrication run. With an appropriate choice of resistive materials, a dense array of resistive connections with highly uniform resistance of up to 10 M-ohms appears feasible. Several candidate materials, including a cermet based on platinum/aluminum oxide, and amorphous semiconductor/metal alloys such as a-Ge:Cu and a-Ge:Al, have been examined for their applicability as thin film resistor connections. These materials are of particular interest since their resistivity can easily be tailored in the desired semiconducting range of 1-10 ohm-cm by controlling the metal content [9]. The a-Ge/metal films are deposited by thermal evaporation of presynthesized alloys of the desired composition in high vacuum, whereas platinum/aluminum oxide films are deposited by co-sputtering from platinum and aluminum oxide targets in a high purity argon and oxygen gas mixture. Room temperature resistivities in the 0.1 to 100 ohm-cm range have been obtained by varying the metal content in these materials. Other factors which would also determine their suitability include their device processing and material compatibilities and their stability with time, temperature, and extended application of normal operating electric current. The temperature coefficient of resistance (TCR) of these materials at room temperature has been measured to be in the 2000 to 6000 ppm range. Because of their relatively high TCRs, the need for weak connections to reduce the effect of localized heating is especially important here. The a-Ge/metal alloy films are observed to be relatively stable with exposure to air for temperatures below 130 C. The platinum/aluminum oxide films stabilize with time after annealing in air for several hours at 130 C.

Sample test arrays of thin film resistors based on the described materials have been fabricated to test their consistency. The resistors, with a nominal resistance of 1 M-ohm, were deposited on a glass substrate in a 40X40 array over a 0.4 cm by 0.4 cm area. Variation in the measured resistance in these test arrays has been found to be from +/-2-5 percent for all three materials. Smaller test arrays of a-Ge:Cu thin film resistors on CMOS test chips have also been fabricated. A photomicrograph of a CMOS synaptic test chip containing a 4X4 array of a-Ge:Cu thin film resistors is shown in Fig. 4. Windows in the passivation layer of silicon nitride (SiN) were opened in the final processing step of a normal CMOS fabrication run to provide access to the aluminum metal for electrical contacts. A layer of resistive material was deposited and patterned by lift-off. A layer of buffer metal of platinum or nickel was then deposited by RF sputtering and also patterned by lift-off. The buffer metal pads serve as conducting bridges for connecting the aluminum electrodes to the thin film resistors. In addition to providing a reliable ohmic contact to the aluminum and resistor, the buffer metal also provides conformal step coverage over the silicon nitride window edge. The resistor elements on the test chip are 100 microns long and 10 microns wide, with a thickness of about 1500 angstroms and a nominal resistance of 250 K-ohms. Resistance variations from 10-20 percent have been observed in several such test arrays. The unusually large variation is largely due to the surface roughness of the chip passivation layer. As one possible solution, a thin spin-on coating of an insulating material such as polyimide, to smooth out the surface of the passivation layer prior to depositing the resistors, is under investigation.

Figure 4. Photomicrographs of a CMOS synaptic test chip with a 4X4 array of a-Ge:Cu thin film resistors. The nominal resistance was 250 K-ohms.
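As a consistency check (again, not from the paper), the resistivity implied by the quoted 250 K-ohm resistor geometry follows from rho = R*W*t/L; it lands inside the 0.1-100 ohm-cm range reported above for these materials.

```python
# Implied film resistivity for the 100 um x 10 um x 1500 angstrom resistor.
R = 250e3        # ohms, nominal resistance
L = 100e-6       # m, resistor length
W = 10e-6        # m, resistor width
t = 1500e-10     # m, film thickness (1500 angstroms)

rho = R * W * t / L                       # ohm*m
print(f"rho ~ {rho * 100:.2f} ohm-cm")    # ~0.38 ohm-cm, within 0.1-100 ohm-cm
```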
SYNAPTIC CHIP-BASED 32-NEURON BREADBOARD SYSTEM

A 32-neuron breadboard system utilizing an array of discrete neuron electronics has been fabricated to evaluate the operation of 32X32 binary synaptic CMOS chips with NMOSFET connection elements. Each neuron consists of an operational amplifier configured as a current-to-voltage converter (with virtual ground input) followed by a fixed-gain voltage difference amplifier. The overall time constant of the neurons is approximately 10 microseconds. The neuron array is interfaced directly to the synaptic chip in a full feedback configuration. The system also contains prompt electronics consisting of a programmable array of RC discharging circuits with a relaxation time of approximately 5 microseconds. The prompt hardware allows the neuron states to be initialized by precharging the selected capacitors in the RC circuits. A microcomputer interfaced to the breadboard system is used for programming the synaptic matrix chip, controlling the prompt electronics, and reading the neuron outputs.

The stability of the breadboard system is tested in an associative memory feedback configuration. A dozen random dilute-coded binary vectors V^s are stored using the following simplified outer-product storage scheme: T_ij = -1 if sum_s V_i^s V_j^s = 0, and T_ij = 0 otherwise. In this scheme, the feedback matrix consists of only inhibitory (-1) or open (0) connections. The neurons are set to be normally ON and are driven OFF when inhibited by another neuron via the feedback matrix. The system exhibits excellent stability and associative recall performance. Convergence to the nearest stored memory in Hamming distance is always observed for any given input cue. Figure 5 shows some typical neuron output traces for a given test prompt and a set of stored memories. The top traces show the response of two neurons that are initially set ON; the bottom traces, of two other neurons initially set OFF. Convergence times of 10-50 microseconds have been observed, depending on the prompt conditions, but are primarily governed by the speed of the neurons.

Figure 5. Typical neuron response curves for a test prompt input. (Horizontal scale: 10 microseconds per division.)
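The storage rule and the "normally ON, driven OFF by inhibition" dynamics are simple enough to summarize in software. The sketch below is a rough illustration under assumed parameters (32 units, 12 dilute-coded memories, a zero firing threshold), not the authors' hardware or test code.

```python
# Hedged sketch of the inhibitory outer-product storage scheme above.
import numpy as np

rng = np.random.default_rng(0)
N_UNITS, N_MEMS = 32, 12
# dilute-coded random memories: a small fraction of units active in each
memories = (rng.random((N_MEMS, N_UNITS)) < 0.15).astype(int)

# T_ij = -1 if units i and j are never co-active in any memory, 0 otherwise
co_active = memories.T @ memories
T = np.where(co_active == 0, -1, 0)
np.fill_diagonal(T, 0)                       # no self-feedback

def recall(prompt, steps=2000):
    """Neurons are normally ON and are driven OFF when inhibited via T."""
    v = prompt.copy()
    for _ in range(steps):
        k = rng.integers(N_UNITS)            # asynchronous random update
        v[k] = 1 if T[k] @ v >= 0 else 0     # net inhibition switches the unit off
    return v
```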
CONCLUSIONS

Synaptic CMOS chips containing 1024 programmable binary synapses in a 32X32 array have been designed, fabricated, and tested. These synaptic chips are designed to serve as basic building blocks for large multi-chip synaptic networks. The use of long channel MOSFETs as either resistive or current source connection elements meets the "weak" connection and consistency requirements. Alternatively, CMOS-based synaptic test chips with specially deposited thin film high-valued resistors, in series with FET switches, offer an attractive approach to high density programmable synaptic chips. A 32-neuron breadboard system incorporating a 32X32 NMOSFET synaptic chip in a feedback configuration exhibits excellent stability and recall performance as an associative memory. Using a discrete neuron array, convergence times of 10-50 microseconds have been demonstrated. With optimization of the input/output wiring layout and the use of high speed neuron electronics, convergence times can certainly be reduced to less than a microsecond.

ACKNOWLEDGEMENTS

This work was performed by the Jet Propulsion Laboratory, California Institute of Technology, and was sponsored by the Joint Tactical Fusion Program Office, through an agreement with the National Aeronautics and Space Administration. The authors would like to thank John Lambe for his invaluable suggestions, T. Duong for his assistance in the breadboard hardware development, J. Lamb and S. Thakoor for their help in the thin film resistor deposition, and R. Nixon and S. Chang for their assistance in the chip layout design.

REFERENCES
1. J. Lambe, A. Moopenn, and A.P. Thakoor, Proc. AIAA/ACM/NASA/IEEE Computers in Aerospace V, 160 (1985).
2. A.P. Thakoor, J.L. Lamb, A. Moopenn, and S.K. Khanna, MRS Proc. 95, 627 (1987).
3. W. Hubbard, D. Schwartz, J. Denker, H.P. Graf, R. Howard, L. Jackel, B. Straughn, and D. Tennant, AIP Conf. Proc. 151, 227 (1986).
4. M.A. Sivilotti, M.R. Emerling, and C. Mead, AIP Conf. Proc. 151, 408 (1986).
5. J.P. Sage, K. Thompson, and R.S. Withers, AIP Conf. Proc. 151, 381 (1986).
6. J.J. Hopfield, Proc. Nat. Acad. Sci., 81, 3088 (1984).
7. J.J. Hopfield, Proc. Nat. Acad. Sci., 79, 2554 (1982).
8. S.M. Sze, "Semiconductor Devices - Physics and Technology" (Wiley, New York, 1985), p. 205.
9. J.L. Lamb, A.P. Thakoor, A. Moopenn, and S.K. Khanna, J. Vac. Sci. Tech. A 5(4), 1407 (1987).
THE BOLTZMANN PERCEPTRON NETWORK: A MULTI-LAYERED FEED-FORWARD NETWORK EQUIVALENT TO THE BOLTZMANN MACHINE

Eyal Yair and Allen Gersho
Center for Information Processing Research
Department of Electrical & Computer Engineering
University of California, Santa Barbara, CA 93106

ABSTRACT

The concept of the stochastic Boltzmann machine (BM) is attractive for decision making and pattern classification purposes since the probability of attaining the network states is a function of the network energy. Hence, the probability of attaining particular energy minima may be associated with the probabilities of making certain decisions (or classifications). However, because of its stochastic nature, the complexity of the BM is fairly high and therefore such networks are not very likely to be used in practice. In this paper we suggest a way to alleviate this drawback by converting the stochastic BM into a deterministic network which we call the Boltzmann Perceptron Network (BPN). The BPN is functionally equivalent to the BM but has a feed-forward structure and low complexity. No annealing is required. The conditions under which such a conversion is feasible are given. A learning algorithm for the BPN based on the conjugate gradient method is also provided which is somewhat akin to the backpropagation algorithm.

(This work was supported by the Weizmann Foundation for scientific research, by the University of California MICRO program, and by Bell Communications Research, Inc.)

INTRODUCTION

In decision-making applications, it is desirable to have a network which computes the probabilities of deciding upon each of M possible propositions for any given input pattern. In principle, the Boltzmann machine (BM) (Hinton, Sejnowski and Ackley, 1984) can provide such a capability. The network is composed of a set of binary units connected through symmetric connection links. The units randomly and asynchronously change their values in {0,1} according to a stochastic transition rule. The transition rule used by Hinton et al. defines the probability of a unit to be in the 'on' state as the logistic function of the energy change resulting from changing the value of that unit. The BM can be described by an ergodic Markov chain in which the thermal equilibrium probability of attaining each state obeys the Boltzmann distribution, which is a function of only the energy. By associating the set of possible propositions with subsets of network states, the probability of deciding upon each of these propositions can be measured by the probability of attaining the corresponding set of states. This probability is also affected by the temperature. As the temperature increases, the Boltzmann probability distribution becomes more uniform and thus the decision made is 'vague'. The lower the temperature, the greater is the probability of attaining states with lower energy, thereby leading to more 'distinctive' decisions. This approach, while very attractive in principle, has two major drawbacks which make the complexity of the computations non-feasible for nontrivial problems. The first is the need for thermal equilibrium in order to obtain the Boltzmann distribution. To make distinctive decisions a low temperature is required. This implies slower convergence towards thermal equilibrium. Generally, the method used to reach thermal equilibrium is simulated annealing (SA) (Kirkpatrick et
al., 1983), in which the temperature starts from a high value and is gradually reduced to the desired final value. In order to avoid 'freezing' of the network, the cooling schedule should be fairly slow. SA is thus time consuming and computationally expensive. The second drawback is due to the stochastic nature of the computation. Since the network state is a random vector, the desired probabilities have to be estimated by accumulating statistics of the network behavior for only a finite period of time. Hence, a trade-off between speed and accuracy is unavoidable.

In this paper, we propose a mechanism to alleviate the above computational drawbacks by converting the stochastic BM into a functionally equivalent deterministic network, which we call the Boltzmann Perceptron Network (BPN). The BPN circumvents the need for a Monte Carlo type of computation and instead evaluates the desired probabilities using a multilayer perceptron-like network. The very time consuming learning process of the BM is similarly replaced by a deterministic learning scheme, somewhat akin to the backpropagation algorithm, which is computationally affordable. The similarity between the learning algorithm of a BM having a layered structure and that of a two-layer perceptron has recently been pointed out by Hopfield (1987). In this paper we further elaborate on such an equivalence between the BM and the new perceptron-like network, and give the conditions under which the conversion of the stochastic BM into the deterministic BPN is possible. Unlike the original BM, the BPN is virtually always in thermal equilibrium and thus SA is no longer required. Nevertheless, the temperature still plays the same role, and varying it may be beneficial to control the 'softness' of the decisions made by the BPN. Using the BPN as a soft classifier is described in detail in (Yair and Gersho, 1989).

THE BOLTZMANN PERCEPTRON NETWORK

Suppose we have a network of K units connected through symmetric connection links with no self-feedback, so that the connection matrix Γ is symmetric and zero-diagonal. Let us categorize the units into three different types: input, output and hidden units. The input pattern will be supplied to the network by clamping the input units, denoted by x = (x_1, ..., x_i, ..., x_I)^T, with this pattern; x is a real-valued vector in R^I. The output of the network will be observed on the output units y = (y_1, ..., y_m, ..., y_M)^T, which is a binary vector. The remaining units, denoted v = (v_1, ..., v_j, ..., v_J)^T, are the hidden units, which are also binary-valued. The hidden and output units asynchronously and randomly change their binary values in {0,1} according to the inputs they receive from other units. The state of the network will be denoted by the vector u, which is partitioned as follows: u = (x^T, v^T, y^T)^T. The energy associated with state u is denoted by E_u and is given by: E_u = -(1/2) u^T Γ u - u^T θ, (1) where θ is a vector of bias values, partitioned to comply with the partition of u as follows: θ^T = (θ_x^T, θ_v^T, θ_y^T). The transition from one state to another is performed by selecting one unit at a time, say unit k, at random, and determining its output value according to the following stochastic rule: set the output of the unit to 1 with probability p_k, and to 0 with probability 1 - p_k. The parameter p_k is determined locally by the k-th unit as a function of the energy change ΔE_k in the following fashion: p_k = g(ΔE_k), g(x) = 1/(1 + e^{-βx}), (2) where ΔE_k = E_u(unit k is off) - E_u(unit k is on), and β = 1/T is a control parameter; T is called the temperature and g(·) is the logistic function.
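A minimal software sketch of this transition rule (not from the paper) may help fix the notation: with a symmetric, zero-diagonal Γ, the energy gap ΔE_k reduces to the net input of unit k.

```python
# Hedged sketch of the stochastic update of Eq. (2).
import numpy as np

def gibbs_step(u, Gamma, theta, beta, rng):
    """One asynchronous update of a Boltzmann machine with energy
    E(u) = -0.5 u^T Gamma u - theta^T u (Gamma symmetric, zero diagonal)."""
    k = rng.integers(len(u))
    # dE_k = E(unit k off) - E(unit k on); for zero-diagonal Gamma this is
    # the net input to unit k
    dE = Gamma[k] @ u + theta[k]
    p_on = 1.0 / (1.0 + np.exp(-beta * dE))  # logistic rule, Eq. (2)
    u[k] = 1 if rng.random() < p_on else 0
    return u
```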
With this transition rule, the thermal equilibrium probability P_u of attaining a state u obeys the Boltzmann distribution: P_u = (1/Z_x) e^{-βE_u}, (3) where Z_x, called the partition function, is a normalization factor (independent of v and y) such that the sum of P_u over all the 2^{J+M} possible states is unity. In order to use the network in a deterministic fashion, rather than accumulating statistics while observing its random behavior, we should be able to explicitly compute the probability of attaining a certain vector y on the output units while x is clamped on the input units. This probability, denoted by P_y|x, can be expressed as: P_y|x = Σ_{v∈B_J} P_{v,y|x} = (1/Z_x) Σ_{v∈B_J} e^{-βE_{v,y|x}}, (4) where B_J is the set of all binary vectors of length J, and v,y|x denotes a state u in which a specific input vector x is clamped. The explicit evaluation of the desired probabilities therefore involves the computation of the partition function, for which the number of operations grows exponentially with the number of units; that is, the complexity is O(2^{J+M}). Obviously, this is computationally unacceptable. Nevertheless, we shall see that under a certain restriction on the connection matrix Γ the explicit computation of the desired probabilities becomes possible with a complexity of O(JM), which is computationally feasible.

Let us assume that for each input pattern we have M possible propositions, which are associated with the M output vectors I_M = {i_1, ..., i_m, ..., i_M}, where i_m is the m-th column of the M×M identity matrix. Any state of the network having output vector y = i_m (for any m) will be denoted by v,m|x and will be called a feasible state. All other state vectors v,y|x with y ∉ I_M will be considered as intermediate steps between two feasible states. This redefinition of the network states is equivalent to redefining the states of the underlying Markov model of the network and thus conserves the equilibrium Boltzmann distribution. The probability of proposition m for a given input x, denoted by P_m|x, will be taken as the probability of obtaining output vector y = i_m given that the output is one of the feasible values. That is, P_m|x = Pr(y = i_m | x, y ∈ I_M), (5) which can be computed from (4) by restricting the state space to the 2^J M feasible state vectors and by setting y = i_m. The partition function, conditioned on restricting y to lie in the set of feasible outputs I_M, is denoted by Z̃_x and is given by: Z̃_x = Σ_{m=1}^M Σ_{v∈B_J} e^{-βE_{v,m|x}}. (6)

Let us now partition the connection matrix Γ to comply with the partition of the state vector and rewrite the energy for the feasible state v,m|x as: -E_{v,m|x} = v^T(Rx + Q i_m + (1/2)D_2 v + θ_v) + i_m^T(Wx + (1/2)D_3 i_m + θ_y) + x^T((1/2)D_1 x + θ_x). (7) Since x is clamped on the input units, the last term in the energy expression serves only as a bias which is independent of the binary units v and y. Therefore, without any loss of generality it may be assumed that D_1 = 0 and θ_x = 0. The second term, denoted by T_m|x, can be simplified since D_3 has a zero diagonal. Hence, T_m|x = Σ_{i=1}^I w_mi x_i + s_m, (8) where the w_mi are the elements of W and s_m is the m-th component of θ_y. The absence of the matrix D_3 in the energy expression means that interconnections between output units have no effect on the probability of attaining output vectors y ∈ I_M, and they may be assumed absent without any loss of generality. Defining L_m(x) to be: L_m(x) = βT_m|x + log Σ_{v∈B_J} exp[βv^T(Rx + q_m + (1/2)D_2 v + θ_v)], (9) in which q_m is the m-th column of Q, the desired probabilities P_m|x, for m = 1, ...,
M, are obtained using (4) and (7) as a function of these quantities as follows: P_m|x = (1/Z̃_x) e^{L_m(x)}, with Z̃_x = Σ_{m=1}^M e^{L_m(x)}. (10) The complexity of evaluating the desired probabilities P_m|x is still exponential in the number of hidden units J due to the sum in (9). However, if we impose the restriction that D_2 = 0, namely, that the hidden units are not directly interconnected, then this sum becomes separable in the components v_j and thus can be decomposed into the product of only J terms. This restricted connectivity of course imposes some restrictions on the capability of the network compared to that of a fully connected network. On the other hand, it allows the computation of the desired probabilities in a deterministic way with the attractive complexity of only O(JM) operations. The tedious estimation of conditional expectations commonly required by the learning algorithm for a BM, and the annealing procedure, are avoided, and an accurate and computationally affordable scheme becomes possible. We thus suggest a trade-off in which the operation and learning of the BM are tremendously simplified and the exact decision probabilities are computed (rather than their statistical estimates), at the expense of a restricted connectivity: no interconnections are allowed between the hidden units. Hence, in our scheme, the connection matrix Γ becomes zero block-diagonal, meaning that the network has connections only between units of different categories. This structure is shown schematically in Figure 1.

Figure 1. Schematic architecture of the stochastic BM.
Figure 2. Block diagram of the corresponding deterministic BPN.

By applying the property D_2 = 0 to (9), the sum over the space of hidden units, which can be explicitly written as the sum over all the J components of v, can be decomposed using the separability of the different v_j components into a sum of only J terms as follows: L_m(x) = βT_m|x + Σ_{j=1}^J f(V_j^m|x), (11a) where f(z) = log(1 + e^{βz}), (11b) V_j^m|x = Σ_{i=1}^I r_ji x_i + q_jm + c_j is the net input to hidden unit j (with r_ji the elements of R and c_j the components of θ_v), and f(·) is called the activation function. Note that as β is increased, f(·) approaches the linear threshold function, in which a zero response is obtained for a negative input and a linear one (with slope β) for a positive input. Finally, the desired probabilities P_m|x can be expressed as a function of the L_m(x) (m = 1, ..., M) in an expression which can be regarded as the generalization of the logistic function to M inputs: P_m|x = [1 + Σ_{n≠m} e^{-L_mn(x)}]^{-1}, where L_mn(x) = L_m(x) - L_n(x). (12) Eqs. (8) and (11) describe a two-layer feed-forward perceptron-like subnetwork which uses the nonlinearity f(·). It evaluates the quantity L_m(x), which we call the score of the m-th proposition. Eq. (12) describes a competition between the scores L_m(x) generated by the M subnetworks (m = 1, ..., M), which we call a soft competition with lateral inhibition. That is, if several scores receive relatively high values compared to the others, they will share, according to their relative strengths, the total amount (unity) of probability, while inhibiting the remaining probabilities to approach zero. For example, if one of the scores, say L_k(x), is large compared to all the other scores, then the exponentiation of the pairwise score differences will result in P_k|x ≈ 1 while the remaining probabilities approach zero. Specifically, for any n ≠ k, P_n|x = exp(-L_kn(x)), which is essentially zero if L_kn(x) is sufficiently large.
In other words, by being large compared to the others, L_k(x) won the competition, so that the corresponding probability P_k|x approaches unity, while all the remaining probabilities have been attenuated by the high value of L_k(x) to approach zero. Let us examine the effect of the gain β on this competition. When β is increased, the slope of the activation function f(·) is increased, thereby accentuating the differences between the M contenders. In the limit β → ∞, one of the L_m(x) will always be sufficiently large compared to the others, and thus only one proposition will win. The competition then becomes a winner-take-all competition. In this case, the network becomes a maximum a posteriori (MAP) decision scheme in which the L_m(x) play the role of nonlinear discriminant functions and the most probable proposition for the given input pattern is chosen: P_k|x = 1 for k = argmax_m {L_m(x)}, and P_n|x = 0 for n ≠ k. (13) This result coincides with our earlier observation that the temperature controls the 'softness' of the decision. The lower the temperature, the 'harder' is the competition and the more distinctive are the decisions. However, in contrast to the stochastic network, there is no need to gradually 'cool' the network to achieve a desired (low) temperature. Any desired value of β is directly applicable in the BPN scheme.

The above notion of soft competition has its merits in a wider scope of applications, apart from its role in the BPN classifier. In many competitive schemes a soft competition between a set of contenders has a substantial benefit in comparison to the winner-take-all paradigm. The above competition scheme, which can be implemented by a two-layer feed-forward network, thus offers a valuable scheme for such purposes. The block diagram of the BPN is depicted in Figure 2. The BPN is thus a four-layer feed-forward deterministic network. It is comprised of a two-layer perceptron-like network followed by a two-layer competition network. The competition can be 'hard' (winner-take-all) or 'soft' (graded decision) and is governed by a single gain parameter β.
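The complete forward pass of Eqs. (8), (11) and (12) is compact enough to sketch in code. The function below is illustrative, not the authors' implementation; the weight names follow the partitioned-connection notation above, and np.logaddexp evaluates f(z) = log(1 + e^{βz}) stably. Note that Eq. (12) is algebraically a softmax over the scores.

```python
# Hedged sketch of the BPN forward pass; array shapes are assumptions.
import numpy as np

def bpn_forward(x, R, Q, c, W, s, beta):
    """x: input (I,); R: input-to-hidden weights (J, I); Q: hidden-to-output
    weights (J, M); c: hidden biases (J,); W: input-to-output weights (M, I);
    s: output biases (M,).  Returns (P, V)."""
    T = W @ x + s                                # Eq. (8): T_m|x
    V = (R @ x + c)[:, None] + Q                 # V_jm: net hidden inputs, (J, M)
    # Eq. (11): scores L_m(x) with f(z) = log(1 + exp(beta*z))
    L = beta * T + np.logaddexp(0.0, beta * V).sum(axis=0)
    # Eq. (12): soft competition over the scores
    P = np.exp(L - L.max())
    return P / P.sum(), V
```

Increasing beta sharpens the competition toward the winner-take-all limit of Eq. (13), exactly as described above.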
To minimize G <ID a gradient based minimization search is used. Specifically, a Partial Conjugate Gradient (pcG) search (Fletcher and Reeves, 1964; Luenberger, 1984) was found to be significantly more efficient than the ordinary steepest descent approach which is so widely used in multilayer perceptrons. A further discussion supporting this finding is given in (Yair and Gersho. 1989). For each set of weights we thus have to be able to compute the gradient &= Ve G of the cost function GOO. Let us denote the components of the 'instantaneous' gradient by G!I%=oo% los"" G:':'%=oG% loqjm' G; l%=oG% 10cj. G:'/%=OO% lOw,,", G~I%=oG% lorji. To get the full gradient, the instantaneous components should be accurnuwhile the input patterns are presented (one at a time) to the network. until one full cycle through the whole training set is completed. taied It is straightforward to show that the gradient may be evaluated in a recursive manner. in a fashion somewhat similar to the evaluation of the gradient by the backpropagation algorithm used for feed-forward networks (Rumelhart et. al., 1986). The evaluation of the gradient is accomplished by propagating the errors e", 1% = Q", 1% - P '" 1% through a linear network, termed the Error Propagation Network (EPN), as follows: G: b = Xi G!I% M G~I%= ~ G~I% J ~ ",=1 "" (15) G"I% = x.I GjC 1% ji The only new variables required are the b;I%. given by b;I%=g(V7 1%). which can be 121 122 Yair and Gersho V; easily obtained by applying the logistic nonlinearity to 1%. The above error propagation scheme can also be written in a matrix form. Let us define the following notation: L = (e 11% ....ellll% ... ,eM I%)T. E% will be a diagonal M xM matrix whose diagonal is L. Bx = [b~ IX]. a J xM matrix. Let .Lv denote a column vector of length M whose comb ... )T. and the ponents' are alII's. Similarly we will define the vectors: GAI % = (... AI matrices: G % = [G~X] with the appropriate dimensions (for any A.. E and 1'\). Hence. the error propagation can be written as: G: G6b = -~L Gf 1% = _~ B% E% , Gwlx : G C 1% = G61 % x.T = Gf 1%.Lv ? Gr 1% = fic 1% ~T (16) The EPN is depicted in Figure 3. This is a linear system composed of inner and outer products between matrices. which can be efficiently implemented using a neural network. The gradient I is used in the PCG update formula in which a new set of weights is created and is used for the next update iteration. The learning scheme of the BPN is given in Figure 4. Diag left right Bx ?1M right G$/% IT GW /% Gq/lC GqX Gr/ll Figure 3. The mar Propagation Network (EPN). 'Diag' is a diagooalization operator. 'right' and 1efi' are right and left multipliel'l, respectively. Figure 4. The learning scheme. The BPN outputs, ~lC , are compared with the desired probabilities, 9% . The resulted errors,!% ,propagate through the EPN to form the gradient I which is used in the PCG alg. to create the DeW weights. SIMULATION RESULTS We now present several simulation results of two-class classification problems with Gaussian sources. That is. we have two propositions represented by class 0 or class 1. Suppose there are L random sources (i = 1... ,1..) over the input space. some of them are attributed to class O. and the others to class 1. Each time. a source is chosen according to an a priori probability P (i). When chosen. the i -th source then emits a pattern ~ according to a probability density Q% Ii 00. Measuring a pattern X. it is desired to decide upon the most probable origin class - in the binary decision problem (MAP classifier). 
SIMULATION RESULTS

We now present several simulation results for two-class classification problems with Gaussian sources. That is, we have two propositions, represented by class 0 and class 1. Suppose there are L random sources (i = 1, ..., L) over the input space, some of them attributed to class 0 and the others to class 1. Each time, a source is chosen according to an a priori probability P(i). When chosen, the i-th source emits a pattern x according to a probability density Q_x|i(x). Measuring a pattern x, it is desired to decide upon the most probable origin class (the binary decision problem, or MAP classifier), or to obtain some estimate of Q_1|x, the probability that class 1 emitted this pattern (the soft classification problem).

In the learning phase, a training set of size N was presented to the network, and the weights were iteratively modified by the learning scheme (Figure 4) until convergence. The final weights were used to construct a BPN classifier, which was then tested on new input patterns. The output classification probability of the BPN, P_1|x, was compared with the true (Bayesian) conditional probability, Q_1|x(x), which was computed analytically. Results are shown in Figures 5-7. In Figure 5, two symmetric equi-probable Gaussian sources with substantial overlap were used, one for each class. The network was trained on N = 8 patterns with gain β = 1. Figure 5b shows how the BPN performs for the problem given in Figure 5a. For β = 1, i.e., when the same gain is used in both the training and classification phases, there is an almost perfect match between the BPN output, P_1|x(x), denoted in the figures by 'β = 1', and the true curve, Q_1|x(x). For β = 10, the high-gain winner-take-all competition takes place and the classifier becomes, practically, a binary (yes/no) decision network. In Fig. 6, disconnected decision regions were formed by using four sources, two of which were attributed to each class. Again, a nearly perfect match can be seen between the actual (β = 1) and desired (Q_1|x) outputs. Also, the simplicity of making 'hard' classification decisions by increasing the gain is again illustrated (β = 10). In Fig. 7, the classifier was required to find the boundary (expressed by Q_1|x = 0.5) between two 2D classes.

Figure 5: Classification for Gaussian sources. (5.a) The two sources. (5.b) 'Soft' (β = 1) and 'hard' (β = 10) classifications versus Q_1|x. J indicates the number of hidden units used.
Figure 6: Classification for disconnected decision regions. (6.a) The sources used: dashed lines indicate class 0 and solid lines class 1. (6.b) Soft (β = 1) and hard (β = 10) classifications versus Q_1|x.
Figure 7: Classification in a 2D space. (7.a) The two classes and the true boundary indicated by Q_1|x = 0.5. (7.b) The boundary found by the BPN, marked by P_1|x = 0.5, versus the true one.

References
Fletcher, R., & Reeves, C.M. (1964). Function minimization by conjugate gradients. Computer J., 7, 149-154.
Hinton, G.E., Sejnowski, T.R., & Ackley, D.H. (1984). Boltzmann machines: constraint satisfaction networks that learn. Carnegie-Mellon Technical Report CMU-CS-84-119.
Hopfield, J.J. (1987). Learning algorithms and probability distributions in feed-forward and feed-back networks. Proc. Natl. Acad. Sci. USA, 84, 8429-8433.
Kirkpatrick, S., Gelatt, C.D., & Vecchi, M.P. (1983). Optimization by simulated annealing. Science, 220, 671-680.
Luenberger, D.G. (1984).
Linear and Nonlinear Programming. Addison-Wesley, Reading, Mass.
Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning internal representations by error propagation. In D.E. Rumelhart & J.L. McClelland (Eds.), Parallel Distributed Processing. MIT Press/Bradford Books.
Yair, E., & Gersho, A. (1989). The Boltzmann perceptron network: a soft classifier. Submitted to the Journal of Neural Networks, December 1988.
Learning continuous distributions: Simulations with field theoretic priors

Ilya Nemenman^{1,2} and William Bialek^2
1 Department of Physics, Princeton University, Princeton, New Jersey 08544
2 NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
nemenman@research.nj.nec.com, bialek@research.nj.nec.com

Abstract

Learning of a smooth but nonparametric probability density can be regularized using methods of Quantum Field Theory. We implement a field theoretic prior numerically, test its efficacy, and show that the free parameter of the theory ('smoothness scale') can be determined self-consistently by the data; this forms an infinite dimensional generalization of the MDL principle. Finally, we study the implications of one's choice of the prior and the parameterization and conclude that the smoothness scale determination makes density estimation very weakly sensitive to the choice of the prior, and that even wrong choices can be advantageous for small data sets.

One of the central problems in learning is to balance 'goodness of fit' criteria against the complexity of models. An important development in the Bayesian approach was thus the realization that there does not need to be any extra penalty for model complexity: if we compute the total probability that data are generated by a model, there is a factor from the volume in parameter space (the 'Occam factor') that discriminates against models with more parameters [1, 2]. This works remarkably well for systems with a finite number of parameters and creates a complexity 'razor' (after 'Occam's razor') that is almost equivalent to the celebrated Minimum Description Length (MDL) principle [3]. In addition, if the a priori distributions involved are strictly Gaussian, the ideas have also been proven to apply to some infinite-dimensional (nonparametric) problems [4]. It is not clear, however, what happens if we leave the finite dimensional setting to consider nonparametric problems which are not Gaussian, such as the estimation of a smooth probability density. A possible route to progress on the nonparametric problem was opened by noticing [5] that a Bayesian prior for density estimation is equivalent to a quantum field theory (QFT). In particular, there are field theoretic methods for computing the infinite dimensional analog of the Occam factor, at least asymptotically for large numbers of examples. These observations have led to a number of papers [6, 7, 8, 9] exploring alternative formulations and their implications for the speed of learning. Here we return to the original formulation of Ref. [5] and use numerical methods to address some of the questions left open by the analytic work [10]: What is the result of balancing the infinite dimensional Occam factor against the goodness of fit? Is the QFT inference optimal in using all of the information relevant for learning [11]? What happens if our learning problem is strongly atypical of the prior distribution?

Following Ref. [5], if N i.i.d. samples {x_i}, i = 1, ..., N, are observed, then the probability that a particular density Q(x) gave rise to these data is given by

P[Q(x)|{x_i}] = P[Q(x)] Π_{i=1}^N Q(x_i) / ∫[dQ(x)] P[Q(x)] Π_{i=1}^N Q(x_i),   (1)

where P[Q(x)] encodes our a priori expectations of Q. Specifying this prior on a space of functions defines a QFT, and the optimal least-squares estimator is then

Q_est(x|{x_i}) = ⟨Q(x) Q(x_1) Q(x_2) ... Q(x_N)⟩^{(0)} / ⟨Q(x_1) Q(x_2) ... Q(x_N)⟩^{(0)},   (2)

where ⟨...⟩^{(0)} means averaging with respect to the prior. Since Q(x) ≥ 0, it is convenient to define an unconstrained field φ(x), Q(x) = (1/ℓ_0) exp[-φ(x)]. Other definitions are also possible [6], but we think that most of our results do not depend on this choice.

The next step is to select a prior that regularizes the infinite number of degrees of freedom and allows learning. We want the prior P[φ] to make sense as a continuous theory, independent of the discretization of x on small scales. We also require that when we estimate the distribution Q(x) the answer must be everywhere finite. These conditions imply that our field theory must be convergent at small length scales. For x in one dimension, a minimal choice is

P[φ(x)] = (1/Z) exp[-(ℓ^{2η-1}/2) ∫ dx (∂_x^η φ)^2] δ[(1/ℓ_0) ∫ dx e^{-φ(x)} - 1],   (3)

where η > 1/2, Z is the normalization constant, and the δ-function enforces normalization of Q. We refer to ℓ and η as the smoothness scale and the exponent, respectively. In [5] this theory was solved for large N and η = 1:

⟨Π_{i=1}^N Q(x_i)⟩^{(0)} ≈ e^{-S_eff[φ_cl; {x_i}]},   (4)
S_eff = (ℓ/2) ∫ dx (∂_x φ_cl)^2 + Σ_{i=1}^N φ_cl(x_i) + (1/2) ∫ dx √(N Q_cl(x)/ℓ),   (5)
ℓ ∂_x^2 φ_cl(x) + (N/ℓ_0) e^{-φ_cl(x)} = Σ_{i=1}^N δ(x - x_i),   (6)

where φ_cl is the 'classical' (maximum likelihood, saddle point) solution. In the effective action [Eq. (5)], it is the square root term that arises from integrating over fluctuations around the classical solution (Occam factors). It was shown that Eq. (4) is nonsingular even at finite N, that the mean value of φ_cl converges to the negative logarithm of the target distribution P(x) very quickly, and that the variance of fluctuations ψ(x) = φ(x) - [-log ℓ_0 P(x)] falls off as ~ 1/√(ℓ N P(x)). Finally, it was speculated that if the actual ℓ is unknown one may average over it and hope that, much as in Bayesian model selection [2], the competition between the data and the fluctuations will select the optimal smoothness scale ℓ*.

At first glance the theory seems to look almost exactly like a Gaussian Process [4]. This impression is produced by the Gaussian form of the smoothness penalty in Eq. (3), and by the fluctuation determinant that plays against the goodness of fit in the smoothness scale (model) selection. However, both similarities are incomplete. The Gaussian penalty in the prior is amended by the normalization constraint, which gives rise to the exponential term in Eq. (6), and violates many familiar results that hold for Gaussian Processes, the representer theorem [12] being just one of them. In the semi-classical limit of large N, Gaussianity is restored approximately, but the classical solution is extremely non-trivial, and the fluctuation determinant is only the leading term of the Occam razor, not the complete razor as it is for a Gaussian Process. In addition, it has no data dependence and is thus remarkably different from the usual determinants arising in the literature.

The algorithm to implement the discussed density estimation procedure numerically is rather simple. First, to make the problem well posed [10, 11] we confine x to a box 0 ≤ x ≤ L with periodic boundary conditions. The boundary value problem Eq. (6) is then solved by a standard 'relaxation' (or Newton) method of iterative improvements to a guessed solution [13] (the target precision is always 10^{-5}). The independent variable x ∈ [0,1] is discretized in equal steps [10^4 for Figs. (1.a-2.b), and 10^5 for Figs. (3.a, 3.b)]. We use an equally spaced grid to ensure stability of the method, while small step sizes are needed since the scale for variation of φ_cl(x) is [5]

δx ~ √(ℓ/(N P(x))),   (7)

which can be rather small for large N or small ℓ.
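A one-line check (not from the paper) of Eq. (7) shows why such fine grids are needed; the peak density value P_max below is an assumed, figure-scale number.

```python
# Hedged sketch: P_max is an assumption, not a value quoted in the text.
ell, N, P_max = 0.05, 100_000, 3.0      # smallest ell and largest N used
delta_x = (ell / (N * P_max)) ** 0.5
print(f"delta_x ~ {delta_x:.1e}")       # ~4e-4, so 10^4-10^5 grid points are sensible
```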
Since Q(x) ≥ 0, it is convenient to define an unconstrained field φ(x), Q(x) ≡ (1/ℓ₀) exp[−φ(x)]. Other definitions are also possible [6], but we think that most of our results do not depend on this choice.

The next step is to select a prior that regularizes the infinite number of degrees of freedom and allows learning. We want the prior P[φ] to make sense as a continuous theory, independent of discretization of x on small scales. We also require that when we estimate the distribution Q(x) the answer must be everywhere finite. These conditions imply that our field theory must be convergent at small length scales. For x in one dimension, a minimal choice is

P[\phi(x)] = \frac{1}{Z} \exp\left[ -\frac{\ell^{2\eta-1}}{2} \int dx\, \left(\partial_x^{\eta} \phi\right)^2 \right] \delta\!\left[ \frac{1}{\ell_0} \int dx\, e^{-\phi(x)} - 1 \right],  (3)

where η > 1/2, Z is the normalization constant, and the δ-function enforces normalization of Q. We refer to ℓ and η as the smoothness scale and the exponent, respectively. In [5] this theory was solved for large N and η = 1:

\left\langle \prod_{i=1}^{N} Q(x_i) \right\rangle^{(0)} \approx \exp\left(-S_{\rm eff}[\phi_{\rm cl}; \{x_i\}]\right),  (4)

S_{\rm eff} = \frac{\ell}{2} \int dx\, (\partial_x \phi_{\rm cl})^2 + \sum_{i=1}^{N} \phi_{\rm cl}(x_i) + \frac{1}{2} \int dx\, \sqrt{\frac{N Q_{\rm cl}(x)}{\ell}},  (5)

\ell\, \partial_x^2 \phi_{\rm cl}(x) = \sum_{i=1}^{N} \delta(x - x_i) - \frac{N}{\ell_0} e^{-\phi_{\rm cl}(x)},  (6)

where φ_cl is the 'classical' (maximum likelihood, saddle point) solution. In the effective action [Eq. (5)], it is the square root term that arises from integrating over fluctuations around the classical solution (Occam factors). It was shown that Eq. (4) is nonsingular even at finite N, that the mean value of φ_cl converges to the negative logarithm of the target distribution P(x) very quickly, and that the variance of the fluctuations ψ(x) ≡ φ(x) − [−log ℓ₀P(x)] falls off as ∼ 1/√(ℓ N P(x)). Finally, it was speculated that if the actual ℓ is unknown one may average over it and hope that, much as in Bayesian model selection [2], the competition between the data and the fluctuations will select the optimal smoothness scale ℓ*.

At first glance the theory seems to look almost exactly like a Gaussian Process [4]. This impression is produced by a Gaussian form of the smoothness penalty in Eq. (3), and by the fluctuation determinant that plays against the goodness of fit in the smoothness scale (model) selection. However, both similarities are incomplete. The Gaussian penalty in the prior is amended by the normalization constraint, which gives rise to the exponential term in Eq. (6), and violates many familiar results that hold for Gaussian Processes, the representer theorem [12] being just one of them. In the semi-classical limit of large N, Gaussianity is restored approximately, but the classical solution is extremely non-trivial, and the fluctuation determinant is only the leading term of the Occam's razor, not the complete razor as it is for a Gaussian Process. In addition, it has no data dependence and is thus remarkably different from the usual determinants arising in the literature.

The algorithm to implement the discussed density estimation procedure numerically is rather simple. First, to make the problem well posed [10, 11] we confine x to a box 0 ≤ x ≤ L with periodic boundary conditions. The boundary value problem Eq. (6) is then solved by a standard 'relaxation' (or Newton) method of iterative improvements to a guessed solution [13] (the target precision is always 10^{-5}). The independent variable x ∈ [0, 1] is discretized in equal steps [10^4 for Figs. (1.a-2.b), and 10^5 for Figs. (3.a, 3.b)]. We use an equally spaced grid to ensure stability of the method, while small step sizes are needed since the scale for variation of φ_cl(x) is [5]

\delta x \sim \sqrt{\frac{\ell}{N P(x)}},  (7)

which can be rather small for large N or small ℓ.
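The relaxation step can be sketched numerically. Below is a minimal Newton solver for the saddle-point equation as reconstructed in Eq. (6) above, with the normalisation Lagrange multiplier fixed at its large-N value; the grid sizes and the Beta-distributed toy sample are assumptions of the sketch, not the paper's setup.

import numpy as np

# Newton ('relaxation') iteration for
#   ell * phi'' = sum_i delta(x - x_i) - (N/ell0) exp(-phi)
# on a periodic grid; the Lagrange multiplier enforcing normalisation is
# taken at its large-N value (an assumption of this sketch).
rng = np.random.default_rng(1)
L, M, ell = 1.0, 400, 0.2
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
data = rng.beta(2.0, 5.0, size=200)           # hypothetical sample
N, ell0 = len(data), L

rho = np.histogram(data, bins=M, range=(0.0, L))[0] / dx  # sum of deltas

# periodic second-derivative operator
D2 = np.eye(M, k=1) + np.eye(M, k=-1) - 2.0 * np.eye(M)
D2[0, -1] = D2[-1, 0] = 1.0
D2 /= dx ** 2

phi = np.zeros(M)
for _ in range(50):                           # Newton iterations
    F = ell * D2 @ phi + (N / ell0) * np.exp(-phi) - rho
    J = ell * D2 - (N / ell0) * np.diag(np.exp(-phi))
    step = np.linalg.solve(J, F)
    phi -= step
    if np.max(np.abs(step)) < 1e-5:           # target precision, as in the text
        break

Q_cl = np.exp(-phi) / ell0                    # classical density estimate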
Since the theory is short scale insensitive, we can generate random probability densities chosen from the prior by replacing φ with its Fourier series and truncating the latter at some sufficiently high wavenumber k_c [k_c = 1000 for Figs. (1.a-2.b), and 5000 for Figs. (3.a, 3.b)]. Then Eq. (3) enforces the amplitude of the k'th mode to be distributed a priori normally with the standard deviation

\sqrt{2}\, \sigma_k = \ell^{\eta - 1/2} \left( \frac{L}{2\pi k} \right)^{\eta}.  (8)

Coded in such a way, the simulations are extremely computationally intensive. Therefore, the Monte Carlo averagings given here are only over 500 runs, fluctuation determinants are calculated according to Eq. (5), not using numerical path integration, and Q_cl = (1/ℓ₀) exp[−φ_cl] is always used as an approximation to Q_est.

As an example of the algorithm's performance, Fig. (1.a) shows one particular learning run for η = 1 and ℓ = 0.2. We see that singularities and overfitting are absent even for N as low as 10. Moreover, the approach of Q_cl(x) to the actual distribution P(x) is remarkably fast: for N = 10, they are similar; for N = 1000, very close; for N = 100000, one needs to look carefully to see the difference between the two. To quantify this similarity of distributions, we compute the Kullback-Leibler divergence D_KL(P‖Q_est) between the true distribution P(x) and its estimate Q_est(x), and then average over the realizations of the data points and the true distribution. As discussed in [11], this learning curve Λ(N) measures the (average) excess cost incurred in coding the N+1'st data point because of the finiteness of the data sample, and thus can be called the "universal learning curve". If the inference algorithm uses all of the information contained in the data that is relevant for learning ("predictive information" [11]), then [5, 9, 11, 10]

\Lambda(N) \sim (L/\ell)^{1/2\eta}\, N^{1/2\eta - 1}.  (9)

We test this prediction against the learning curves in the actual simulations. For η = 1 and ℓ = 0.4, 0.2, 0.05, these are shown on Fig. (1.b). One sees that the exponents are extremely close to the expected 1/2, and the ratios of the prefactors are within the errors from the predicted scaling ∼ 1/√ℓ. All of this means that the proposed algorithm for finding densities not only works, but is at most a constant factor away from being optimal in using the predictive information of the sample set.

Figure 1: (a) Q_cl found for different N at ℓ = 0.2, together with the actual distribution. (b) Λ as a function of N and ℓ. The best fits are: for ℓ = 0.4, Λ = (0.54 ± 0.07) N^{-0.483±0.014}; for ℓ = 0.2, Λ = (0.83 ± 0.08) N^{-0.493±0.009}; for ℓ = 0.05, Λ = (1.64 ± 0.16) N^{-0.507±0.009}.
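One way to draw densities from the prior, under the reading of Eq. (8) given above (the placement of the factor √2 follows that reconstruction), is a truncated Fourier series with Gaussian mode amplitudes; the sketch below uses illustrative grid and cutoff values.

import numpy as np

# Draw one random density from the prior, Eq. (3), via a truncated
# Fourier series whose mode amplitudes follow Eq. (8) as reconstructed.
rng = np.random.default_rng(2)
L, M, ell, eta, kc = 1.0, 1000, 0.2, 1.0, 200
x = np.linspace(0.0, L, M, endpoint=False)

phi = np.zeros(M)
for k in range(1, kc + 1):
    sigma_k = ell ** (eta - 0.5) * (L / (2 * np.pi * k)) ** eta / np.sqrt(2.0)
    a, b = rng.normal(0.0, sigma_k, size=2)   # cosine / sine amplitudes
    phi += a * np.cos(2 * np.pi * k * x / L) + b * np.sin(2 * np.pi * k * x / L)

Q = np.exp(-phi)
Q /= Q.sum() * (L / M)    # enforce the delta-function constraint numerically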
Next we investigate how one's choice of the prior influences learning. We first stress that there is no such thing as a wrong prior. If one admits a possibility of it being wrong, then it does not encode all of the a priori knowledge! It does make sense, however, to ask what happens if the distribution we are trying to learn is an extreme outlier in the prior P[φ]. One way to generate such an example is to choose a typical function from a different prior P₁[φ], and this is what we mean by 'learning with a wrong prior.' If the prior is wrong in this sense, and learning is described by Eqs. (2-6), then we still expect the asymptotic behavior, Eq. (9), to hold; only the prefactors of Λ should change, and those must increase since there is an obvious advantage in having the right prior; we illustrate this in Figs. (2.a, 2.b). For Fig. (2.a), both P₁[φ] and P[φ] are given by Eq. (3), but P₁ has the 'actual' smoothness scale ℓ_a = 0.4, 0.05, and for P the 'learning' smoothness scale is ℓ = 0.2 (we show the case ℓ_a = ℓ = 0.2 again as a reference). The Λ ∼ 1/√N behavior is seen unmistakably. The prefactors are a bit larger (unfortunately, insignificantly) than the corresponding ones from Fig. (1.b), so we may expect that the 'right' ℓ, indeed, provides better learning (see later for a detailed discussion). Further, Fig. (2.b) illustrates learning when not only ℓ, but also η is 'wrong' in the sense defined above. We illustrate this for η_a = 2, 0.8, 0.6, 0 (remember that only η_a > 0.5 removes UV divergences). Again, the inverse square root decay of Λ should be observed, and this is evident for η_a = 2. The η_a = 0.8, 0.6, 0 cases are different: even for N as high as 10^5 the estimate of the distribution is far from the target, thus the asymptotic regime is not reached. This is a crucial observation for our subsequent analysis of the smoothness scale determination from the data. Remarkably, Λ (both averaged and in the single runs shown) is monotonic, so even in the cases of qualitatively less smooth distributions there still is no overfitting. On the other hand, Λ is well above the asymptote for η = 2 and small N, which means that initially too many details are expected and wrongly introduced into the estimate, but then they are almost immediately (N ∼ 300) eliminated by the data.

Following the argument suggested in [5], we now view P[φ], Eq. (3), as being a part of some wider model that involves a prior over ℓ. The details of that prior are irrelevant, however, if S_eff(ℓ), Eq. (5), has a minimum that becomes more prominent as N grows. We explicitly note that this mechanism is not tuning of the prior's parameters, but Bayesian inference at work: ℓ* emerges in a competition between the smoothness, the data, and the Occam terms to make S_eff smaller, and thus the total probability of the data larger.

Figure 2: (a) Λ as a function of N and ℓ_a. Best fits are: for ℓ_a = 0.4, Λ = (0.56 ± 0.08) N^{-0.477±0.015}; for ℓ_a = 0.05, Λ = (1.90 ± 0.16) N^{-0.502±0.008}. Learning is always with ℓ = 0.2. (b) Λ as a function of N, η_a, and ℓ_a. Best fits: for η_a = 2, ℓ_a = 0.1, Λ = (0.40 ± 0.05) N^{-0.493±0.013}; for η_a = 0.8, ℓ_a = 0.1, Λ = (1.06 ± 0.08) N^{-0.355±0.008}. ℓ = 0.2 for all graphs, but the one with η_a = 0, for which ℓ = 0.1.

In its turn, a larger probability means a shorter total code length. The data term, on average, is equal to N·D_KL(P‖Q_cl), and, for very regular P(x) (an implicit assumption in [5]), it is small. Thus only the kinetic and the Occam terms matter, and ℓ* ∼ N^{1/3} [5]. For less regular distributions P(x), this is not true [cf. Fig. (2.b)].
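In practice, the ℓ* selection just described amounts to scanning candidate smoothness scales and keeping the one that minimises the effective action; a hypothetical sketch, built on the S_eff form as reconstructed in Eq. (5) above and on the Newton solver sketched earlier (the name solve_eq6 is a placeholder, not a function from the paper):

import numpy as np

# Evaluate S_eff, Eq. (5) as reconstructed above, on a periodic grid.
def s_eff(phi, ell, data, x, dx, N, ell0):
    dphi = (np.roll(phi, -1) - phi) / dx               # periodic derivative
    kinetic = 0.5 * ell * np.sum(dphi ** 2) * dx        # smoothness term
    data_term = np.sum(np.interp(data, x, phi))         # sum_i phi_cl(x_i)
    occam = 0.5 * np.sum(np.sqrt(N * np.exp(-phi) / (ell0 * ell))) * dx
    return kinetic + data_term + occam

# Hypothetical usage with a saddle-point solver for Eq. (6):
# ell_grid = np.logspace(-2.0, 0.0, 20)
# ell_star = min(ell_grid,
#                key=lambda l: s_eff(solve_eq6(l), l, data, x, dx, N, ell0))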
For η = 1, Q_cl(x) approximates the large-scale features of P(x) very well, but details at scales smaller than ∼ √(ℓL/N) are averaged out. If P(x) is taken from the prior, Eq. (3), with some η_a, then these details fall off with the wave number k as ∼ k^{-η_a}. Thus the data term is ∼ N^{1.5-η_a} ℓ^{η_a-0.5} and is not necessarily small. For η_a < 1.5 this dominates the kinetic term and competes with the fluctuations to set

\ell^* \sim N^{(\eta_a - 1)/\eta_a}, \qquad \eta_a < 1.5.  (10)

There are two remarkable things about Eq. (10). First, for η_a = 1, ℓ* stabilizes at some constant value, which we expect to be equal to ℓ_a. Second, even for η ≠ η_a, Eqs. (9, 10) ensure that Λ scales as ∼ N^{1/2η_a - 1}, which is at worst a constant factor away from the best scaling, Eq. (9), achievable with the 'right' prior, η = η_a. So, by allowing ℓ* to vary with N we can correctly capture the structure of models that are qualitatively different from our expectations (η ≠ η_a) and produce estimates of Q that are extremely robust to the choice of the prior. To our knowledge, this feature has not been noted before in reference to a nonparametric problem.

We present simulations relevant to these predictions in Figs. (3.a, 3.b). Unlike in the previous Figures, the results are not averaged, due to extreme computational costs, so all our further claims have to be taken cautiously. On the other hand, selecting ℓ* in single runs has some practical advantages: we are able to ensure the best possible learning for any realization of the data. Fig. (3.a) shows single learning runs for various η_a and ℓ_a. In addition, to keep the Figure readable, we do not show runs with η_a = 0.6, 0.7, 1.2, 1.5, 3, and η_a → ∞, which is a finitely parameterizable distribution. All of these display a good agreement with the predicted scalings: Eq. (10) for η_a < 1.5, and ℓ* ∼ N^{1/3} otherwise. Next we calculate the KL divergence between the target and the estimate at ℓ = ℓ*; the average of this divergence over the samples and the prior is the learning curve [cf. Eq. (9)]. For η_a = 0.8, 2 we plot the divergences on Fig. (3.b) side by side with their fixed ℓ = 0.2 analogues. Again, the predictions clearly are fulfilled. Note that for η_a ≠ η there is a qualitative advantage in using the data-induced smoothness scale.

Figure 3: (a) Comparison of learning speed for the same data sets with different a priori assumptions. (b) Smoothness scale selection by the data. The lines that go off the axis for small N symbolize that S_eff monotonically decreases as ℓ → ∞.

The last four Figures have illustrated some aspects of learning with 'wrong' priors. However, all of our results may be considered as belonging to the 'wrong prior' class. Indeed, the actual probability distributions we used were not nonparametric continuous functions with smoothness constraints, but were composed of k_c Fourier modes, and thus had 2k_c parameters. For finite parameterization, asymptotic properties of learning usually do not depend on the priors (cf. [3, 11]), and priorless theories can be considered [14].
In such theories it would take well over 2k_c samples to even start to close in on the actual values of the parameters, and yet a lot more to get accurate results. However, using the 'wrong' continuous parameterization [φ(x)] we were able to obtain good fits for as few as 1000 samples [cf. Fig. (1.a)] with the help of the prior Eq. (3). Moreover, learning happened continuously and monotonically, without the huge chaotic jumps of overfitting that necessarily accompany any brute force parameter estimation method at low N. So, in some cases, a seemingly more complex model is actually easier to learn! Thus our claim: when data are scarce and the parameters are abundant, one gains even by using the regularizing powers of wrong priors. The priors select some large scale features that are the most important to learn first, and fill in the details as more data become available (see [11] on the relation of this to the Structural Risk Minimization theory). If the global features are dominant (arguably, this is generic), one actually wins in learning speed [cf. Figs. (1.b, 2.a, 3.b)]. If, however, small scale details are as important, then one at least is guaranteed to avoid overfitting [cf. Fig. (2.b)]. One can summarize this in an Occam-like fashion [11]: if two models provide equally good fits to data, the simpler one should always be used. In particular, the predictive information, which quantifies complexity [11], and of which Λ is the derivative, is ∼ N^{1/2η} in a QFT model, and ∼ k_c log N in the parametric case. So, for k_c > N^{1/2η}, one should prefer a 'wrong' QFT formulation to the correct finite parameter model. These results are very much in the spirit of our whole program: not only is the value of ℓ* selected that simplifies the description of the data, but the continuous parameterization itself serves the same purpose. This is an unexpectedly neat generalization of the MDL principle [3] to nonparametric cases.

Summary: The field theoretic approach to density estimation not only regularizes the learning process but also allows the self-consistent selection of smoothness criteria through an infinite dimensional version of the Occam factors. We have shown numerically that this works, even more clearly than was conjectured: for η_a < 1.5, the learning curve truly becomes a property of the data, and not of the Bayesian prior! If we can extend these results to other η_a and combine this work with the reparameterization invariant formulation of [7, 8], this should give a complete theory of Bayesian learning for one dimensional distributions, and this theory has no arbitrary parameters. In addition, if this theory properly treats the limit η_a → ∞, we should be able to see how the well-studied finite dimensional Occam factors and the MDL principle arise from a more general nonparametric formulation.

References

[1] D. MacKay, Neural Comp. 4, 415-448 (1992).
[2] V. Balasubramanian, Neural Comp. 9, 349-368 (1997), http://xxx.lanl.gov/abs/adap-org/9601001.
[3] J. Rissanen. Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore (1989).
[4] D. MacKay, NIPS Tutorial Lecture Notes (1997), ftp://wol.ra.phy.cam.ac.uk/pub/mackay/gp.ps.gz.
[5] W. Bialek, C. Callan, and S. Strong, Phys. Rev. Lett. 77, 4693-4697 (1996), http://xxx.lanl.gov/abs/cond-mat/9607180.
[6] T. Holy, Phys. Rev. Lett. 79, 3545-3548 (1997), http://xxx.lanl.gov/abs/physics/9706015.
[7] V. Periwal, Phys. Rev. Lett. 78, 4671-4674 (1997), http://xxx.lanl.gov/abs/hep-th/9703135.
[8] V. Periwal, Nucl. Phys. B 554 [FS], 719-730 (1999), http://xxx.lanl.gov/abs/adap-org/9801001.
[9] T. Aida, Phys. Rev. Lett. 83, 3554-3557 (1999), http://xxx.lanl.gov/abs/cond-mat/9911474.
[10] A more detailed version of our current analysis may be found in: I. Nemenman, Ph.D. Thesis, Princeton (2000), http://xxx.lanl.gov/abs/physics/0009032.
[11] W. Bialek, I. Nemenman, and N. Tishby. Preprint http://xxx.lanl.gov/abs/physics/0007070.
[12] G. Wahba. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, eds., Advances in Kernel Methods: Support Vector Learning, pp. 69-88. MIT Press, Cambridge, MA (1999), ftp://ftp.stat.wisc.edu/pub/wahba/nips97rr.ps.
[13] W. Press et al. Numerical Recipes in C. Cambridge UP, Cambridge (1988).
[14] V. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York (1998).
Place Cells and Spatial Navigation based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning

A. Arleo*, F. Smeraldi, S. Hug, and W. Gerstner
Centre for Neuro-Mimetic Systems, MANTRA, Computer Science Department
Swiss Federal Institute of Technology Lausanne, CH-1015 Lausanne EPFL, Switzerland

Abstract

We model hippocampal place cells and head-direction cells by combining allothetic (visual) and idiothetic (proprioceptive) stimuli. Visual input, provided by a video camera on a miniature robot, is preprocessed by a set of Gabor filters on 31 nodes of a log-polar retinotopic graph. Unsupervised Hebbian learning is employed to incrementally build a population of localized overlapping place fields. Place cells serve as basis functions for reinforcement learning. Experimental results for goal-oriented navigation of a mobile robot are presented.

1 Introduction

In order to achieve spatial learning, both animals and artificial agents need to autonomously locate themselves based on available sensory information. Neurophysiological findings suggest that the spatial self-localization of rodents is supported by place-sensitive and direction-sensitive cells. Place cells in the rat hippocampus provide a spatial representation in allocentric coordinates [1]. A place cell exhibits a high firing rate only when the animal is in a specific region of the environment, which defines the place field of the cell. Head-direction cells observed in the hippocampal formation encode the animal's allocentric heading in the azimuthal plane [2]. A directional cell fires maximally only when the animal's heading is equal to the cell's preferred direction, regardless of the orientation of the head relative to the body, of the rat's location, or of the animal's behavior.

We ask two questions. (i) How do we get place fields from visual input [3]? This question is non-trivial given that visual input depends on the direction of gaze. We present a computational model which is consistent with several neurophysiological findings concerning biological head-direction and place cells. Place-coding and directional sense are provided by two coupled neural systems, which interact with each other to form a single substrate for spatial navigation (Fig. 1(a)). Both systems rely on allothetic cues (e.g., visual stimuli) as well as idiothetic signals (e.g., proprioceptive cues) to establish stable internal representations. The resulting representation consists of overlapping place fields with properties similar to those of hippocampal place cells. (ii) What's the use of place cells for navigation [1]?

*Corresponding author, angelo.arleo@epfl.ch

Figure 1: (a) An overview of the entire system. Dark grey areas are involved in space representation, whereas light grey components form the head-direction circuit. Glossary: SnC: hypothetical snapshot cells, sLEC: superficial lateral entorhinal cortex, sMEC: superficial medial entorhinal cortex, DG: dentate gyrus, CA3-CA1: hippocampus proper, NA: nucleus accumbens, VIS: visual bearing cells, CAL: hypothetical calibration cells, HAV: head angular velocity cells, PSC: postsubiculum, ADN: anterodorsal nucleus, LMN: lateral mammillary nuclei. (b) A visual scene acquired by the robot during spatial learning. The image resolution is 422 x 316 pixels. The retinotopic sampling grid (white crosses) is employed to sample visual data by means of Gabor decomposition. Black circles represent maximally responding Gabor filters (the circle radius varies as a function of the filter's spatial frequency).
Black circles represent maximally responding Gabor filters (the circle radius varies as a function of the filter's spatial frequency). [I]? We show that a representation by overlapping place fields is a natural "state space" for reinforcement learning. A direct implementation of reinforcement learning on real visual streams would be impossible given the high dimensionality of the visual input space. A place field representation extracts the low-dimensional view manifold on which efficient reinforcement learning is possible. To validate our model in real task-environment contexts, we have tested it on a Khepera miniature mobile robot. Visual information is supplied by an on-board video camera. Eight infrared sensors provide obstacle detection and measure ambient light. Idiothetic signals are provided by the robot's dead-reckoning system. The experimental setup consists of an open-field square arena of about 80 x 80 cm in a standard laboratory background (Fig. 1(b?. The vision-based localization problem consists of (i) detecting a convenient lowdimensional representation of the continuous high-dimensional input space (images have a resolution of 422 x 316 pixels), (ii) learning the mapping function from the visual sensory space to points belonging to this representation. Since our robot moves on a twodimensional space with a camera pointing in the direction of motion, the high-dimensional visual space is not uniformly filled. Rather, all input data points lie on a low-dimensional surface which is embedded in an Euclidean space whose dimensionality is given by the total number of camera pixels. This low-dimensional description of the visual space is referred to as view manifold [4]. 2 Extracting the low-dimensional view manifold Hippocampal place fields are determined by a combination of highly-processed multimodal sensory stimuli (e.g., visual, auditory, olfactory, and somatosensory cues) whose mutual relationships code for the animal's location [1]. Nevertheless, experiments on rodents suggest that vision plays an eminent role in determining place cell activity [5]. Here, we focus on the visual pathway, and we propose a processing in four steps. As a first step, we place a retinotopic sampling grid on the image (Fig. I(b?. In total we have 31 grid points with high resolution only in a localized region of the view field (fovea), whereas peripheral areas are characterized by a low-resolution vision [6]. At each point of the grid we place 24 Gabor filters with different orientations and amplitudes. Gabor filters [7] provide a suitable mathematical model for biological simple cells [8]. Specifically, we employ a set of modified Gabor filters [9]. A modified Gabor filter Ii, tuned to orientation cPj and angular frequency WI = e /;l, corresponds to a Gaussian in the Log-polar frequency plane rather than in the frequency domain itself, and is defined by the Fourier function G(~ , cP) = A ? e- (/; -/;; )2 /2a~ (1) . e- (?-?tl 2 / 2a ; (~, cP) are coordinates in the Log-polar Fourier plane (~, cP) = (logll (wx, wy) 11, arct an(wy / wx )) (2) A key property of the Log-polar reference frame is that translations along cP correspond to where A is a normalization term, and rotations in the image domain, while translations along ~ correspond to scaling the image. In our implementation, we build a set of 24 modified Gabor filters,:F = {fi(WI , cPj ) 11 :::; l :::; 3, 1 :::; j :::; 8}, obtained by taking 3 angular frequencies WI , W2 , W 3 , and 8 orientations cPI, ... , cP8 . 
As a second step, we take the magnitude of the responses of these Gabor filters for detecting visual properties within video streams. While the Gabor filter itself has properties related to simple cells, the amplitude of the complex response does not depend on the exact position within the receptive field and has therefore properties similar to cortical complex cells. Thus, given an image I(x, y), we compute the magnitude of the response of all filters f_i for each retinal point \tilde{x}_k:

r_k(f_i) = \sqrt{ \left( \sum_{\tilde{x}} \mathrm{Re}(f_i(\tilde{x}))\, I(\tilde{x}_k + \tilde{x}) \right)^2 + \left( \sum_{\tilde{x}} \mathrm{Im}(f_i(\tilde{x}))\, I(\tilde{x}_k + \tilde{x}) \right)^2 },  (3)

where \tilde{x} varies over the area occupied by the filter f_i in the spatial domain.

The third step within the visual pathway of our model consists of interpreting visual cues by means of neural activity. We take a population of hypothetical snapshot cells (SnC in Fig. 1(a)) one synapse downstream from the Gabor filter layer. Let k be an index over all K filters forming the retinotopic grid. Given a new image I, a snapshot cell s ∈ SnC is created which receives afferents from all f_k filters. Connections from filters f_k to cell s are initialized according to w_{sk} = r_k, for all k = 1, ..., K. If, at a later point, the robot sees an image I', the firing activity r_s of cell s ∈ SnC is computed by

r_s = \exp\left( -\sum_k (\tilde{r}_k - w_{sk})^2 / 2\sigma^2 \right),  (4)

where \tilde{r}_k are the Gabor filter responses to image I'. Eq. (4) defines a radial basis function in the filter space that measures the similarity of the current image to the image stored in the weights w_{sk}. The width σ determines the discrimination capacity of the system for visual scene recognition.

As a final step, we apply unsupervised Hebbian learning to achieve spatial coding one synapse downstream from the SnC layer (sLEC in Fig. 1(a)). Indeed, the snapshot cell activity r_s defined in Eq. (4) depends on the robot's gaze direction, and does not code for a spatial location. In order to collect information from several gaze directions, the robot takes four snapshots corresponding to north, east, south, and west views at each location visited during exploration. To do this, it relies on the allocentric compass information provided by the directional system [2, 10]. For each visited location the robot creates four SnC snapshot cells, which are bound together to form a place cell in the sLEC layer. Thus, sLEC cell activity depends on a combination of several visual cues, which results in non-directional place fields (Fig. 2(a)) [11].

Figure 2: (a) A sample spatial receptive field of a sLEC cell in our model. The lighter a region, the higher the cell's firing rate when the robot is in that region of the arena. (b) A typical place field in the CA3-CA1 layer of the model.
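A minimal sketch of the snapshot-cell activity of Eq. (4); the stored weights and the current response vector are random placeholders standing in for the Gabor magnitudes of Eq. (3).

import numpy as np

# Snapshot-cell activity, Eq. (4): a radial basis function in the space
# of Gabor-filter magnitudes.  Inputs here are illustrative placeholders.
def snapshot_activity(responses, stored_weights, sigma=1.0):
    d2 = np.sum((responses - stored_weights) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

K = 31 * 24                                  # grid points x filters, as in the text
stored = np.random.default_rng(3).random(K)  # w_sk learned at snapshot time
current = stored + 0.05 * np.random.default_rng(4).standard_normal(K)
r_s = snapshot_activity(current, stored)     # close to 1 for a similar view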
3 Hippocampal CA1-CA3 place field representation

When relying on visual data only, the state space representation encoded by place cells does not fulfill the Markov hypothesis [12]. Indeed, distinct areas of the environment may provide identical visual cues and lead to singularities in the view manifold (sensory input aliasing). We employ idiothetic signals along with visual information in order to remove such singularities and solve the hidden-state problem. An extra-hippocampal path integrator drives Gaussian-tuned neurons modeling self-motion information (sMEC in Fig. 1(a)). A fundamental contribution to building the sMEC idiothetic space representation comes from head-direction cells (projection B in Fig. 1(a)). As the robot moves, sMEC cell activity changes according to self-motion signals and to the current heading of the robot as estimated by the directional system.

The firing activity r_m of a cell m ∈ sMEC is given by r_m = exp(−‖s_dr − s_m‖² / 2σ²), where s_dr is the robot's current position estimated by dead-reckoning, s_m is the center of the receptive field of cell m, and σ is the width of the Gaussian field.

Allothetic and idiothetic representations (i.e., the sLEC and sMEC place field representations, respectively) converge onto the CA3-CA1 region to form a stable spatial representation (Fig. 1(a)). On the one hand, unreliable visual data are compensated for by means of path integration. On the other hand, reliable visual information can calibrate the path integrator system and keep the dead-reckoning error bounded over time. Correlational learning is applied to combine visual cues and path integration over time. CA3-CA1 cells are recruited incrementally as exploration proceeds. For each new location, connections are established from all simultaneously active cells in sLEC and sMEC to newly recruited CA3-CA1 cells. Then, during the agent-environment interaction, Hebbian learning is applied to update the efficacy of the efferents from sLEC and sMEC to the hippocampus proper [11]. After learning, the CA3-CA1 space representation consists of a population of localized overlapping place fields (Fig. 2(b)) covering the two-dimensional workspace densely. Fig. 3(a) shows an example of the distribution of CA3-CA1 place cells after learning. In this experiment the robot, starting from an empty population, recruited about 1000 CA3-CA1 place cells. In order to interpret the information represented by the ensemble CA3-CA1 pattern of activity, we employ population vector coding [13, 14]. Let s be the unknown robot's location, r_i(s) the firing activity of a CA3-CA1 place cell i, and s_i the center of its place field. The population vector p is given by the center of mass of the network activity:

p = \frac{\sum_i s_i\, r_i(s)}{\sum_i r_i(s)}.

The approximation p ≈ s is good for large neural populations covering the environment densely and uniformly [15]. In Fig. 3(a) the center of mass coding for the robot's location is represented by the black cross.

Figure 3: (a) The ensemble activity of approximately 1000 CA3-CA1 place cells created by the robot during spatial learning. Each dot is the center of a CA3-CA1 place cell. The lighter a cell, the higher its firing rate. The black cross is the center of mass of the ensemble activity. (b) Vector field representation of a navigational map learned after 5 trials. The target area (about 2.5 times the area occupied by the robot) is the upper-left corner of the arena.
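The population-vector readout is a one-line computation; a small sketch with made-up place-field centres and Gaussian-shaped rates:

import numpy as np

# Population-vector decoding of position from CA3-CA1 activity: the
# centre of mass of the place-field centres, weighted by firing rates.
# All numbers below are illustrative.
centres = np.random.default_rng(5).uniform(0.0, 0.8, size=(1000, 2))  # s_i
true_pos = np.array([0.3, 0.5])
rates = np.exp(-np.sum((centres - true_pos) ** 2, axis=1) / (2 * 0.05 ** 2))

p = (rates[:, None] * centres).sum(axis=0) / rates.sum()  # approx. true_pos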
4 Action learning: Goal-oriented navigation

The above spatial model enables the robot to localize itself within the environment. To support cognitive spatial behavior [1], the hippocampal circuit must also allow the robot to learn navigational maps autonomously. Our CA3-CA1 population provides an incrementally learned coarse coding representation suitable for applying reinforcement learning to continuous high-dimensional state spaces.

Learning an action-value function over a continuous location space endows the system with spatial generalization capabilities. We apply a Q(λ) learning scheme [16] to build navigational maps [17, 18]. The overlapping localized CA3-CA1 place fields provide a natural set of basis functions that can be used to learn a parameterized form of the Q(λ) function [19]. Note that we do not have to choose parameters like the width and location of the basis functions. Rather, the basis functions are created automatically by unsupervised learning. Our representation also solves the problem of ambiguous input or partially hidden states [12]; therefore the current state is fully known to the system and reinforcement learning can be applied in a straightforward manner.

Let r_i denote the activation of a CA3-CA1 place cell i. Each state s is encoded by the ensemble place cell activity vector \vec{r}(s) = (r_1(s), r_2(s), ..., r_n(s)), where n is the number of created place cells. The state-action value function Q_w(s, a) is of the form

Q_w(s, a) = (\vec{w}^a)^T \vec{r}(s) = \sum_{i=1}^{n} w_i^a\, r_i(s),  (5)

where (s, a) is the state-action pair, and \vec{w}^a = (w_1^a, ..., w_n^a) is an adjustable parameter vector. The learning task consists of updating the weight vector \vec{w}^a to approximate the optimal function Q^*(s, a). The state-value prediction error is defined by

\delta_t = R_{t+1} + \gamma \max_a Q_t(s_{t+1}, a) - Q_t(s_t, a_t),  (6)

where R_{t+1} is the immediate reward, and 0 ≤ γ ≤ 1 is a constant discounting factor. At each time step the weight vector \vec{w}^a changes according to

\vec{w}^a_{t+1} = \vec{w}^a_t + \alpha\, \delta_t\, \vec{e}_t,  (7)

where 0 ≤ α ≤ 1 is a constant learning rate parameter, and \vec{e}_t is the eligibility trace vector. During learning, the exploitation-exploration trade-off is determined by an ε-greedy policy, with 0 ≤ ε ≤ 1. As a consequence, at each step t the agent may either behave greedily (exploitation) with probability 1 − ε, by selecting the best action a*_t with respect to the Q-value functions, a*_t = argmax_a Q_t(s_t, a), or resort to uniform random action selection (exploration) with probability ε. The update of the eligibility trace depends on whether the robot selects an exploratory or an exploiting action. Specifically, the vector \vec{e}_t changes according to (we start with \vec{e}_0 = \vec{0})

\vec{e}_t = \vec{r}(s_t) + \begin{cases} \gamma \lambda\, \vec{e}_{t-1} & \text{if exploiting,} \\ 0 & \text{if exploring,} \end{cases}  (8)

where 0 ≤ λ ≤ 1 is a trace-decay parameter [19], and \vec{r}(s_t) is the CA3-CA1 activity vector.

Figure 4: Two samples of learned navigational maps. The obstacle (dark grey object) is "transparent" with respect to vision, while it is detectable by the robot's infrared sensors. (a) The map learned by the robot after 20 training paths. (b) The map learned by the robot after 80 training trials.
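A minimal sketch of one Q(λ) update over the place-cell basis, Eqs. (5)-(8). The environment, reward, and activity vectors are placeholders, and keeping one trace vector per action is an assumption of the sketch (the text leaves the per-action bookkeeping implicit).

import numpy as np

rng = np.random.default_rng(6)
n, n_actions = 1000, 4
W = np.zeros((n_actions, n))                  # one weight vector per action
E = np.zeros((n_actions, n))                  # eligibility traces
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1  # illustrative parameters

def q_values(r):                              # Eq. (5): Q(s, a) = w_a . r(s)
    return W @ r

def q_lambda_step(r_t, a_t, reward, r_next, exploring):
    global W, E
    delta = reward + gamma * q_values(r_next).max() - q_values(r_t)[a_t]  # Eq. (6)
    E *= 0.0 if exploring else gamma * lam    # Eq. (8): traces cut when exploring
    E[a_t] += r_t                             # accumulate on the chosen action
    W += alpha * delta * E                    # Eq. (7)

r_t, r_next = rng.random(n), rng.random(n)    # stand-ins for place-cell activity
exploring = rng.random() < eps                # epsilon-greedy action selection
a_t = int(rng.integers(n_actions)) if exploring else int(q_values(r_t).argmax())
q_lambda_step(r_t, a_t, reward=0.0, r_next=r_next, exploring=exploring)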
Learning consists of a sequence of training paths starting at random positions and determined by the ε-greedy policy. When the robot reaches the target, a new training path begins at a new random position. Fig. 3(b) shows an example of a navigational map learned after 5 training trials. Fig. 4 shows some results obtained by adding an obstacle within the arena after the place field representation has been learned. The map of Fig. 4(a) has been learned after 20 training paths. It contains proper goal-oriented information, whereas it does not provide accurate obstacle avoidance¹. Fig. 4(b) displays a navigational map learned by the robot after 80 training paths. Due to the longer training, the map provides both appropriate goal-oriented and obstacle avoidance behavior. The vector field representations of Figs. 3(b) and 4 have been obtained by rastering uniformly over the environment. Many of the sampled locations were not visited by the robot during training, which confirms the generalization capabilities of the method. That is, the robot was able to associate appropriate goal-oriented actions to never experienced spatial positions.

Reinforcement learning takes long training time when applied directly to high-dimensional input spaces [19]. We have shown that by means of an appropriate state space representation, based on localized overlapping place fields, the robot can learn goal-oriented behavior after only 5 training trials (without obstacles). This is similar to the escape platform learning time of rats in the Morris water-maze [20].

¹Note that this does not really impair the robot's goal-oriented behavior, since obstacle avoidance is supported by a low-level reactive module driven by infrared sensors.

Acknowledgments

Supported by the Swiss National Science Foundation, project nr. 21-49174.96.

References

[1] J. O'Keefe and L. Nadel. The Hippocampus as a Cognitive Map. Clarendon Press, Oxford, 1978.
[2] J.S. Taube, R.U. Muller, and J.B. Ranck Jr. Head direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience, 10:420-435, 1990.
[3] J. O'Keefe and N. Burgess. Geometric determinants of the place fields of hippocampal neurons. Nature, 381:425-428, 1996.
[4] M.O. Franz, B. Schölkopf, H.A. Mallot, and H.H. Bülthoff. Learning view graphs for robot navigation. Autonomous Robots, 5:111-125, 1998.
[5] J.J. Knierim, H.S. Kudrimoti, and B.L. McNaughton. Place cells, head direction cells, and the learning of landmark stability. Journal of Neuroscience, 15:1648-1659, 1995.
[6] F. Smeraldi, J. Bigün, and W. Gerstner. On the role of dimensionality in face authentication. In Proceedings of the Symposium of the Swedish Society for Automated Image Analysis, Halmstad (Sweden), pages 87-91. Halmstad University, Sweden, 2000.
[7] D. Gabor. Theory of communication. Journal of the IEE, 93:429-457, 1946.
[8] J.G. Daugman. Two-dimensional spectral analysis of cortical receptive field profiles. Vision Research, 20:847-856, 1980.
[9] F. Smeraldi, N. Capdevielle, and J. Bigün. Facial features detection by saccadic exploration of the Gabor decomposition and support vector machines. In Proceedings of the 11th Scandinavian Conference on Image Analysis (SCIA 99), Kangerlussuaq, Greenland, pages 39-44, 1999.
[10] A. Arleo and W. Gerstner. Modeling rodent head-direction cells and place cells for spatial learning in bio-mimetic robotics. In J.-A. Meyer, A. Berthoz, D. Floreano, H. Roitblat, and S.W. Wilson, editors, From Animals to Animats VI, pages 236-245, Cambridge MA, 2000. MIT Press.
[11] A. Arleo and W. Gerstner.
Spatial cognition and neuro-mimetic navigation: A model of hippocampal place cell activity. Biological Cybernetics, Special Issue on Navigation in Biological and Artificial Systems, 83:287-299, 2000.
[12] R.A. McCallum. Hidden state and reinforcement learning with instance-based state identification. IEEE Transactions on Systems, Man, and Cybernetics, 26(3):464-473, 1996.
[13] A.P. Georgopoulos, A. Schwartz, and R.E. Kettner. Neuronal population coding of movement direction. Science, 233:1416-1419, 1986.
[14] M.A. Wilson and B.L. McNaughton. Dynamics of the hippocampal ensemble code for space. Science, 261:1055-1058, 1993.
[15] E. Salinas and L.F. Abbott. Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1:89-107, 1994.
[16] C.J.C.H. Watkins. Learning from Delayed Rewards. PhD thesis, University of Cambridge, England, 1989.
[17] P. Dayan. Navigating through temporal difference. In R.P. Lippmann, J.E. Moody, and D.S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 464-470. Morgan Kaufmann, San Mateo, CA, 1991.
[18] D.J. Foster, R.G.M. Morris, and P. Dayan. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus, 10(1):1-16, 2000.
[19] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press / Bradford Books, Cambridge, Massachusetts, 1998.
[20] R.G.M. Morris, P. Garrud, J.N.P. Rawlins, and J. O'Keefe. Place navigation impaired in rats with hippocampal lesions. Nature, 297:681-683, 1982.
The Kernel Gibbs Sampler

Thore Graepel
Statistics Research Group, Computer Science Department
Technical University of Berlin, Berlin, Germany
guru@cs.tu-berlin.de

Ralf Herbrich
Statistics Research Group, Computer Science Department
Technical University of Berlin, Berlin, Germany
ralfh@cs.tu-berlin.de

Abstract

We present an algorithm that samples the hypothesis space of kernel classifiers. A uniform prior over normalised weight vectors together with a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking label noise into account.

1 Introduction

Two great ideas have dominated recent developments in machine learning: the application of kernel methods and the popularisation of Bayesian inference. Focusing on the task of classification, various connections between the two areas exist: kernels have long been a part of Bayesian inference in the disguise of covariance functions that characterise priors over functions [9]. Also, attempts have been made to re-derive the support vector machine (SVM) [1], possibly the most prominent representative of kernel methods, as a maximum a-posteriori (MAP) estimator in a Bayesian framework [8]. While this work suggests good strategies for evidence-based model selection, the MAP estimator is not truly Bayesian in spirit because it is not based on the concept of model averaging, which is crucial to Bayesian reasoning. As a consequence, the MAP estimator is generally not as robust as a real Bayesian estimator. While this drawback is inconsequential in a noise-free setting or in a situation dominated by feature noise, it may have severe consequences when the data is contaminated by label noise, which may lead to a multi-modal posterior distribution.

In order to make use of the full Bayesian posterior distribution it is necessary to generate samples from this distribution. This contribution is concerned with the generation of samples from the Bayesian posterior over the hypothesis space of
The resulting samples can be used in various ways: i) In Bayesian transduction [3] the decision about the labels of new test points can be inferred by a majority decision of the sampled classifiers. ii) The posterior mean - the Bayes point machine (BPM) solution [4] - can be calculated as an approximation to transduction. iii) The binary entropy of candidate training points can be calculated to determine their information content for active learning [2]. iv) The model evidence [5] can be evaluated for the purpose of model selection. We would like to point out, however, that the KGS is limited in practice to a sample size of m ~ 100 and should thus be thought of as an analytical tool to advance our understanding of the interaction of kernel methods and Bayesian reasoning. The paper is structured as follows: in Section 2 we introduce the learning scenario and explain our Bayesian approach to linear classifiers in kernel spaces. The kernel Gibbs sampler is explained in detail in Section 3. Different applications of the KGS are discussed in Section 4 followed by an experimental demonstration of the BPM solution based on using the KGS under label noise conditions. We denote n-tuples by italic bold letters (e.g. x), vectors by roman bold letters (e.g. x), random variables by sans seriffont (e.g. X), and vector spaces by calligraphic capitalised letters (e.g. X). The symbols P, E and I denote a probability measure, the expectation of a random variable and the indicator function, respectively. 2 Bayesian Learning in Kernel spaces We consider learning given a sequence x = (Xl, ... ,X m ) E xm and y = (Yl, ... Ym) E {-I, +I} m drawn iid from a fixed distribution P XY = P z over the space X x { -1, + I} = Z of input-output pairs. The hypotheses are linear classifiers X I-t (w,ifJ(x))/C =: (w,x)/C in some fixed feature space K ~ ?~ where we assume that a mapping ? : X -+ K is chosen a priori 1 . Since all we need for learning is the real-valued output (w, Xi) /C of the classifier w at the m training points in Xl, ... , Xm we can assume that w can be expressed as (see [9]) m W = LOiX;. (1) i=l Thus, it suffices to learn the m expansion coefficients a E IRm rather than the n components of w E K. This is particularly useful if the dimensionality dim (K) = n of the feature space K is much greater (or possibly infinite) than the number m of training points. From (1) we see that all that is needed is the inner product function k (x, x') = (? (x) ,? (x'))/C also known as the kernel (see [9] for a detailed introduction to the theory of kernels). lFor the sake of convenience, we sometimes abbreviate cfJ {x} by x. This, however, should not be confused with n-tuple x denoting the training objects. (a) (b) Figure 1: Illustration of the (log) posterior distribution on the surface of a 3dimensional sphere {w E Il~a IlIwllK = I} resulting from a label noise model with a label flip rate of q = 0.20 (a) m = 10, (b) m = 1000. The log posterior is plotted over the longitude and latitude, and for small sample size it is multi-modal due to the label noise. The classifier w* labelling the data (before label noise) was at (~, 11"). In a Bayesian spirit we consider a prior Pw over possible weight vectors w E W of unit length, i.e. W = {v E J( IIIvllK = I}. 
Given an iid training set z = (x, y) and a likelihood model P_{Y|X=x,W=w}, we obtain the posterior P_{W|Z=z} using Bayes' formula

P_{W|Z=z}(w) = P_{Y^m|X^m=x,W=w}(y) P_W(w) / E_W[P_{Y^m|X^m=x,W=W}(y)].   (2)

By the iid assumption and the independence of the denominator from w we obtain P_{Y^m|X^m=x,W=w}(y) = Π_{i=1}^{m} P_{Y|X=x_i,W=w}(y_i) =: L[w, z]. In the absence of specific prior knowledge, symmetry suggests taking P_W uniform on W. Furthermore, we choose the likelihood model

P_{Y|X=x,W=w}(y) = q if y⟨w, x⟩_K ≤ 0, and 1 - q otherwise,

where q specifies the assumed level of label noise. Please note the difference to the commonly assumed model of feature noise, which essentially assumes noise in the (mapped) input vectors x instead of the labels y and constitutes the basis of the soft-margin SVM [1]. Thus the likelihood L[w, z] of the weight vector w is given by

L[w, z] = q^{m·R_emp[w,z]} (1 - q)^{m(1 - R_emp[w,z])},   (3)

where the training error R_emp[w, z] is defined as

R_emp[w, z] = (1/m) Σ_{i=1}^{m} I_{y_i⟨w,x_i⟩_K ≤ 0}.

Figure 2: Schematic view of the kernel Gibbs sampling procedure. Two data points y_1x_1 and y_2x_2 divide the space of normalised weight vectors W into four equivalence classes with different posterior density, indicated by the gray shading. In each iteration, starting from w_{j-1}, a random direction v with v ⊥ w_{j-1} is generated. We sample from the piecewise constant density on the great circle determined by the plane defined by w_{j-1} and v. In order to obtain ζ*, we calculate the 2m angles ζ_i where the training samples intersect with the circle and keep track of the number m·e_i of training errors for each region i.

Clearly, the posterior P_{W|Z=z} is piecewise constant for all w ∈ W with equal training error R_emp[w, z] (see Figure 1).

3 The Kernel Gibbs Sampler

In order to sample from P_{W|Z=z} on W we suggest a Markov chain sampling method. For a given value of q, the sampling scheme can be decomposed into the following steps (see Figure 2):

1. Choose an arbitrary starting point w_0 ∈ W and set j = 0.
2. Choose a direction v ∈ W in the tangent space {v ∈ W | ⟨v, w_j⟩_K = 0}.
3. Calculate all m hit points b_i ∈ W from w_j in direction v with the hyperplane having normal y_i x_i. Before normalisation, this is achieved by [4]
   b_i = w_j - (⟨w_j, x_i⟩_K / ⟨v, x_i⟩_K) v.
4. Calculate the 2m angular distances ζ_i from the current position w_j:
   ∀i ∈ {1, ..., m}: ζ_{2i-1} = -sign(⟨v, b_i⟩_K) arccos(⟨w_j, b_i⟩_K),
   ∀i ∈ {1, ..., m}: ζ_{2i} = (ζ_{2i-1} + π) mod 2π.
5. Sort the ζ_i in ascending order, i.e. find a permutation Π : {1, ..., 2m} → {1, ..., 2m} such that ∀i ∈ {2, ..., 2m}: ζ_{Π(i-1)} ≤ ζ_{Π(i)}.
6. Calculate the training errors e_i of the 2m intervals [ζ_{Π(i)}, ζ_{Π(i+1)}] by evaluating
   e_i = R_emp[cos((ζ_{Π(i+1)} + ζ_{Π(i)})/2) w_j - sin((ζ_{Π(i+1)} + ζ_{Π(i)})/2) v, z].
   Here, we used the shorthand notation ζ_{Π(2m+1)} = ζ_{Π(1)}.
7. Sample an angle ζ* using the piecewise uniform distribution and (3).
8. Calculate a new sample w_{j+1} by w_{j+1} = cos(ζ*) w_j - sin(ζ*) v.
9. Set j ← j + 1 and go back to step 2.

Since the algorithm is carried out in feature space K, we can use

w = Σ_{i=1}^{m} α_i x_i,   v = Σ_{i=1}^{m} ν_i x_i,   b = Σ_{i=1}^{m} β_i x_i.

For the inner products and norms it follows that, e.g., ⟨w, v⟩_K = α'Gν and ||w||²_K = α'Gα, where the m × m matrix G is known as the Gram matrix and is given by

G_ij = ⟨x_i, x_j⟩_K = k(x_i, x_j).

As a consequence the above algorithm can be implemented in arbitrary kernel spaces making use only of k.
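The steps above translate directly into coefficient space. The following Python sketch of a single hit-and-run step is our own illustration under the assumptions stated in the text (precomputed Gram matrix G, labels y in {-1, +1}); for larger m the arc weights should be accumulated in log-space to avoid underflow.

```python
import numpy as np

def kernel_gibbs_step(alpha, G, y, q, rng):
    """One hit-and-run step of the kernel Gibbs sampler (steps 2-9), in
    coefficient space: w = sum_i alpha_i phi(x_i), with ||w||_K = 1."""
    m = len(y)

    def normalize(c):
        return c / np.sqrt(c @ G @ c)

    def train_error(c):
        # R_emp of the classifier with expansion coefficients c
        return np.mean(y * (G @ c) <= 0)

    # step 2: random direction v with <v, w_j>_K = 0 and ||v||_K = 1
    nu = rng.standard_normal(m)
    nu -= (nu @ G @ alpha) * alpha
    nu = normalize(nu)

    # steps 3-4: angles where the great circle crosses the hyperplanes y_i x_i
    zetas = []
    for i in range(m):
        beta = alpha - ((G @ alpha)[i] / (G @ nu)[i]) * nu   # hit point b_i
        beta = normalize(beta)
        z = -np.sign(nu @ G @ beta) * np.arccos(np.clip(alpha @ G @ beta, -1.0, 1.0))
        zetas += [z % (2 * np.pi), (z + np.pi) % (2 * np.pi)]

    # steps 5-6: sort the angles, evaluate the error of each arc at its midpoint
    bounds = np.sort(zetas)
    bounds = np.append(bounds, bounds[0] + 2 * np.pi)
    mids = (bounds[:-1] + bounds[1:]) / 2
    lengths = np.diff(bounds)
    errs = np.array([train_error(np.cos(t) * alpha - np.sin(t) * nu) for t in mids])

    # step 7: arc weight = likelihood (3) times arc length; sample an arc, then
    # a uniform angle within it
    weights = q ** (m * errs) * (1 - q) ** (m * (1 - errs)) * lengths
    k = rng.choice(len(weights), p=weights / weights.sum())
    zeta_star = bounds[k] + rng.uniform(0, lengths[k])

    # step 8: rotate w_j by zeta_star towards v to obtain w_{j+1}
    return normalize(np.cos(zeta_star) * alpha - np.sin(zeta_star) * nu)
```

Iterating kernel_gibbs_step from a normalised starting coefficient vector yields the samples w_j used in the applications below.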
4 Applications of the Kernel Gibbs Sampler

The kernel Gibbs sampler provides samples from the full posterior distribution over the hypothesis space of linear classifiers in kernel space for the case of label noise. These samples can be used for various tasks related to learning. In the following we present a selection of these tasks.

Bayesian Transduction  Given a sample from the posterior distribution over hypotheses, a good strategy for prediction is to let the sampled classifiers vote on each new test data point. This mode of prediction is closest to the Bayesian spirit and has been shown for the zero-noise case to yield excellent generalisation performance [3]. Also, the fraction of votes for the majority decision is an excellent indicator of the reliability of the final estimate: rejection of those test points with the closest decision results in a great reduction of the generalisation error on the remaining test points x. Given the posterior P_{W|Z=z}, the transductive decision is

BT_z(x) = sign(E_{W|Z=z}[sign(⟨W, x⟩_K)]).   (4)

In practice, this estimator is approximated by replacing the expectation E_{W|Z=z} by a sum over the sampled weight vectors w_j.

Bayes Point Machines  For classification, Bayesian Transduction requires the whole collection of sampled weight vectors w_j in memory. Since this may be impractical for large data sets, we would like to derive a single classifier w from the Bayesian posterior. An excellent approximation of the transductive decision BT_z(x) by a single classifier is obtained by exchanging the expectation with the inner sign-function in (4). Then the classifier h_bp is given by

h_bp(x) = sign(⟨w_bp, x⟩_K),   w_bp = E_{W|Z=z}[W],   (5)

where the classifier w_bp is referred to as the Bayes point and has been shown to yield generalisation performance superior to the well-known support vector solution w_SVM, which, in turn, can be looked upon as an approximation to w_bp in the noise-free case [4]. Again, w_bp is estimated by replacing the expectation by the mean over samples w_j. Note that there exists no SVM equivalent w_SVM to the Bayes point w_bp in the case of label noise, a fact to be elaborated on in the experimental part in Section 5.

Figure 3: A set of 50 samples w_j of the posterior P_{W|Z=z} for various noise levels q = 0.0, q = 0.1, q = 0.2. Shown are the resulting decision boundaries in data space X.

Active Learning  The Bayesian posterior can also be employed to determine the usefulness of candidate training points, a task that can be considered as a dual counterpart to Bayesian Transduction. This is particularly useful when the label y of a training point x is more expensive to obtain than the training point x itself. It was shown in the context of "Query by Committee" [2] that the binary entropy

S(x, z) = -p⁺ log₂ p⁺ - p⁻ log₂ p⁻,  with  p± = P_{W|Z=z}(±⟨W, x⟩_K > 0),

is an indicator of the information content of a data point x with regard to the learning task. Samples w_j from the Bayesian posterior P_{W|Z=z} make it possible to estimate S for a given candidate training point x and the current training set z, in order to decide on the basis of S whether it is worthwhile to query the corresponding label y.

Evidence Estimation for Model Selection  Bayesian model selection is often based on a quantity called the evidence [5] of the model (given by the denominator of (2)). In the PAC-Bayesian framework this quantity has been demonstrated to be responsible for the generalisation performance of a model [6]. It turns out that in the zero-noise case the margin (the quantity maximised by the SVM) is a measure of the evidence of the model used [4]. In the case of label noise the KGS serves to estimate this quantity.
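As a hedged sketch of estimators (4) and (5), reusing the classify helper and the coefficient-space samples from the earlier sketches (all names are ours):

```python
def transduce(samples, X_train, x, kernel):
    """Bayesian transduction, eq. (4): majority vote of the sampled classifiers."""
    votes = [np.sign(classify(alpha, X_train, x, kernel)) for alpha in samples]
    return np.sign(np.mean(votes))

def bayes_point(samples):
    """Bayes point, eq. (5): posterior mean, approximated by averaging the
    expansion coefficients (valid since w is linear in alpha); the result is
    not unit-norm, which does not affect the induced classification."""
    return np.mean(samples, axis=0)
```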
5 Experiments

In a first experiment we used a surrogate dataset of m = 76 data points x in X = ℝ² and the kernel k(x, x') = exp(-||x - x'||²₂ / 2). Using the KGS we sampled 50 different classifiers with weight vectors w_j for various noise levels q and plotted the resulting decision boundaries {x ∈ ℝ² | ⟨w_j, x⟩_K = 0} in Figure 3 (circles and crosses depict different classes). As can be seen from these plots, increasing the noise level q leads to more diverse classifiers on the training set z.

In a second experiment we investigated the generalisation performance of the Bayes point machine (see (5)) in the case of label noise. In ℝ³ we generated 100 random training and test sets of size m_train = 100 and m_test = 1000, respectively. For each normalised point x ∈ ℝ³ the longitude and latitude were sampled from a Beta(5, 5) and a Beta(0.1, 0.1) distribution, respectively. The classes y were obtained by randomly flipping the classes assigned by the classifier w* at (~, π) (see also Figure 1), with a true label flip rate of q* = 5%. In Figure 4 we plotted the estimated generalisation error for a BPM (trained using 100 samples w_j from the KGS) and a quadratic soft-margin SVM at different label noise levels q and margin slack penalisations λ, respectively. Clearly, the BPM with the correct noise model outperformed the SVM irrespective of the chosen level of regularisation. Interestingly, the BPM appears to be quite "robust" w.r.t. the choice of the label noise parameter q.

Figure 4: Comparison of BPMs and SVMs on data contaminated by label noise. Generalisation errors of BPMs (circled error-bars) and soft-margin SVMs (triangled error-bars) vs. assumed noise level q and margin slack penalisation λ, respectively. The dataset consisted of m = 100 observations with a label noise of 5% (dotted line) and we used k(x, x') = ⟨x, x'⟩_X + λ·I_{x=x'}. Note that the abscissa is jointly used for q and λ.

6 Conclusion and Future Research

The kernel Gibbs sampler provides an analytical tool for the exploration of various Bayesian aspects of learning in kernel spaces. It provides a well-founded way of dealing with label noise but suffers from its computational complexity, which so far makes it inapplicable to large scale applications. Therefore it will be an interesting topic for future research to invent new sampling schemes that may be able to trade accuracy for speed and would thus be applicable to large data sets.

Acknowledgements  This work was partially done while RH and TG were visiting Robert C. Williamson at the ANU Canberra. Thanks, Bob, for your great hospitality!

References
[1] C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297, 1995.
[2] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28:133-168, 1997.
[3] T. Graepel, R. Herbrich, and K. Obermayer. Bayesian Transduction. In Advances in Neural Information Processing Systems 12, pages 456-462, 2000.
[4] R. Herbrich, T. Graepel, and C. Campbell. Bayesian learning in reproducing kernel Hilbert spaces. Technical report, Technical University of Berlin, 1999. TR 99-11.
[5] D. MacKay. The evidence framework applied to classification networks. Neural Computation, 4(5):720-736, 1992.
[6] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin, 1998.
[7] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical report, Dept.
of Computer Science, University of Toronto, 1993. CRG-TR-93-1.
[8] P. Sollich. Probabilistic methods for Support Vector Machines. In Advances in Neural Information Processing Systems 12, pages 349-355, San Mateo, CA, 2000. Morgan Kaufmann.
[9] G. Wahba. Support Vector Machines, Reproducing Kernel Hilbert Spaces and the randomized GACV. Technical report, Department of Statistics, University of Wisconsin, Madison, 1997. TR No. 984.
Error-correcting Codes on a Bethe-like Lattice

Renato Vicente, David Saad
The Neural Computing Research Group, Aston University, Birmingham B4 7ET, United Kingdom
{vicenter,saadd}@aston.ac.uk

Yoshiyuki Kabashima
Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502, Japan
kaba@dis.titech.ac.jp

Abstract

We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. We find a thermodynamic phase transition that coincides with information theoretic upper bounds and explains the practical code performance in terms of the free-energy landscape.

1 Introduction

In the last years increasing interest has been devoted to the application of mean-field techniques to inference problems. There are many different ways of building mean-field theories. One can make a perturbative expansion around a tractable model [1,2], or assume a tractable structure and variationally determine the model parameters [3]. Error-correcting codes (ECC) are particularly interesting examples of inference problems in loopy intractable graphs [4]. Recently the focus has been directed to the state-of-the-art high performance turbo codes [5] and to Gallager and MN codes [6,7]. Statistical physics has been applied to the analysis of ECCs as an alternative to information theory methods, yielding some new interesting directions and suggesting new high-performance codes [8].

Sourlas was the first to relate error-correcting codes to spin glass models [9], showing that the Random-Energy Model [10] can be thought of as an ideal code capable of saturating Shannon's bound at vanishing code rates. This work was extended recently to the case of finite code rates [11] and has been further developed for analyzing MN codes of various structures [12]. All of the analyses mentioned above, as well as the recent turbo codes analysis [13], relied on the replica approach under the assumption of replica symmetry. To date, the only model that can be analyzed exactly is the REM, which corresponds to an impractical coding scheme of a vanishing code rate.

Here we present a statistical physics treatment of non-structured Gallager codes by employing a mean-field approximation based on the use of a generalized tree structure (Bethe lattice [14]) known as a Husimi cactus, which is exactly solvable. The model parameters are simply assumed to be those of the model with cycles. In this framework the probability propagation decoding algorithm (PP) emerges naturally, providing an alternative view of the relationship between PP decoding and mean-field approximations already observed in [15]. Moreover, this approach has the advantage of being slightly more controlled and easier to understand than replica calculations.

This paper is organized as follows: in the next section we present unstructured Gallager codes and the statistical physics framework to analyze them; in section 3 we make use of the lattice geometry to solve the model exactly; in section 4 we analyze the typical code performance. We summarize the results in section 5.

2 Gallager codes: statistical physics formulation

We will concentrate here on a simple communication model whereby messages are represented by binary vectors and are communicated through a Binary Symmetric Channel (BSC) where uncorrelated bit flips appear with probability f.
A Gallager code is defined by a binary matrix A = [C₁ | C₂], concatenating two very sparse matrices known to both sender and receiver, with C₂ (of dimensionality (M - N) × (M - N)) being invertible; the matrix C₁ is of dimensionality (M - N) × N. Encoding refers to the production of an M-dimensional binary code word t ∈ {0,1}^M (M > N) from the original message ξ ∈ {0,1}^N by t = Gᵀξ (mod 2), where all operations are performed in the field {0,1} and are indicated by (mod 2). The generator matrix is G = [1 | C₂⁻¹C₁] (mod 2), where 1 is the N × N identity matrix, implying that AGᵀ (mod 2) = 0 and that the first N bits of t are set to the message ξ. In regular Gallager codes the number of non-zero elements in each row of A is chosen to be exactly K. The number of elements per column is then C = (1 - R)K, where the code rate is R = N/M (for unbiased messages). The encoded vector t is then corrupted by noise represented by the vector ζ ∈ {0,1}^M with components independently drawn from P(ζ) = (1 - f)δ(ζ) + f δ(ζ - 1). The received vector takes the form r = Gᵀξ + ζ (mod 2).

Decoding is carried out by multiplying the received message by the matrix A to produce the syndrome vector z = Ar = Aζ (mod 2), from which an estimate τ̂ for the noise vector can be produced. An estimate for the original message is then obtained as the first N bits of r + τ̂ (mod 2). The Bayes optimal estimator (also known as marginal posterior maximizer, MPM) for the noise is defined as τ̂_j = argmax_{τ_j} P(τ_j | z), where τ_j ∈ {0,1}. The performance of this estimator can be measured by the probability of bit error P_b = 1 - (1/M) Σ_{j=1}^{M} δ[τ̂_j; ζ_j], where δ[;] is the Kronecker delta. Knowing the matrices C₂ and C₁, the syndrome vector z and the noise level f, it is possible to apply Bayes' theorem and compute the posterior probability

P(τ | z) = (1/Z) χ[z = Aτ (mod 2)] P(τ),   (1)

where χ[X] is an indicator function providing 1 if X is true and 0 otherwise. To compute the MPM one has to compute the marginal posterior P(τ_j | z) = Σ_{{τ_i : i≠j}} P(τ | z), which in general requires O(2^M) operations, thus becoming impractical for long messages. To solve this problem one can use the sparseness of A to design algorithms that require O(M) operations to perform the same task. One of these methods is the probability propagation algorithm (PP), also known as belief propagation or sum-product algorithm [16]. The connection to statistical physics becomes clear when the field {0,1} is replaced by Ising spins {±1} and mod-2 sums by products [9]. The syndrome vector acquires the form of a multi-spin coupling J_μ = Π_{j∈L(μ)} ζ_j, where j = 1, ..., M and μ = 1, ..., (M - N).

Figure 1: Husimi cactus with K = 3 and connectivity C = 4.

The K indices of nonzero elements in the row μ of a matrix A, which is not necessarily a concatenation of two separate matrices (therefore defining an unstructured Gallager code), are given by L(μ) = {j₁, ..., j_K}, and in a column l are given by M(l) = {μ₁, ..., μ_C}. The posterior (1) can be written as the Gibbs distribution [12]:

P_β(τ | J) = (1/Z) lim_{γ→∞} exp[-β H_γ(τ; J)],
H_γ(τ; J) = -γ Σ_{μ=1}^{M-N} (J_μ Π_{j∈L(μ)} τ_j - 1) - F Σ_{j=1}^{M} τ_j.   (2)

The external field corresponds to the prior probability over the noise and has the form F = atanh(1 - 2f). Note that the Hamiltonian depends on a hyper-parameter γ that has to be set as γ → ∞ for optimal decoding. The disorder is trivial and can be gauged as J_μ ↦ 1 by using τ_j ↦ τ_j ζ_j.
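As a toy illustration of the syndrome computation z = Aζ (mod 2), here is a short sketch; the random matrix below is a stand-in of ours and does not satisfy the sparsity and invertibility constraints of an actual regular Gallager code.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 12, 4                                # toy sizes: M codeword bits, N message bits
A = rng.integers(0, 2, size=(M - N, M))     # stand-in for the sparse matrix [C1 | C2]

f = 0.1                                     # flip probability of the BSC
zeta = (rng.random(M) < f).astype(int)      # noise vector
z = A @ zeta % 2                            # syndrome z = A*zeta (mod 2)
```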
The resulting Hamiltonian is a multi-spin ferromagnet with finite connectivity in a random field h_j = Fζ_j. The decoding process corresponds to finding local magnetizations at temperature β = 1, m_j = ⟨τ_j⟩_{β=1}, and calculating estimates as τ̂_j = sgn(m_j). In the {±1} representation the probability of bit error acquires the form

P_b = 1/2 - (1/2M) Σ_{j=1}^{M} ζ_j sgn(m_j),   (3)

connecting the code performance with the computation of local magnetizations.

3 Bethe-like Lattice calculation

3.1 Generalized Bethe lattice: the Husimi cactus

A Husimi cactus with connectivity C is generated starting with a polygon of K vertices with one Ising spin in each vertex (generation 0). All spins in a polygon interact through a single coupling J_μ and one of them is called the base spin. In figure 1 we show the first step in the construction of a Husimi cactus; in a generic step the base spins of the generation-(n-1) polygons, numbering (C-1)(K-1), are attached to K-1 vertices of a generation-n polygon. This process is iterated until a maximum generation n_max is reached, and the graph is then completed by attaching C uncorrelated branches of n_max generations at their base spins. In that way each spin inside the graph is connected to exactly C polygons. The local magnetization at the centre, m_j, can be obtained by fixing boundary (initial) conditions in the 0-th generation and iterating recursion equations until generation n_max is reached. Carrying out the calculation in the thermodynamic limit corresponds to having n_max ~ ln M generations and M → ∞. The Hamiltonian of the model has the form (2), where L(μ) denotes the polygon μ of the lattice. Due to the tree-like structure, local quantities far from the boundary can be calculated recursively by specifying boundary conditions. The typical decoding performance can therefore be computed exactly without resorting to replica calculations [17].

3.2 Recursion relations: probability propagation

We adopt the approach presented in [18], where the recursion relation for the probability distribution P_{μk}(τ_k) of the base spin of polygon μ involves the (C-1)(K-1) distributions P_{νj}(τ_j), with ν ∈ M(j)\μ (all polygons linked to j but μ), of polygons in the previous generation:

P_{μk}(τ_k) = (1/N) Tr_{{τ_j}} exp[β (J_μ τ_k Π_{j∈L(μ)\k} τ_j - 1) + F τ_k] Π_{j∈L(μ)\k} Π_{ν∈M(j)\μ} P_{νj}(τ_j),   (4)

where the trace is over the spins τ_j such that j ∈ L(μ)\k and N is a normalization. The effective field x_{νj} on a base spin j due to neighbors in polygon ν can be written as

exp(-2x_{νj}) = e^{2F} P_{νj}(-) / P_{νj}(+).   (5)

Combining (4) and (5) one finds the recursion relation

exp(-2x_{μk}) = Tr_{{τ}} exp[-β J_μ Π_{j∈L(μ)\k} τ_j + Σ_{j∈L(μ)\k} (F + Σ_{ν∈M(j)\μ} x_{νj}) τ_j] / Tr_{{τ}} exp[+β J_μ Π_{j∈L(μ)\k} τ_j + Σ_{j∈L(μ)\k} (F + Σ_{ν∈M(j)\μ} x_{νj}) τ_j].   (6)

By computing the traces and taking β → ∞ one obtains

x_{μk} = atanh[J_μ Π_{j∈L(μ)\k} tanh(F + Σ_{ν∈M(j)\μ} x_{νj})].   (7)

The effective local magnetization due to interactions with the nearest neighbors in one branch is given by m̂_{μj} = tanh(x_{μj}). The effective local field on a base spin j of a polygon μ due to the C-1 branches in the previous generation and due to the external field is X_{μj} = F + Σ_{ν∈M(j)\μ} x_{νj}; the effective local magnetization is, therefore, m_{μj} = tanh(X_{μj}).
Equation (7) can then be rewritten in terms of m_{μj} and m̂_{μj}, and the PP equations [7,15,16] can be recovered:

m_{μk} = tanh(F + Σ_{ν∈M(k)\μ} atanh(m̂_{νk})),   m̂_{μk} = J_μ Π_{j∈L(μ)\k} m_{μj}.   (8)

Once the magnetizations on the boundary (0-th generation) are assigned, the local magnetization m_j at the central site is determined by iterating (8) and computing

m_j = tanh(F + Σ_{ν∈M(j)} atanh(m̂_{νj})).   (9)

3.3 Probability propagation as extremization of a free-energy

The equations (8) describing PP decoding represent extrema of the following free-energy:

F({m_{μk}, m̂_{μk}}) = Σ_{μ=1}^{M-N} Σ_{i∈L(μ)} ln(1 + m_{μi} m̂_{μi}) - Σ_{μ=1}^{M-N} ln(1 + J_μ Π_{i∈L(μ)} m_{μi}) - Σ_{j=1}^{M} ln[e^{F} Π_{μ∈M(j)} (1 + m̂_{μj}) + e^{-F} Π_{μ∈M(j)} (1 - m̂_{μj})].   (10)

The iteration of the maps (8) is actually one out of many different methods of finding extrema of this free-energy (not necessarily stable ones). This observation opens an alternative way of analyzing the performance of a decoding algorithm: studying the landscape (10).

Figure 2: (a) Mean normalized overlap between the actual noise vector ζ and the decoded noise τ̂ for K = 4 and C = 3 (therefore R = 1/4). Theoretical values (D), experimental averages over 20 runs for code word lengths M = 5000 (e) and M = 100 (full line). (b) Transitions for K = 6: Shannon's bound (dashed line), the information theory upper bound (full line) and the thermodynamic transition obtained numerically (0). Theoretical (0) and experimental (+, M = 5000, averaged over 20 runs) PP decoding transitions are also shown. In both figures, symbols are chosen larger than the error bars.

4 Typical performance

4.1 Macroscopic description

The typical macroscopic states of the system during decoding can be described by histograms of the variables m_{μk} and m̂_{μk} averaged over all possible realizations of the noise vector ζ. By applying the gauge transformation J_μ ↦ 1 and τ_j ↦ τ_j ζ_j, assigning a probability distribution P_0(x) to the boundary fields and averaging over the random local fields Fζ, one obtains from (7) the recursion relation in the space of probability distributions P(x):

P_{n+1}(x) = ⟨ ∫ Π_{j=1}^{K-1} Π_{ν=1}^{C-1} dx_{νj} P_n(x_{νj}) δ[x - atanh(Π_{j=1}^{K-1} tanh(Fζ_j + Σ_{ν=1}^{C-1} x_{νj}))] ⟩_ζ,   (11)

where P_n(x) is the distribution of effective fields at the n-th generation due to the previous generations and external fields; in the thermodynamic limit the distribution far from the boundary will be P_∞(x) (generation n → ∞). The local field distribution at the central site is computed by replacing C-1 by C in (11), taking into account the C polygons in the generation just before the central site, and inserting the distribution P_∞(x). Equations (11) are identical to those obtained by the replica symmetric theory as in [12].
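Recursion (11) can be iterated numerically by evolving a population of sampled fields. The sketch below is our own illustration; the boundary condition P_0(x) = δ(x) and the sample sizes are arbitrary choices.

```python
import numpy as np

def density_evolution(K, C, f, n_gen=50, n_pop=100_000, seed=0):
    """Population-dynamics iteration of (11) for a (K, C) Gallager code at
    noise level f, in gauged variables (J_mu = 1, random fields F*zeta)."""
    rng = np.random.default_rng(seed)
    F = np.arctanh(1 - 2 * f)
    x = np.zeros(n_pop)                                   # boundary: P_0(x) = delta(x)
    for _ in range(n_gen):
        # each new field: K-1 sites, each receiving C-1 fields from the population
        xs = x[rng.integers(0, n_pop, size=(n_pop, K - 1, C - 1))].sum(axis=2)
        zeta = 1 - 2 * (rng.random((n_pop, K - 1)) < f)   # +1 w.p. 1-f, -1 w.p. f
        prod = np.prod(np.tanh(F * zeta + xs), axis=1)
        x = np.arctanh(np.clip(prod, -1 + 1e-12, 1 - 1e-12))
    # central site: C incoming polygons plus its own random field
    xs = x[rng.integers(0, n_pop, size=(n_pop, C))].sum(axis=1)
    zeta = 1 - 2 * (rng.random(n_pop) < f)
    m = np.tanh(F * zeta + xs)
    return np.mean(np.sign(m))   # mean overlap, as plotted in Fig. 2a
```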
By setting an initial (boundary) condition P_0(x) and numerically iterating (11), for C ≥ 3 one can find, up to some noise level f_s, a single stable fixed point at infinite fields, corresponding to a totally aligned state (successful decoding). At f_s a bifurcation occurs and two other fixed points appear, one stable and one unstable, the former corresponding to a misaligned state (decoding failure). This situation is identical to the one observed in [12]. In terms of the free-energy (10), below f_s the landscape is dominated by the aligned state, which is the global minimum. Above f_s a sub-optimal state, corresponding to an exponentially large number of spurious local minima of the Hamiltonian (2), appears, and convergence to the totally aligned state becomes a difficult task.

At some critical noise level the totally aligned state loses the status of global minimum and the thermodynamic transition occurs. The practical PP decoding is performed by setting initial conditions as m_{μj} = 1 - 2f, corresponding to the prior probabilities, and iterating (8) until stationarity or a maximum number of iterations is attained. The estimate for the noise vector is then produced by computing τ̂_j = sign(m_j) (see the sketch at the end of this section). At each decoding step the system can be described by histograms of the variables (8); this is equivalent to iterating (11) (a similar idea was presented in [7]). Below f_s the process always converges to the successful decoding state; above f_s it converges to successful decoding only if the initial conditions are fine-tuned, and in general it converges to the failure state. In Fig. 2a we show the theoretical mean overlap between the actual noise ζ and the estimate τ̂ as a function of the noise level f, as well as results obtained with PP decoding.

Information theory provides an upper bound for the maximum attainable code rate by equalizing the maximal information contents of the syndrome vector z and of the noise estimate τ̂ [7]. The thermodynamic phase transition obtained by finding the stable fixed points of (11) and their free-energies interestingly coincides with this upper bound within the precision of the numerical calculation. Note that the performance predicted by thermodynamics is not practical, as it requires O(2^M) operations for an exhaustive search for the global minimum of the free-energy. In Fig. 2b we show the thermodynamic transition for K = 6 and compare it with the upper bound, Shannon's bound and the theoretical f_s values.

4.2 Tree-like approximation and the thermodynamic limit

The geometrical structure of a Gallager code defined by the matrix A can be represented by a bipartite graph (Tanner graph) [16] with bit and check nodes. Each column j of A represents a bit node and each row μ represents a check node; A_{μj} = 1 means that there is an edge linking bit j to check μ. It is possible to show that for a random ensemble of regular codes, the probability of completing a cycle after walking l edges starting from an arbitrary node is upper bounded by P[l; K, C, M] ≤ l² K^l / M. This implies that for very large M only cycles of at least order ln M survive. In the thermodynamic limit M → ∞ the probability P[l; K, C, M] → 0 for any finite l and the bulk of the system is effectively tree-like. By mapping each check node to a polygon with K bit nodes as vertices, one can map a Tanner graph onto a Husimi lattice that is effectively a tree for any number of generations of order less than ln M. It is experimentally observed that the number of iterations of (8) required for convergence does not scale with the system size; it is therefore expected that the interior of a tree-like lattice approximates a Gallager code with increasing accuracy as the system size increases. Fig. 2a shows that the approximation is fairly good even for sizes as small as M = 100.
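Here is the promised sketch of the practical PP decoding (initial conditions m_{μj} = 1 - 2f, iteration of (8), estimates via (9)); it is our own minimal illustration, written for clarity rather than efficiency (a real implementation would precompute adjacency lists).

```python
import numpy as np

def pp_decode(A, z, f, n_iter=50):
    """Probability propagation, eqs. (8)-(9), in the +/-1 spin representation.
    A: parity-check matrix, z: syndrome in {0,1}, f: assumed noise level.
    Returns spin estimates tau_j = sgn(m_j); noise-bit estimates are (1 - tau)/2."""
    F = np.arctanh(1 - 2 * f)              # external field from the prior
    J = 1 - 2 * z                          # couplings J_mu = (-1)^{z_mu}
    checks, bits = np.nonzero(A)           # edges (mu, j) of the Tanner graph
    m = np.full(len(bits), np.tanh(F))     # bit-to-check messages, m_{mu j} = 1 - 2f
    m_hat = np.zeros(len(bits))            # check-to-bit messages

    for _ in range(n_iter):
        for e, (mu, j) in enumerate(zip(checks, bits)):
            others = (checks == mu) & (bits != j)
            m_hat[e] = J[mu] * np.prod(m[others])                  # second eq. of (8)
        for e, (mu, j) in enumerate(zip(checks, bits)):
            others = (bits == j) & (checks != mu)
            m[e] = np.tanh(F + np.sum(np.arctanh(m_hat[others])))  # first eq. of (8)

    # eq. (9): local magnetizations at each bit
    m_j = np.array([np.tanh(F + np.sum(np.arctanh(m_hat[bits == j])))
                    for j in range(A.shape[1])])
    return np.sign(m_j)
```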
5 Conclusions

To summarize, we solved exactly, without resorting to the replica method, a system representing a Gallager code on a Husimi cactus. The results obtained are in agreement with the replica symmetric calculation and with numerical experiments carried out in systems of moderate size. The framework can be easily extended to MN and similar codes. New insights on the decoding process are obtained by looking at a proper free-energy landscape. We believe that methods of statistical physics are complementary to those used in the statistical inference community and can enhance our understanding of general graphical models.

Acknowledgments  We acknowledge support from EPSRC (GR/N00562), The Royal Society (RV, DS) and the JSPS RFTF program (YK).

References
[1] Plefka, T. (1982) Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. Journal of Physics A 15, 1971-1978.
[2] Tanaka, T. Information geometry of mean field approximation. To appear in Neural Computation.
[3] Saul, L.K. & Jordan, M.I. (1996) Exploiting tractable substructures in intractable networks. In Touretzky, D.S., M.C. Mozer and M.E. Hasselmo (eds.), Advances in Neural Information Processing Systems 8, pp. 486-492. Cambridge, MA: MIT Press.
[4] Frey, B.J. & MacKay, D.J.C. (1998) A revolution: belief propagation in graphs with cycles. In Jordan, M.I., M.J. Kearns and S.A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 479-485. Cambridge, MA: MIT Press.
[5] Berrou, C. & Glavieux, A. (1996) Near optimum error correcting coding and decoding: Turbo-codes. IEEE Transactions on Communications 44, 1261-1271.
[6] Gallager, R.G. (1963) Low-density parity-check codes. MIT Press, Cambridge, MA.
[7] MacKay, D.J.C. (1999) Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory 45, 399-431.
[8] Kanter, I. & Saad, D. (2000) Finite-size effects and error-free communication in Gaussian channels. Journal of Physics A 33, 1675-1681.
[9] Sourlas, N. (1989) Spin-glass models as error-correcting codes. Nature 339, 693-695.
[10] Derrida, B. (1981) Random-energy model: an exactly solvable model of disordered systems. Physical Review B 24(5), 2613-2626.
[11] Vicente, R., Saad, D. & Kabashima, Y. (1999) Finite-connectivity systems as error-correcting codes. Physical Review E 60(5), 5352-5366.
[12] Kabashima, Y., Murayama, T. & Saad, D. (2000) Typical performance of Gallager-type error-correcting codes. Physical Review Letters 84(6), 1355-1358.
[13] Montanari, A. & Sourlas, N. (2000) The statistical mechanics of turbo codes. European Physical Journal B 18, 107-119.
[14] Sherrington, D. & Wong, K.Y.M. (1987) Graph bipartitioning and the Bethe spin glass. Journal of Physics A 20, L785-L791.
[15] Kabashima, Y. & Saad, D. (1998) Belief propagation vs. TAP for decoding corrupted messages. Europhysics Letters 44(5), 668-674.
[16] Kschischang, F.R. & Frey, B.J. (1998) Iterative decoding of compound codes by probability propagation in graphical models. IEEE Journal on Selected Areas in Comm. 16(2), 153-159.
[17] Gujrati, P.D. (1995) Bethe or Bethe-like lattice calculations are more reliable than conventional mean-field calculations. Physical Review Letters 74(5), 809-812.
[18] Rieger, H. & Kirkpatrick, T.R. (1992) Disordered p-spin interaction models on Husimi trees. Physical Review B 45(17), 9772-9777.
The Manhattan World Assumption: Regularities in scene statistics which enable Bayesian inference

James M. Coughlan
Smith-Kettlewell Eye Research Inst., 2318 Fillmore St., San Francisco, CA 94115
coughlan@ski.org

A.L. Yuille
Smith-Kettlewell Eye Research Inst., 2318 Fillmore St., San Francisco, CA 94115
yuille@ski.org

Abstract

Preliminary work by the authors made use of the so-called "Manhattan world" assumption about the scene statistics of city and indoor scenes. This assumption stated that such scenes were built on a cartesian grid, which led to regularities in the image edge gradient statistics. In this paper we explore the general applicability of this assumption and show that, surprisingly, it holds in a large variety of less structured environments, including rural scenes. This enables us, from a single image, to determine the orientation of the viewer relative to the scene structure and also to detect target objects which are not aligned with the grid. These inferences are performed using a Bayesian model with probability distributions (e.g. on the image gradient statistics) learnt from real data.

1 Introduction

In recent years, there has been growing interest in the statistics of natural images (see Huang and Mumford [4] for a recent review). Our focus, however, is on the discovery of scene statistics which are useful for solving visual inference problems. For example, in related work [5] we have analyzed the statistics of filter responses on and off edges and hence derived effective edge detectors. In this paper we present results on statistical regularities of the image gradient responses as a function of the global scene structure. This builds on preliminary work [2] on city and indoor scenes. That work observed that such scenes are based on a cartesian coordinate system which puts (probabilistic) constraints on the image gradient statistics. Our current work shows that this so-called "Manhattan world" assumption about the scene statistics applies far more generally than urban scenes. Many rural scenes contain sufficient structure in the distribution of edges to provide a natural cartesian reference frame for the viewer. The viewer's orientation relative to this frame can be determined by Bayesian inference. In addition, certain structures in the scene stand out by being unaligned to this natural reference frame. In our theory such structures appear as "outlier" edges, which makes it easier to detect them. Informal evidence that human observers use a form of the Manhattan world assumption is provided by the Ames room illusion, see figure (6), where the observers appear to erroneously make this assumption, thereby grotesquely distorting the sizes of objects in the room.

2 Previous Work and Three-Dimensional Geometry

Our preliminary work on city scenes was presented in [2]. There is related work in computer vision on the detection of vanishing points in 3-d scenes [1], [6] (which proceeds through the stages of edge detection, grouping by Hough transforms, and finally the estimation of the geometry). We refer the reader to [3] for details on the geometry of the Manhattan world and report only the main results here. Briefly, we calculate expressions for the orientations of x, y, z lines imaged under perspective projection in terms of the orientation of the camera relative to the x, y, z axes.
The camera orientation relative to the xyz axis system may be specified by three Euler angles: the azimuth (or compass angle) α, corresponding to rotation about the z axis; the elevation β above the xy plane; and the twist γ about the camera's line of sight. We use Ψ = (α, β, γ) to denote all three Euler angles of the camera orientation. Our previous work [2] assumed that the elevation and twist were both zero, which turned out to be invalid for many of the images presented in this paper. We can then compute the normal orientation of lines parallel to the x, y, z axes, measured in the image plane, as a function of film coordinates (u, v) and the camera orientation Ψ. We express the results in terms of orthogonal unit camera axes a, b and c, which are aligned to the body of the camera and are determined by Ψ. For x lines (see Figure 1, left panel) we have tan θ_x = -(u c_x + f a_x)/(v c_x + f b_x), where θ_x is the normal orientation of the x line at film coordinates (u, v) and f is the focal length of the camera. Similarly, tan θ_y = -(u c_y + f a_y)/(v c_y + f b_y) for y lines and tan θ_z = -(u c_z + f a_z)/(v c_z + f b_z) for z lines. In the next section we will see how to relate the normal orientation of an object boundary (such as x, y, z lines) at a point (u, v) to the magnitude and direction of the image gradient at that location.

Figure 1: (Left) Geometry of an x line projected onto the (u, v) image plane. θ is the normal orientation of the line in the image. (Right) Histogram of edge orientation error (displayed modulo 180°). Observe the strong peak at 0°, indicating that the image gradient direction at an edge is usually very close to the true normal orientation of the edge.

3 P_on and P_off: Characterizing Edges Statistically

Since we do not know where the x, y, z lines are in the image, we have to infer their locations and orientations from image gradient information. This inference is done using a purely local statistical model of edges. A key element of our approach is that it allows the model to infer camera orientation without having to group pixels into x, y, z lines. Most grouping procedures rely on the use of binary edge maps, which often make premature decisions based on too little information. The poor quality of some of the images (underexposed and overexposed) makes edge detection particularly difficult, as does the fact that some of the images lack x, y, z lines that are long enough to group reliably. Following work by Konishi et al. [5], we determine probabilities P_on(E_u) and P_off(E_u) for the image gradient magnitude E_u at position u in the image, conditioned on whether we are on or off an edge. These distributions quantify the tendency for the image gradient to be high on object boundaries and low off them; see Figure 2. They were learned by Konishi et al. for the Sowerby image database, which contains one hundred presegmented images.

Figure 2: P_off(y) (left) and P_on(y) (right), the empirical histograms of edge responses off and on edges, respectively. Here the response y = |∇I| is quantized to take 20 values and is shown on the horizontal axis. Note that the peak of P_off(y) occurs at a lower edge response than the peak of P_on(y).

We extend the work of Konishi et al. by putting probability distributions on how accurately the image gradient direction estimates the true normal direction of the edge.
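The line-orientation formulas above can be packaged as a small helper. This is our own sketch, and the focal length default is an arbitrary placeholder.

```python
import numpy as np

def predicted_orientation(u, v, axes, f=797.0):
    """Normal orientations theta_x, theta_y, theta_z predicted at film coords
    (u, v). axes: rows are the camera unit axes a, b, c determined by Psi;
    the focal length default is a placeholder, not a value from the paper."""
    a, b, c = axes
    return {name: np.arctan2(-(u * c[k] + f * a[k]), v * c[k] + f * b[k])
            for k, name in enumerate("xyz")}
```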
These were learned for this dataset by measuring the true orientations of the edges and comparing them to those estimated from the image gradients. This gives us distributions on the magnitude and direction of the intensity gradient, P_on(E_u|θ) and P_off(E_u), where E_u = (E_u, φ_u), θ is the true normal orientation of the edge, and φ_u is the gradient direction measured at point u = (u, v). We make a factorization assumption that P_on(E_u|θ) = P_on(E_u) P_ang(φ_u - θ) and P_off(E_u) = P_off(E_u) U(φ_u). P_ang(·) (with argument evaluated modulo 2π and normalized to 1 over the range 0 to 2π) is based on experimental data, see Figure 1 (right), and is peaked about 0 and π. In practice, we use a simple box-shaped function to model the distribution: P_ang(δθ) = (1 - ε)/4τ if δθ is within angle τ of 0 or π, and ε/(2π - 4τ) otherwise (i.e. the chance of an angular error greater than ±τ is ε). In our experiments ε = 0.1 and τ = 4° for indoors and 6° outdoors. By contrast, U(·) = 1/2π is the uniform distribution.

4 Bayesian Model

We devised a Bayesian model which combines knowledge of the three-dimensional geometry of the Manhattan world with statistical knowledge of edges in images. The model assumes that, while the majority of pixels in the image convey no information about camera orientation, most of the pixels with high edge responses arise from the presence of x, y, z lines in the three-dimensional scene. An important feature of the Bayesian model is that it does not force us to decide prematurely which pixels are on and off an object boundary (or whether an on-pixel is due to x, y, or z), but allows us to sum over all possible interpretations of each pixel. The image data E_u at a single pixel u is explained by one of five models m_u: m_u = 1, 2, 3 mean the data is generated by an edge due to an x, y, z line, respectively, in the scene; m_u = 4 means the data is generated by an outlier edge (not due to an x, y, z line); and m_u = 5 means the pixel is off-edge. The prior probability P(m_u) of each of the edge models was estimated empirically to be 0.02, 0.02, 0.02, 0.04, 0.9 for m_u = 1, 2, ..., 5. Using the factorization assumption mentioned before, we assume the probability of the image data E_u has two factors, one for the magnitude of the edge strength and another for the edge direction:

P(E_u|m_u, Ψ, u) = P(E_u|m_u) P(φ_u|m_u, Ψ, u),   (1)

where P(E_u|m_u) equals P_off(E_u) if m_u = 5 or P_on(E_u) if m_u ≠ 5. Also, P(φ_u|m_u, Ψ, u) equals P_ang(φ_u - θ(Ψ, m_u, u)) if m_u = 1, 2, 3 or U(φ_u) if m_u = 4, 5. Here θ(Ψ, m_u, u) is the predicted normal orientation of lines, determined by the equation tan θ_x = -(u c_x + f a_x)/(v c_x + f b_x) for x lines, tan θ_y = -(u c_y + f a_y)/(v c_y + f b_y) for y lines, and tan θ_z = -(u c_z + f a_z)/(v c_z + f b_z) for z lines. In summary, the edge strength probability is modeled by P_on for models 1 through 4 and by P_off for model 5. For models 1, 2 and 3 the edge orientation is modeled by a distribution which is peaked about the appropriate orientation of an x, y, z line predicted by the camera orientation at pixel location u; for models 4 and 5 the edge orientation is assumed to be uniformly distributed from 0 through 2π. Rather than decide on a particular model at each pixel, we marginalize over all five possible models (i.e.
creating a mixture model):

P(E_u|Ψ, u) = Σ_{m_u=1}^{5} P(E_u|m_u, Ψ, u) P(m_u).   (2)

Now, to combine evidence over all pixels in the image, denoted by {E_u}, we assume that the image data is conditionally independent across all pixels, given the camera orientation Ψ:

P({E_u}|Ψ) = Π_u P(E_u|Ψ, u).   (3)

(Although the conditional independence assumption neglects the coupling of gradients at neighboring pixels, it is a useful approximation that makes the model computationally tractable.) Thus the posterior distribution on the camera orientation is given by Π_u P(E_u|Ψ, u) P(Ψ)/Z, where Z is a normalization factor and P(Ψ) is a uniform prior on the camera orientation. To find the MAP (maximum a posteriori) estimate, our algorithm maximizes the log posterior term

log[P({E_u}|Ψ) P(Ψ)] = log P(Ψ) + Σ_u log[Σ_{m_u} P(E_u|m_u, Ψ, u) P(m_u)]

numerically by searching over a quantized set of compass directions Ψ in a certain range. For details on this procedure, as well as coarse-to-fine techniques for speeding up the search, see [3].

5 Experimental Results

This section presents results on the domains for which the viewer orientation relative to the scene can be detected using the Manhattan world assumption. In particular, we demonstrate results for: (I) indoor and outdoor scenes (as reported in [2]), (II) rural English road scenes, (III) rural English fields, (IV) a painting of the French countryside, (V) a field of broccoli in the American mid-west, (VI) the Ames room, and (VII) ruins of the Parthenon (in Athens). The results show strong success for inference using the Manhattan world assumption even for domains in which it might seem unlikely to apply. (Some examples of failure are given in [3]; for example, a helicopter in a hilly scene where the algorithm mistakenly interprets the hill silhouettes as horizontal lines.)

The first set of images were of city and indoor scenes in San Francisco, with images taken by the second author [2]. We include four typical results, see figure 3, for comparison with the results on other domains.

Figure 3: Estimates of the camera orientation obtained by our algorithm for two indoor scenes (left) and two outdoor scenes (right). The estimated orientations of the x, y lines, derived for the estimated camera orientation Ψ*, are indicated by the black line segments drawn on the input image. (The z line orientations have been omitted for clarity.) At each point on a subgrid two such segments are drawn, one for x and one for y. In the image on the far left, observe how the x directions align with the wall on the right hand side and with features parallel to this wall. The y lines align with the wall on the left (and objects parallel to it).

We now extend this work to less structured scenes in the English countryside. Figure (4) shows two images of roads in rural scenes and two fields. These images come from the Sowerby database. The next three images were either downloaded from the web or digitized (the painting). These are the mid-west broccoli field, the Parthenon ruins, and the painting of the French countryside.

6 Detecting Objects in Manhattan world

We now consider applying the Manhattan assumption to the alternative problem of detecting target objects in background clutter. To perform such a task effectively requires modelling the properties of the background clutter in addition to those of the target object. It has recently been appreciated that good statistical modelling of the image background can improve the performance of target recognition [7].
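A sketch of evaluating this log posterior over a quantized set of orientations follows; it assumes the predicted_orientation helper from the earlier sketch and a hypothetical camera_axes(psi) that converts Euler angles into the unit axes a, b, c. The learned histograms of Section 3 would stand in for p_on and p_off.

```python
import numpy as np

def log_posterior(psi, grads, p_on, p_off, p_ang,
                  priors=(0.02, 0.02, 0.02, 0.04, 0.9)):
    """Log posterior of camera orientation psi, up to a constant, eqs. (1)-(3).
    grads: iterable of ((u, v), E, phi) per pixel; p_on/p_off map gradient
    magnitudes to the learned on/off-edge likelihoods; p_ang is the box model."""
    logp = 0.0
    for (u, v), e, phi in grads:
        thetas = predicted_orientation(u, v, camera_axes(psi))  # hypothetical helper
        like = sum(priors[m] * p_on(e) * p_ang((phi - th) % (2 * np.pi))
                   for m, th in enumerate(thetas.values()))     # models 1-3
        like += priors[3] * p_on(e) / (2 * np.pi)               # outlier edge, model 4
        like += priors[4] * p_off(e) / (2 * np.pi)              # off-edge, model 5
        logp += np.log(like)
    return logp

# MAP estimate: maximize over a quantized grid of candidate orientations, e.g.
# psi_star = max(candidate_psis,
#                key=lambda psi: log_posterior(psi, grads, p_on, p_off, p_ang))
```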
The Manhattan world assumption gives an alternative way of probabilistically modelling background clutter. The background clutter will correspond to the regular structure of buildings and roads, and its edges will be aligned to the Manhattan grid. The target object, however, is assumed to be unaligned (at least, in part) to this grid. Therefore many of the edges of the target object will be assigned to model 4 by the algorithm. (Note the algorithm first finds the MAP estimate Ψ* of the compass orientation, see section (4), and then estimates the model by MAP of P(m_u|E_u, Ψ*, u) to estimate m_u for each pixel u.) This enables us to significantly simplify the detection task by removing all edges in the images except those assigned to model 4.

Figure 4: Results on rural images in England without strong Manhattan structure. Same conventions as before. Two images of roads in the countryside (left panels) and two images of fields (right panel).

Figure 5: Results on an American mid-west broccoli field, the ruins of the Parthenon, and a digitized painting of the French countryside.

The Ames room, see figure (6), is a geometrically distorted room which is constructed so as to give the false impression that it is built on a cartesian coordinate frame when viewed from a special viewpoint. Human observers assume that the room is indeed cartesian despite all other visual cues to the contrary. This distorts the apparent size of objects so that, for example, humans in different parts of the room appear to have very different sizes. In fact, a human walking across the room will appear to change size dramatically. Our algorithm, like human observers, interprets the room as being cartesian, and this helps identify the humans in the room as outlier edges which are unaligned to the cartesian reference system.

7 Summary and Conclusions

We have demonstrated that the Manhattan world assumption applies to a range of images, rural and otherwise, in addition to urban scenes. We demonstrated a Bayesian model which used this assumption to infer the orientation of the viewer relative to this reference frame and which could also detect outlier edges which are unaligned to the reference frame. A key element of this approach is the use of image gradient statistics, learned from image datasets, which quantify the distribution of the image gradient magnitude and direction on and off object boundaries. We expect that there are many further image regularities of this type which can be used for building effective artificial vision systems and which are possibly made use of by biological vision systems.

Figure 6: Detecting people in Manhattan world. The left images (top and bottom) show the estimated scene structure. The right images show that people stand out as residual edges which are unaligned to the Manhattan grid. The Ames room (top panel) violates the Manhattan assumption, but human observers, and our algorithm, interpret it as if it satisfied the assumptions. In fact, despite appearances, the two people in the Ames room are really the same size.

Acknowledgments

We want to acknowledge funding from NSF with award number IRI-9700446, support from the Smith-Kettlewell core grant, and from the Center for Imaging Sciences with Army grant ARO DAAH049510494. This work was also supported by the National Institute of Health (NEI) with grant number R01-EY12691-01. It is a pleasure to acknowledge email conversations with Song Chun Zhu about scene clutter.
We gratefully acknowledge the use of the Sowerby image dataset from Sowerby Research Centre, British Aerospace.

References

[1] B. Brillault-O'Mahony. "New Method for Vanishing Point Detection". Computer Vision, Graphics, and Image Processing, 54(2), pp. 289-300, 1991.
[2] J. Coughlan and A. L. Yuille. "Manhattan World: Compass Direction from a Single Image by Bayesian Inference". Proceedings International Conference on Computer Vision ICCV'99. Corfu, Greece. 1999.
[3] J. Coughlan and A. L. Yuille. "Manhattan World: Orientation and Outlier Detection by Bayesian Inference." Submitted to International Journal of Computer Vision. 2000.
[4] J. Huang and D. Mumford. "Statistics of Natural Images and Models". In Proceedings Computer Vision and Pattern Recognition CVPR'99. Fort Collins, Colorado. 1999.
[5] S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu. "Fundamental Bounds on Edge Detection: An Information Theoretic Evaluation of Different Edge Cues." Proc. Int'l Conf. on Computer Vision and Pattern Recognition, 1999.
[6] E. Lutton, H. Maitre, and J. Lopez-Krahe. "Contribution to the determination of vanishing points using Hough transform". IEEE Trans. on Pattern Analysis and Machine Intelligence, 16(4), pp. 430-438, 1994.
[7] S. C. Zhu, A. Lanterman, and M. I. Miller. "Clutter Modeling and Performance Analysis in Automatic Target Recognition". In Proceedings Workshop on Detection and Classification of Difficult Targets. Redstone Arsenal, Alabama. 1998.
On iterative Krylov-dogleg trust-region steps for solving neural networks nonlinear least squares problems

Eiji Mizutani
Department of Computer Science
National Tsing Hua University
Hsinchu, 30043 TAIWAN R.O.C.
eiji@wayne.cs.nthu.edu.tw

James W. Demmel
Mathematics and Computer Science
University of California at Berkeley
Berkeley, CA 94720 USA
demmel@cs.berkeley.edu

Abstract

This paper describes a method of dogleg trust-region steps, or restricted Levenberg-Marquardt steps, based on a projection process onto the Krylov subspaces for neural networks nonlinear least squares problems. In particular, the linear conjugate gradient (CG) method works as the inner iterative algorithm for solving the linearized Gauss-Newton normal equation, whereas the outer nonlinear algorithm repeatedly takes so-called "Krylov-dogleg" steps, relying only on matrix-vector multiplication without explicitly forming the Jacobian matrix or the Gauss-Newton model Hessian. That is, our iterative dogleg algorithm can reduce both operational counts and memory space by a factor of O(n) (the number of parameters) in comparison with a direct linear-equation solver. This memory-less property is useful for large-scale problems.

1 Introduction

We consider the so-called neural networks nonlinear least squares problem¹ wherein the objective is to optimize the n weight parameters of neural networks (NN) [e.g., multilayer perceptrons (MLP)], denoted by an n-dimensional vector θ, by minimizing the following:

E(θ) = (1/2) Σ_{p=1}^{m} (a_p(θ) − t_p)² = (1/2) r(θ)ᵀ r(θ),   (1)

where a_p(θ) is the MLP output for the pth training data pattern and t_p is the desired output. (Of course, these become vectors for a multiple-output MLP.) Here r(θ) denotes the m-dimensional residual vector composed of r_i(θ), i = 1, …, m, for all m training data.

¹The posed problem can be viewed as an implicitly constrained optimization problem as long as hidden-node outputs are produced by sigmoidal "squashing" functions [1]. Our algorithm exploits the special structure of the sum of squared error measure in Equation (1); hence, the other objective functions are outside the scope of this paper.

The gradient vector and Hessian matrix are given by g = g(θ) ≡ Jᵀr and H = H(θ) ≡ JᵀJ + S, where J is the m × n Jacobian matrix of r, and S denotes the matrix of second-derivative terms. If S is simply omitted based on the "small residual" assumption, then the Hessian matrix reduces to the Gauss-Newton model Hessian: i.e., JᵀJ. Furthermore, a family of quasi-Newton methods can be applied to approximate the term S alone, leading to the augmented Gauss-Newton model Hessian (see, for example, Mizutani [2] and references therein). With any form of the aforementioned Hessian matrices, we can collectively write the following Newton formula to determine the next step δ in the course of the Newton iteration for θ_next = θ_now + δ:

Hδ = −g.   (2)

This linear system can be solved by a direct solver in conjunction with a suitable matrix factorization. However, typical criticisms towards the direct algorithm are:

- It is expensive to form and solve the linear equation (2), which requires O(mn²) operations when m > n;
- It is expensive to store the (symmetric) Hessian matrix H, which requires n(n+1)/2 memory storage.

These issues may become much more serious for a large-scale problem.
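For concreteness, the direct approach criticized here can be written in a few lines. The sketch below is generic NumPy, not the paper's code; it simply makes the O(mn²) formation of JᵀJ and the O(n²) storage explicit.

```python
import numpy as np

# Illustrative sketch of the direct Gauss-Newton solve criticized above:
# forming H = J^T J costs O(mn^2) time and storing it costs O(n^2) memory,
# which is exactly what the iterative Krylov-dogleg method avoids.

def direct_gauss_newton_step(J, r):
    g = J.T @ r                      # gradient of (1/2)||r||^2
    H = J.T @ J                      # Gauss-Newton model Hessian (S omitted)
    return np.linalg.solve(H, -g)    # direct solve of Eq. (2)
```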
In light of the vast literature on nonlinear optimization, this paper describes how to alleviate these concerns, attempting to solve the Newton formula (2) approximately by iterative methods, which form a family of inexact (or truncated) Newton methods (see Dembo & Steihaug [3], for instance). An important subclass of the inexact Newton methods are Newton-Krylov methods. In particular, this paper focuses on a Newton-CG-type algorithm, wherein the linear Gauss-Newton normal equation,

JᵀJ δ = −Jᵀr,   (3)

is solved iteratively by the linear conjugate gradient method (known as CGNR) for a dogleg trust-region implementation of the well-known Levenberg-Marquardt algorithm; hence, the name "dogleg trust-region Gauss-Newton-CGNR" algorithm, or "iterative Krylov-dogleg" method (similar to Steihaug [4]; Toint [5]).

2 Direct Dogleg Trust-Region Algorithms

In the NN literature, several variants of the Levenberg-Marquardt algorithm equipped with a direct linear-equation solver, particularly Marquardt's original method, have been recognized as instrumental and promising techniques; see, for example, Demuth & Beale [6]; Masters [7]; Shepherd [8]. They are based on a simple direct control of the Levenberg-Marquardt parameter μ in (H + μI)δ = −g, although such a simple μ-control can cause a number of problems, because of a complicated relation between the parameter μ and its associated step length (see Mizutani [9]). Alternatively, a more efficient dogleg algorithm [10] can be employed that takes, depending on the size of the trust region R, the Newton step δ_Newton [i.e., the solution of Eq. (2)], the (restricted) Cauchy step δ_Cauchy, or an intermediate dogleg step:

δ_dogleg ≡ δ_Cauchy + h (δ_Newton − δ_Cauchy),   (4)

which achieves a piecewise linear approximation to a trust-region step, or a restricted Levenberg-Marquardt step. Note that δ_Cauchy is the step that minimizes the local quadratic model in the steepest descent direction (i.e., Eq. (8) with k = 1). For details on Equation (4), refer to Powell [10]; Mizutani [9, 2]. When we consider the Gauss-Newton step for δ_Newton in Equation (4), we must solve the overdetermined linear least squares problem: minimize_δ ‖r + Jδ‖², for which three principal direct linear-equation solvers are: (1) the normal equation approach (typically with Cholesky decomposition); (2) the QR decomposition approach to Jδ = −r; (3) the singular value decomposition (SVD) approach to Jδ = −r (only recommended when J is nearly rank-deficient). Among those three direct solvers, approach (1) to Equation (3) is fastest. (For more details, refer to Demmel [11], Chapters 2 and 3.) In a highly overdetermined case (with a large data set; i.e., m ≫ n), the dominant cost in approach (1) is the mn² operations to form the Gauss-Newton model Hessian by:

JᵀJ = Σ_{i=1}^{m} u_i u_iᵀ,   (5)

where u_iᵀ is the ith row vector of J. This cost might be prohibitive even with enough storage for JᵀJ. Therefore, to overcome this limitation of direct solvers for Equation (3), we consider an iterative scheme in the next section.
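The step selection of Eq. (4) is mechanical once δ_Cauchy and δ_Newton are available. The sketch below is a generic NumPy rendering of Powell's dogleg choice, with the interpolation parameter h solved so the step lands on the trust-region boundary; it is an illustration, not the paper's implementation.

```python
import numpy as np

# Sketch of Powell's dogleg step (Eq. 4), assuming the Cauchy and Gauss-Newton
# steps have already been computed by a direct solver.

def dogleg_step(d_cauchy, d_newton, radius):
    if np.linalg.norm(d_newton) <= radius:
        return d_newton                       # full Gauss-Newton step fits
    if np.linalg.norm(d_cauchy) >= radius:
        return radius * d_cauchy / np.linalg.norm(d_cauchy)  # restricted Cauchy
    v = d_newton - d_cauchy                   # walk from Cauchy toward Newton
    a, b = v @ v, 2 * (d_cauchy @ v)
    c = d_cauchy @ d_cauchy - radius ** 2
    h = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root, h in [0, 1]
    return d_cauchy + h * v                   # Eq. (4) on the boundary
```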
3 Iterative Krylov-Dogleg Algorithm

The iterative Krylov-dogleg step approximates a trust-region step by iteratively approximating the Levenberg-Marquardt trajectory in the Krylov subspace via linear conjugate gradient iterates until the approximate trajectory hits the trust-region boundary; i.e., a CG iterate falls outside the trust-region boundary. In this context, the linear CGNR method is not intended to approximate the full Gauss-Newton step [i.e., the solution of Eq. (3)]. Therefore, the required number of CGNR iterations might be kept small [see Section 4]. The iterative process for the linear-equation solution sequence {δ_k} is called the inner² iteration, whereas the solution sequence {θ_k} from the Krylov-dogleg algorithm is generated by the outer iteration (or epoch), as shown in Figure 1. We now describe the inner iteration algorithm, which is identical to the standard linear CG algorithm (see Demmel [11], pages 311-312) except for steps 2, 4, and 5:

Algorithm 3.1: The inner iteration of the Krylov-dogleg algorithm (see Figure 1).

1. Initialization: δ_0 = 0; d_0 = r_0 = −g_now, and k = 1.
2. Matrix-vector product (compare Eq. (5) and see Algorithm 3.2):
   z = H_now d_k = J_nowᵀ(J_now d_k)   (6)
     = Σ_{i=1}^{m} (u_iᵀ d_k) u_i.   (7)
3. Analytical step size: η_k = r_{k−1}ᵀ r_{k−1} / d_kᵀ z.
4. Approximate solution:
   δ_k = δ_{k−1} + η_k d_k.   (8)
   If ‖δ_k‖ < R_now, then go on to the next step 5; otherwise compute
   δ_k = R_now δ_k / ‖δ_k‖   (9)
   and terminate.
5. Linear-system residual: r_k = r_{k−1} − η_k z. If ‖r_k‖² is small enough, then set R_now ← ‖δ_k‖ and terminate. Otherwise, continue with step 6.
6. Improvement: β_{k+1} = r_kᵀ r_k / r_{k−1}ᵀ r_{k−1}.
7. Search direction: d_{k+1} = r_k + β_{k+1} d_k. Then, set k = k + 1 and go back to step 2.

²Nonlinear conjugate gradient methods, such as Polak-Ribiere's CG (see Mizutani and Jang [13]) and Moller's scaled CG [14], are also widely employed for training MLPs, but those nonlinear versions attempt to approximate the entire Hessian matrix by generating the solution sequence {θ_k} directly as the outer nonlinear algorithm. Thus, they ignore the special structure of the nonlinear least squares problem; so does Pearlmutter's method [15] to the Newton formula, although its modification may be possible.

Figure 1: The algorithmic flow of an iterative Krylov-dogleg algorithm. [The original flowchart is omitted; it initializes R_now and θ_now, computes E(θ_now), checks the stopping criteria, runs Algorithm 3.1 to obtain a step, compares E(θ_new) against E(θ_now) to accept or reject the step, and applies a local-model check based on ν_now versus ν_small.] For detailed procedures in the three dotted rectangular boxes, refer to Mizutani and Demmel [12] and Algorithm 3.1 in the text.

The first step given by Equation (8) is always the Cauchy step δ_Cauchy, moving θ_now to the Cauchy point θ_Cauchy when R_now > ‖δ_Cauchy‖. Then, departing from θ_Cauchy, the linear CG constructs a Krylov-dogleg trajectory (by adding a CG point one by one) towards the Gauss-Newton point θ_Newton until the constructed trajectory hits the trust-region boundary (i.e., ‖δ_k‖ ≥ R_now is satisfied in step 4), or till the linear-system residual becomes small in step 5 (unlikely to occur for small forcing terms; e.g., 0.01). In this way, the algorithm computes a vector between the steepest descent direction and the Gauss-Newton direction, resulting in an approximate Levenberg-Marquardt step in the Krylov subspace. In step 2, the matrix-vector multiplication H d_k in Equation (7) can be performed with neither the Jacobian nor Hessian matrices explicitly required, keeping only several n-dimensional vectors in memory at the same time, as shown next:

Algorithm 3.2: Matrix-vector multiplication step.

for i = 1 to m; i.e., one sweep of all training data:
  (a) do forward propagation to compute the MLP output a_i(θ) for datum i;
  (b) do backpropagation³ to obtain the ith row vector u_iᵀ of matrix J;
  (c) compute (u_iᵀ d_k) u_i and add it to z;
end for.
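Putting Algorithms 3.1 and 3.2 together gives a short matrix-free routine. The following condensed sketch assumes the Jacobian rows are supplied one at a time by a hypothetical `jacobian_row(i)` (one forward/backward pass per pattern); it is illustrative rather than the authors' implementation.

```python
import numpy as np

# Condensed sketch of Algorithms 3.1 and 3.2: neither J nor J^T J is stored.

def hessian_matvec(jacobian_row, m, d):
    # Eq. (7): z = J^T (J d) = sum_i (u_i . d) u_i, accumulated per pattern.
    z = np.zeros_like(d)
    for i in range(m):
        u = jacobian_row(i)
        z += (u @ d) * u
    return z

def krylov_dogleg_step(g_now, jacobian_row, m, radius, tol=0.01, max_iter=50):
    delta = np.zeros_like(g_now)
    d = r = -g_now                                   # step 1: d_0 = r_0 = -g
    for _ in range(max_iter):
        z = hessian_matvec(jacobian_row, m, d)       # step 2
        eta = (r @ r) / (d @ z)                      # step 3
        delta_next = delta + eta * d                 # step 4, Eq. (8)
        if np.linalg.norm(delta_next) >= radius:     # hit the boundary: Eq. (9)
            return radius * delta_next / np.linalg.norm(delta_next)
        delta = delta_next
        r_next = r - eta * z                         # step 5
        if np.linalg.norm(r_next) <= tol * np.linalg.norm(g_now):
            return delta                             # small linear-system residual
        beta = (r_next @ r_next) / (r @ r)           # step 6
        d = r_next + beta * d                        # step 7
        r = r_next
    return delta
```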
For one sweep of all m data, each of steps (a) and (b) costs at least 2mn (plus additional costs that depend on the MLP architecture) and step (c) [i.e., Eq. (7)] costs 4mn. Hence, the overall cost of the inner iteration (Algorithm 3.1) can be kept as O(mn), especially when the number of inner iterations is small owing to our strategy of upper-bounded trust-region radii (e.g., R_upper = 1 for the parity problem). Note for the "Algorithm for local-model check" in Figure 1 that evaluating ν_now (a ratio between the actual error reduction and the reduction predicted by the current local quadratic model) needs a procedure similar to Algorithm 3.2. For more details on the algorithm in Figure 1, refer to Mizutani and Demmel [12].

4 Experiments and Discussions

In the NN literature, there are numerous algorithmic comparisons available (see, for example, Moller [14]; Demuth & Beale [6]; Shepherd [8]; Mizutani [2, 9, 16]). Due to the space limitation, this section compares typical behaviors of our Krylov-dogleg Gauss-Newton CGNR (or iterative dogleg) algorithm and Powell's dogleg-based algorithm with a direct linear-equation solver (or direct dogleg) for solving highly overdetermined parity problems. In our numerical tests, we used a criterion in which the MLP output for the pth pattern, a_p, can be regarded as either "on" (1.0) if a_p ≥ 0.8, or "off" (−1.0) if a_p ≤ −0.8; otherwise, it is "undecided." The initial parameter set was randomly generated in the range [−0.3, 0.3], and the two algorithms started exactly at the same point in the parameter space. Figure 2 presents MLP-learning curves in RMSE (root mean squared error) for the 20-bit and 14-bit parity problems. In (b) and (c), the total execution time [roughly (b) 32 days (500 epochs); (c) two hours (450 epochs), both on a 299-MHz UltraSparc] of the direct dogleg algorithm was normalized for comparison purposes.

³The batch-mode MLP backpropagation can be viewed as an efficient matrix-vector multiplication (2mn operations) for computing the gradient Jᵀr without forming explicitly the m × n Jacobian matrix or the m-dimensional residual vector (with some extra costs).

Figure 2: MLP-learning curves of RMSE (root mean squared error) obtained by the "iterative dogleg" (solid line) and the "direct dogleg" (broken line): (a) "epoch" and (b) "normalized execution time" for the 20-bit parity problem with a standard 20 x 19 x 1 MLP with hyperbolic tangent node functions (m = 2^20, n = 419), and (c) "normalized execution time" for the 14-bit parity problem with a 14 x 13 x 1 MLP (m = 2^14, n = 209). In (a), (b), the iterative dogleg reduced the number of incorrect patterns down to 21 (nearly RMSE = 0.009) at epoch 838, whereas the direct dogleg reached the same error level at epoch 388. In (c), the iterative dogleg solved it perfectly at epoch 1,034 and the direct dogleg did so at epoch 401. [The plot panels are omitted.]

Notably, the
iterative dogleg converged faster to a small RMSE⁴ than the direct dogleg at an early stage of learning, even with respect to epoch. Moreover, the average number of inner CG iterations per epoch in the iterative dogleg algorithm was quite small: 5.53 for (b) and 4.61 for (c). Thus, the iterative dogleg worked nearly (b) nine times and (c) four times faster than the direct dogleg in terms of the average execution time per epoch. Those speed-up ratios became smaller than n mainly due to the aforementioned cost of Algorithm 3.2. Yet, as n increases, the speed-up ratio can be larger, especially when the number of inner iterations is reasonably small.

⁴A standard steepest descent-type online pattern-by-pattern learning (or incremental gradient) algorithm (with or without a momentum term) failed to converge to a small RMSE in those parity problems due to hidden-node saturation [1].

5 Conclusion and Future Directions

We have compared two batch-mode MLP-learning algorithms: iterative and direct dogleg trust-region algorithms. Although such a high-dimensional parity problem is very special in the sense that it involves a large data set but the size of the MLP can be kept relatively small, the algorithmic features of the two dogleg methods can be well understood from the obtained experimental results. That is, the iterative dogleg has the great advantage of reducing the cost of an epoch from O(mn²) to O(mn), and the memory requirements from O(n²) to O(n), a factor of O(n) in both cases. When n is large, this is a very large improvement. It also has the advantage of faster convergence in the early epochs, achieving a lower RMSE after fewer epochs than the direct dogleg. Its disadvantage is that it may need more epochs to converge to a very small RMSE than the direct dogleg (although it might work faster in execution time). Thus, the iterative dogleg is most attractive when attempting to achieve a reasonably small RMSE on very large problems in a short period of time. The iterative dogleg is a matrix-free algorithm that extracts information about the Hessian matrix via matrix-vector multiplication; this algorithm might be characterized as iterative batch-mode learning, an intermediate between direct batch-mode learning and online pattern-by-pattern learning. Furthermore, the algorithm might be implemented in a block-by-block updating mode if a large data set can be split into multiple proper-size data blocks; so, it would be of great interest to compare the performance with online-mode learning algorithms for solving large-scale real-world problems with a large-scale NN model.

Acknowledgments

We would like to thank Stuart Dreyfus (IEOR, UC Berkeley) and Rich Vuduc (CS, UC Berkeley) for their valuable advice. The work was supported in part by SONY US Research Labs., and in part by the "Program for Promoting Academic Excellence of Universities," grant 89-E-FA04-1-4, Ministry of Education, Taiwan.

References

[1] E. Mizutani, S. E. Dreyfus, and J.-S. R. Jang. On dynamic programming-like recursive gradient formula for alleviating hidden-node saturation in the parity problem. In Proceedings of the International Workshop on Intelligent Systems Resolutions - the 8th Bellman Continuum, pages 100-104, Hsinchu, TAIWAN, 2000.
[2] Eiji Mizutani. Powell's dogleg trust-region steps with the quasi-Newton augmented Hessian for neural nonlinear least-squares learning. In Proceedings of the IEEE Int'l Conf. on Neural Networks (vol. 2), pages 1239-1244, Washington, D.C., July 1999.
[3] R. S. Dembo and T. Steihaug. Truncated-Newton algorithms for large-scale unconstrained optimization. Math. Prog., 26:190-212, 1983.
[4] Trond Steihaug. The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal., 20(3):626-637, 1983.
[5] P. L. Toint. On large scale nonlinear least squares calculations. SIAM J. Sci. Statist. Comput., 8(3):416-435, 1987.
[6] H. Demuth and M. Beale. Neural Network Toolbox for Use with MATLAB. The MathWorks, Inc., Natick, Massachusetts, 1998. User's Guide (version 3.0).
[7] Timothy Masters. Advanced algorithms for neural networks: a C++ sourcebook. John Wiley & Sons, New York, 1995.
[8] Adrian J. Shepherd. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer-Verlag, 1997.
[9] Eiji Mizutani. Computing Powell's dogleg steps for solving adaptive networks nonlinear least-squares problems. In Proc. of the 8th Int'l Fuzzy Systems Association World Congress (IFSA '99), vol. 2, pages 959-963, Hsinchu, Taiwan, August 1999.
[10] M. J. D. Powell. A new algorithm for unconstrained optimization. In Nonlinear Programming, pages 31-65. Edited by J. B. Rosen et al., Academic Press, 1970.
[11] James W. Demmel. Applied Numerical Linear Algebra. SIAM, 1997.
[12] Eiji Mizutani and James W. Demmel. On generalized dogleg trust-region steps using the Krylov subspace for solving neural networks nonlinear least squares problems. Technical report, Computer Science Dept., UC Berkeley, 2001. (In preparation.)
[13] E. Mizutani and J.-S. R. Jang. Chapter 6: Derivative-based Optimization. In Neuro-Fuzzy and Soft Computing, pages 129-172. J.-S. R. Jang, C.-T. Sun and E. Mizutani. Prentice Hall, 1997.
[14] Martin Fodslette Moller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6:525-533, 1993.
[15] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147-160, 1994.
[16] E. Mizutani, K. Nishio, N. Katoh, and M. Blasgen. Color device characterization of electronic cameras by solving adaptive networks nonlinear least squares problems. In Proc. of the 8th IEEE Int'l Conf. on Fuzzy Systems, vol. 2, pages 858-862, 1999.
Ensemble Learning and Linear Response Theory for ICA

Pedro A.d.F.R. Højen-Sørensen¹, Ole Winther², Lars Kai Hansen¹
¹Department of Mathematical Modelling, Technical University of Denmark B321, DK-2800 Lyngby, Denmark, phs,lkhansen@imm.dtu.dk
²Theoretical Physics, Lund University, Sölvegatan 14 A, S-223 62 Lund, Sweden, winther@nimis.thep.lu.se

Abstract

We propose a general Bayesian framework for performing independent component analysis (ICA) which relies on ensemble learning and linear response theory known from statistical physics. We apply it to both discrete and continuous sources. For the continuous source the underdetermined (overcomplete) case is studied. The naive mean-field approach fails in this case, whereas linear response theory, which gives an improved estimate of covariances, is very efficient. The examples given are for sources without temporal correlations. However, this derivation can easily be extended to treat temporal correlations. Finally, the framework offers a simple way of generating new ICA algorithms without needing to define the prior distribution of the sources explicitly.

1 Introduction

Reconstruction of statistically independent source signals from linear mixtures is an active research field. For historical background and early references see e.g. [1]. The source separation problem has a Bayesian formulation, see e.g. [2, 3], for which there has been some recent progress based on ensemble learning [4]. In the Bayesian framework, the covariances of the sources are needed in order to estimate the mixing matrix and the noise level. Unfortunately, ensemble learning using factorized trial distributions only treats self-interactions correctly and trivially predicts ⟨S_i S_j⟩ − ⟨S_i⟩⟨S_j⟩ = 0 for i ≠ j. This naive mean-field (NMF) approximation, first introduced in the neural computing context by Ref. [5] for Boltzmann machine learning, may completely fail in some cases [6]. Recently, Kappen and Rodriguez [6] introduced an efficient learning algorithm for Boltzmann Machines based on linear response (LR) theory. LR theory gives a recipe for computing an improved approximation to the covariances directly from the solution to the NMF equations [7]. Ensemble learning has been applied in many contexts within neural computation, e.g. for sigmoid belief networks [8], where advanced mean field methods such as LR theory or TAP [9] may also be applicable.

In this paper, we show how LR theory can be applied to independent component analysis (ICA). The performance of this approach is compared to the NMF approach. We observe that NMF may fail for high noise levels and binary sources and for the underdetermined continuous case. In these cases the NMF approach ignores one of the sources and consequently overestimates the noise. The LR approach, on the other hand, succeeds in all cases studied. The derivation of the mean-field equations is kept completely general and is thus valid for a general source prior (without temporal correlations). The final equations show that the mean-field framework may be used to propose ICA algorithms for which the source prior is only defined implicitly.

2 Probabilistic ICA

Following Ref. [10], we consider a collection of N temporal measurements, X = {X_dt}, where X_dt denotes the measurement at the dth sensor at time t. Similarly, let S = {S_mt} denote a collection of M mutually independent sources, where S_m· is the mth source, which in general may have temporal correlations.
The measured signals X are assumed to be an instantaneous linear mixing of the sources corrupted with additive Gaussian noise Γ, that is,

X = AS + Γ,   (1)

where A is the mixing matrix. Furthermore, to simplify this exposition the noise is assumed to be iid Gaussian with variance σ². The likelihood of the parameters is then given by

P(X|A, σ²) = ∫ dS P(X|A, σ², S) P(S),   (2)

where P(S) is the prior on the sources, which might include temporal correlations. We will, however, throughout this paper assume that the sources are temporally uncorrelated. We choose to estimate the mixing matrix A and noise level σ² by Maximum Likelihood (ML-II). The saddlepoint of P(X|A, σ²) is attained at

∂ log P(X|A, σ²)/∂A = 0 :  A = X ⟨S⟩ᵀ ⟨SSᵀ⟩⁻¹   (3)
∂ log P(X|A, σ²)/∂σ² = 0 :  σ² = (1/DN) ⟨Tr (X − AS)ᵀ(X − AS)⟩,   (4)

where ⟨·⟩ denotes an average over the posterior and D is the number of sensors.

3 Mean field theory

First, we derive mean field equations using ensemble learning. Secondly, using linear response theory, we obtain improved estimates of the off-diagonal terms of ⟨SSᵀ⟩ which are needed for estimating A and σ². The following derivation is performed for an arbitrary source prior.

3.1 Ensemble learning

We adopt a standard ensemble learning approach and approximate the posterior

P(S|X, A, σ²) = P(X|A, σ², S) P(S) / P(X|A, σ²)   (5)

in a family of product distributions Q(S) = Π_mt Q(S_mt). It has been shown in Ref. [11] that for a Gaussian P(X|A, σ², S), the optimal choice of Q(S_mt) is given by a Gaussian times the prior:

Q(S_mt) ∝ P(S_mt) e^{½ λ_mt S_mt² + γ_mt S_mt}.   (6)

In the following, it is convenient to use standard physics notation to keep everything as general as possible. We therefore parameterize the Gaussian as

P(X|A, σ², S) = P(X|J, h, S) = C e^{½ Tr(Sᵀ J S) + Tr(hᵀ S)},   (7)

where J = −AᵀA/σ² is the M × M interaction matrix and h = AᵀX/σ² has the same dimensions as the source matrix S. Note that h acts as an external field from which we can obtain all moments of the sources. This is a property that we will make use of in the next section when we derive the linear response corrections. The Kullback-Leibler divergence between the optimal product distribution Q(S) and the true source posterior is given by

KL = ∫ dS Q(S) ln [Q(S) / P(S|X, A, σ²)] = ln P(X|A, σ²) − ln P̃(X|A, σ²),   (8)

ln P̃(X|A, σ²) = Σ_mt ln ∫ dS P(S) e^{½ λ_mt S² + γ_mt S} + ½ Σ_mt (J_mm − λ_mt) ⟨S_mt²⟩ + ½ Tr ⟨S⟩ᵀ (J − diag(J)) ⟨S⟩ + Tr (h − γ)ᵀ ⟨S⟩ + ln C,   (9)

where P̃(X|A, σ²) is the naive mean field approximation to the likelihood and diag(J) is the diagonal matrix of J. The saddlepoints define the mean field equations:

∂KL/∂⟨S⟩ = 0 :  γ = h + (J − diag(J)) ⟨S⟩   (10)
∂KL/∂⟨S_mt²⟩ = 0 :  λ_mt = J_mm.   (11)

The remaining two equations depend explicitly on the source prior, P(S):

∂KL/∂γ_mt = 0 :  ⟨S_mt⟩ = (∂/∂γ_mt) ln ∫ dS_mt P(S_mt) e^{½ λ_mt S_mt² + γ_mt S_mt} ≡ f(γ_mt, λ_mt)   (12)
∂KL/∂λ_mt = 0 :  ⟨S_mt²⟩ = 2 (∂/∂λ_mt) ln ∫ dS_mt P(S_mt) e^{½ λ_mt S_mt² + γ_mt S_mt}.   (13)

In section 4, we calculate f(γ_mt, λ_mt) for some of the prior distributions found in the ICA literature.
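To make the pieces concrete before moving on, the sketch below iterates Eqs. (10)-(12) for a user-supplied f and feeds the resulting moments into the ML-II updates (3)-(4). The damping factor is an added heuristic for stability, not part of the derivation; treat the whole routine as an illustrative sketch.

```python
import numpy as np

# Illustrative sketch of the naive mean-field loop (Eqs. 10-12) combined with
# the ML-II updates (Eqs. 3-4). `f(gamma, lam)` implements Eq. (12) for the
# chosen prior; damping is an added heuristic, not part of the paper.

def mean_field_moments(A, sigma2, X, f, n_iter=100, damping=0.5):
    J = -A.T @ A / sigma2                       # interaction matrix (Eq. 7)
    h = A.T @ X / sigma2                        # external field (Eq. 7)
    lam = np.tile(np.diag(J)[:, None], (1, X.shape[1]))  # Eq. (11)
    mS = np.zeros_like(h)
    off_J = J - np.diag(np.diag(J))
    for _ in range(n_iter):
        gamma = h + off_J @ mS                  # Eq. (10)
        mS = (1 - damping) * mS + damping * f(gamma, lam)  # Eq. (12)
    return mS, gamma, lam

def ml2_update(X, mS, SS):
    # SS is the posterior correlation <S S^T> summed over time (M x M).
    D, N = X.shape
    A = X @ mS.T @ np.linalg.inv(SS)            # Eq. (3)
    cross = np.sum(mS * (A.T @ X))
    sigma2 = (np.sum(X * X) - 2 * cross + np.trace(A.T @ A @ SS)) / (D * N)  # Eq. (4)
    return A, sigma2
```

For the binary prior of section 4.1, for instance, one would pass `f = lambda gamma, lam: np.tanh(gamma)`.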
3.2 Linear response theory

As mentioned already, h acts as an external field. This makes it possible to calculate the means and covariances as derivatives of log P(X|J, h), i.e.

⟨S_mt⟩ = ∂ log P(X|J, h) / ∂h_mt   (14)

χ_{mm'}^{tt'} ≡ ⟨S_mt S_m't'⟩ − ⟨S_mt⟩⟨S_m't'⟩ = ∂² log P(X|J, h) / ∂h_mt ∂h_m't' = ∂⟨S_mt⟩ / ∂h_m't'.   (15)

To derive an equation for χ_{mm'}^{tt'}, we use eqs. (10), (11) and (12) to get

χ_{mm'}^{tt'} = (∂f(γ_mt, λ_mt)/∂γ_mt) ∂γ_mt/∂h_m't' = (∂f(γ_mt, λ_mt)/∂γ_mt) ( Σ_{m''≠m} J_mm'' χ_{m''m'}^{tt'} + δ_mm' δ_tt' ).   (16)

We now see that the χ-matrix factorizes in time, χ_{mm'}^{tt'} = δ_tt' χ_{mm'}^t. This is a direct consequence of the fact that the model has no temporal correlations. The above equation is linear and may straightforwardly be solved to yield

χ_{mm'}^t = [(Λ_t − J)⁻¹]_{mm'},   (17)

where we have defined the diagonal matrix

Λ_t = diag( (∂f(γ_1t, λ_1t)/∂γ_1t)⁻¹ + J_11, …, (∂f(γ_Mt, λ_Mt)/∂γ_Mt)⁻¹ + J_MM ).

At this point it is appropriate to explain why linear response theory is more precise than using the factorized distribution, which predicts χ_{mm'}^t = 0 for non-diagonal terms. Here, we give an argument that can be found in Parisi's book on statistical field theory [7]: Let us assume that the approximate and exact distributions are close in some sense, i.e. Q(S) − P(S|X, A, σ²) = ε; then ⟨S_mt S_m't'⟩_exact = ⟨S_mt S_m't'⟩_approx + O(ε). Mean field theory gives a lower bound on the log-likelihood since KL, eq. (8), is non-negative. Consequently, the linear term vanishes in the expansion of the log-likelihood: log P(X|A, σ²) = log P̃(X|A, σ²) + O(ε²). It is therefore more precise to obtain moments of the variables through derivatives of the approximate log-likelihood, i.e. by linear response.

A final remark to complete the picture: if diag(J) in eq. (10) is exchanged with Λ_t = diag(λ_1t, …, λ_Mt), and likewise in the definition of Λ_t above, we get TAP equations [9]. The TAP equation for λ_mt is χ_mm^t = ∂f(γ_mt, λ_mt)/∂γ_mt = [(Λ_t − J)⁻¹]_mm.
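The correction itself is one matrix inversion per time step. A minimal sketch, assuming the derivative ∂f/∂γ is available for the chosen prior:

```python
import numpy as np

# Sketch of Eq. (17): the equal-time linear response covariance, given the
# converged mean-field parameters and a user-supplied df/dgamma for the prior.

def lr_covariance(J, gamma_t, lam_t, df_dgamma):
    F = df_dgamma(gamma_t, lam_t)               # df/dgamma, one entry per source
    Lambda_t = np.diag(1.0 / F + np.diag(J))    # Lambda_t as defined above
    return np.linalg.inv(Lambda_t - J)          # chi^t = (Lambda_t - J)^{-1}
```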
Figures 3 and 4 show the same but in a high-noise setting. The dynamical plots show the trajectory of the fix-point iteration where 'x' marks the starting point and '0' the final point. Ideally, the noise-less measurements would consist of the four combinations (with signs) of the columns in the mixing matrix. However, due to the noise, the measurement will be scattered around these "prototype" observations. = In the low-noise level setting both approaches find good approximations to the true mixing matrix and sources. However, the convergence rate of the LR approach is found to be faster. For high-noise variance the NMF approach fails to recover the true statistics. It is seen that one of the directions in the mixing matrix vanishes which in tum results in overestimating the noise variance. ::= TIJTIJTIJ -5 0 X, 5 -2 0 2 -2 0 X, 2 -2 0 X, 2 Figure 5: Overcomplete continuous source recovery with M 3 and D = 2. Shows from left to right the observations, +/- the column vectors of; the true A; the estimated A (NMF); estimated A (LR). 2 x 0 N -1 -2 -2 " 2 ~ " ?fJ " "~" x 0 N x <!"" 0 X, ,f 2.5 " -1 2 -2 -2 2 " .~ 1.5 .. > x eI' 0 X, , u 2 0.5 "'------- -, 1 0 -,_._._._. -,_ .. 1000 2000 iteration Figure 6: Overcomplete continuous source recovery with M = 3 and D = 2. Same plot as in figure 2. Note that the initial iteration step for A is very large. 4.2 Continuous Source To give a tractable example which illustrates the improvement by LR, we consider the Gaussian prior P(Smt) ex: exp( -o.S~t!2) (not suitable for source separation). This leads to fbmt, Amt) = 'Ymt/(o. - Amt). Since we have a factorized distribution, ensemble learning predicts (SmtSm't') - (Smt}(Sm't') = 8mm ,8tt' (a. - Amt)-l = 8mm ,8tt' (a. Jmm)-l, where the second equality follows from eq. (11). Linear response eq. (17) gives (SmtSm't') - (Smt}(Sm't') = 8tt' [(0.1 -J)-l]mm' which is identical with the exact result obtained by direct integration. For the popular choice of prior P(Smt) = 7r cos ~ s tnt [1], it is not possible to derive fbmt. Amt) analytically. However, fbmt. Amt) can be calculated analytically for the very similar Laplace distribution. Both these examples have positive kurtosis. Mean field equations for negative kurtosis can be obtained using the prior P(Smt) exp( -(Smt - 1-?)2/2) + exp( -(Smt + 1-?)2/2) [1] leading to ex:= Figure 5 and 6 show simulations using this source prior with 1-? = 1 in an overcomplete setting with D = 2 and M = 3. Note that 1-? = 1 yields a unimodal source distribution and hence qualitatively different from the bimodal prior considered in the binary case. In the overcomplete setting the NMF approach fails to recover the true sources. See [13] for further discussion of the overcomplete case. 5 Conclusion We have presented a general ICA mean field framework based upon ensemble learning and linear response theory. The naive mean-field approach (pure ensemble learning) fails in some cases and we speculate that it is incapable of handling the overcomplete case (more sources than sensors). Linear response theory, on the other hand, succeeds in all the examples studied. There are two directions in which we plan to extend this work: (1) to sources with temporal correlations and (2) to source models defined not by a parametric source prior, but directly in terms of the function j, which defines the mean field equations. Starting directly from the j-function makes it possible to test a whole range of implicitly defined source priors. 
5 Conclusion

We have presented a general ICA mean field framework based upon ensemble learning and linear response theory. The naive mean-field approach (pure ensemble learning) fails in some cases, and we speculate that it is incapable of handling the overcomplete case (more sources than sensors). Linear response theory, on the other hand, succeeds in all the examples studied. There are two directions in which we plan to extend this work: (1) to sources with temporal correlations and (2) to source models defined not by a parametric source prior, but directly in terms of the function f, which defines the mean field equations. Starting directly from the f-function makes it possible to test a whole range of implicitly defined source priors.

A detailed analysis of a large selection of constrained and unconstrained source priors as well as comparisons of LR and the TAP approach can be found in [14].

Acknowledgments

PHS wishes to thank Mike Jordan for stimulating discussions on the mean field and variational methods. This research is supported by the Swedish Foundation for Strategic Research as well as the Danish Research Councils through the Computational Neural Network Center (CONNECT) and the THOR Center for Neuroinformatics.

References

[1] T.-W. Lee: Independent Component Analysis, Kluwer Academic Publishers, Boston (1998).
[2] A. Belouchrani and J.-F. Cardoso: Maximum Likelihood Source Separation by the Expectation-Maximization Technique: Deterministic and Stochastic Implementation. In Proc. NOLTA, 49-53 (1995).
[3] D. MacKay: Maximum Likelihood and Covariant Algorithms for Independent Components Analysis. "Draft 3.7" (1996).
[4] H. Lappalainen and J. W. Miskin: Ensemble Learning. Advances in Independent Component Analysis, Ed. M. Girolami, In press (2000).
[5] C. Peterson and J. Anderson: A Mean Field Theory Learning Algorithm for Neural Networks, Complex Systems 1, 995-1019 (1987).
[6] H. J. Kappen and F. B. Rodriguez: Efficient Learning in Boltzmann Machines Using Linear Response Theory, Neural Computation 10, 1137-1156 (1998).
[7] G. Parisi: Statistical Field Theory, Addison Wesley, Reading, Massachusetts (1988).
[8] L. K. Saul, T. Jaakkola and M. I. Jordan: Mean Field Theory of Sigmoid Belief Networks, Journal of Artificial Intelligence Research 4, 61-76 (1996).
[9] M. Opper and O. Winther: Tractable Approximations for Probabilistic Models: The Adaptive TAP Mean Field Approach, Submitted to Phys. Rev. Lett. (2000).
[10] L. K. Hansen: Blind Separation of Noisy Image Mixtures. Advances in Independent Component Analysis, Ed. M. Girolami, In press (2000).
[11] L. Csató, E. Fokoue, M. Opper, B. Schottky and O. Winther: Efficient Approaches to Gaussian Process Classification. In Advances in Neural Information Processing Systems 12 (NIPS'99), Eds. S. A. Solla, T. K. Leen, and K.-R. Müller, MIT Press (2000).
[12] A.-J. van der Veen: Analytical Method for Blind Binary Signal Separation. IEEE Trans. on Signal Processing 45(4), 1078-1082 (1997).
[13] M. S. Lewicki and T. J. Sejnowski: Learning Overcomplete Representations, Neural Computation 12, 337-365 (2000).
[14] P. A. d. F. R. Højen-Sørensen, O. Winther and L. K. Hansen: Mean Field Approaches to Independent Component Analysis, In preparation.
Recognizing Hand-written Digits Using Hierarchical Products of Experts

Guy Mayraz & Geoffrey E. Hinton
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, U.K.

Abstract

The product of experts learning procedure [1] can discover a set of stochastic binary features that constitute a non-linear generative model of handwritten images of digits. The quality of generative models learned in this way can be assessed by learning a separate model for each class of digit and then comparing the unnormalized probabilities of test images under the 10 different class-specific models. To improve discriminative performance, it is helpful to learn a hierarchy of separate models for each digit class. Each model in the hierarchy has one layer of hidden units and the nth level model is trained on data that consists of the activities of the hidden units in the already trained (n − 1)th level model. After training, each level produces a separate, unnormalized log probability score. With a three-level hierarchy for each of the 10 digit classes, a test image produces 30 scores which can be used as inputs to a supervised, logistic classification network that is trained on separate data. On the MNIST database, our system is comparable with current state-of-the-art discriminative methods, demonstrating that the product of experts learning procedure can produce effective generative models of high-dimensional data.

1 Learning products of stochastic binary experts

Hinton [1] describes a learning algorithm for probabilistic generative models that are composed of a number of experts. Each expert specifies a probability distribution over the visible variables, and the experts are combined by multiplying these distributions together and renormalizing:

p(d | θ_1, …, θ_n) = Π_m p_m(d | θ_m) / Σ_c Π_m p_m(c | θ_m),   (1)

where d is a data vector in a discrete space, θ_m is all the parameters of individual model m, p_m(d | θ_m) is the probability of d under model m, and c is an index over all possible vectors in the data space.

A Restricted Boltzmann machine [2, 3] is a special case of a product of experts in which each expert is a single, binary stochastic hidden unit that has symmetrical connections to a set of visible units, and connections between the hidden units are forbidden. Inference in an RBM is much easier than in a general Boltzmann machine and it is also much easier than in a causal belief net because there is no explaining away. There is therefore no need to perform any iteration to determine the activities of the hidden units. The hidden states, s_j, are conditionally independent given the visible states, s_i, and the distribution of s_j is given by the standard logistic function:

p(s_j = 1) = 1 / (1 + exp(−Σ_i w_ij s_i)).   (2)

Conversely, the hidden states of an RBM are marginally dependent, so it is easy for an RBM to learn population codes in which units may be highly correlated. It is hard to do this in causal belief nets with one hidden layer because the generative model of a causal belief net assumes marginal independence.

An RBM can be trained using the standard Boltzmann machine learning algorithm, which follows a noisy but unbiased estimate of the gradient of the log likelihood of the data. One way to implement this algorithm is to start the network with a data vector on the visible units and then to alternate between updating all of the hidden units in parallel and updating all of the visible units in parallel.
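A minimal sketch of the two conditional updates used in this alternation follows; biases are omitted for brevity, which is an assumption of the sketch rather than of the paper.

```python
import numpy as np

# Minimal sketch of the factorial conditionals used in the Gibbs alternation.
# W is the visible-to-hidden weight matrix; biases are omitted for brevity.

def hidden_probs(V, W):
    return 1.0 / (1.0 + np.exp(-V @ W))      # Eq. (2): p(s_j = 1 | visibles)

def visible_probs(H, W):
    return 1.0 / (1.0 + np.exp(-H @ W.T))    # symmetric weights: same form

def sample_binary(P, rng=None):
    rng = rng or np.random.default_rng()
    return (rng.random(P.shape) < P).astype(float)
```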
Each update picks a binary state for a unit from its posterior distribution given the current states of all the units in the other set. If this alternating Gibbs sampling is run to equilibrium, there is a very simple way to update the weights so as to minimize the Kullback-Leibler divergence, Q⁰‖Q^∞, between the data distribution, Q⁰, and the equilibrium distribution of fantasies over the visible units, Q^∞, produced by the RBM [4]:

Δw_ij ∝ ⟨s_i s_j⟩_{Q⁰} − ⟨s_i s_j⟩_{Q^∞},   (3)

where ⟨s_i s_j⟩_{Q⁰} is the expected value of s_i s_j when data is clamped on the visible units and the hidden states are sampled from their conditional distribution given the data, and ⟨s_i s_j⟩_{Q^∞} is the expected value of s_i s_j after prolonged Gibbs sampling. This learning rule does not work well because it can take a long time to approach thermal equilibrium and the sampling noise in the estimate of ⟨s_i s_j⟩_{Q^∞} can swamp the gradient. [1] shows that it is far more effective to minimize the difference between Q⁰‖Q^∞ and Q¹‖Q^∞, where Q¹ is the distribution of the one-step reconstructions of the data that are produced by first picking binary hidden states from their conditional distribution given the data and then picking binary visible states from their conditional distribution given the hidden states. The exact gradient of this "contrastive divergence" is complicated because the distribution Q¹ depends on the weights, but [1] shows that this dependence can safely be ignored to yield a simple and effective learning rule for following the approximate gradient of the contrastive divergence:

Δw_ij ∝ ⟨s_i s_j⟩_{Q⁰} − ⟨s_i s_j⟩_{Q¹}.   (4)

For images of digits, it is possible to apply Eq. 4 directly if we use stochastic binary pixel intensities, but it is more effective to normalize the intensities to lie in the range [0, 1] and then to use these real values as the inputs to the hidden units. During reconstruction, the stochastic binary pixel intensities required by Eq. 4 are also replaced by real-valued probabilities. Finally, the learning rule can be made less noisy by replacing the stochastic binary activities of the hidden units by their expected values. So the learning rule we actually use is:

Δw_ij ∝ ⟨p_i p_j⟩_{Q⁰} − ⟨p_i p_j⟩_{Q¹}.   (5)

Stochastically chosen binary states of the hidden units are still used for computing the probabilities of the reconstructed pixels. This prevents each real-valued hidden probability from conveying more than 1 bit of information to the reconstruction.
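One sweep of this rule over a minibatch can be sketched as follows; momentum and weight decay, which the training procedure of section 3 adds (Eq. 6), are omitted here for clarity.

```python
import numpy as np

# Sketch of one CD-1 weight update following Eq. (5) on a minibatch V0 of
# normalized pixel intensities (rows in [0, 1]). Binary hidden states drive
# the reconstruction, as the text prescribes, while the weight statistics
# use real-valued probabilities.

def cd1_update(V0, W, lr=0.1, rng=None):
    rng = rng or np.random.default_rng()
    P0 = 1.0 / (1.0 + np.exp(-V0 @ W))              # hidden probs on data
    H0 = (rng.random(P0.shape) < P0).astype(float)  # binary hidden states
    V1 = 1.0 / (1.0 + np.exp(-H0 @ W.T))            # one-step reconstruction probs
    P1 = 1.0 / (1.0 + np.exp(-V1 @ W))              # hidden probs on reconstruction
    n = V0.shape[0]
    return W + lr * (V0.T @ P0 - V1.T @ P1) / n     # <p_i p_j>_Q0 - <p_i p_j>_Q1
```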
2 The MNIST database

MNIST, a standard database for testing digit recognition algorithms, is available at http://www.research.att.com/~yann/ocr/mnist/index.html. MNIST has 60,000 training images and 10,000 test images. Images are highly variable in style but are size-normalized and translated so that the center of gravity of their intensity lies at the center of a fixed-size image of 28 by 28 pixels. A number of well-known learning algorithms have been run on the MNIST database [5], so it is easy to assess the relative performance of a novel algorithm. Some of the experiments in [5] included deskewing images or augmenting the training set with distorted versions of the original images.

Table 1: Performance of various learning methods on the MNIST test set.

METHOD                                                     % ERRORS
Linear classifier (1-layer NN)                             12.0
K-nearest-neighbors, Euclidean                             5.0
1000 RBF + linear classifier                               3.6
Best Back-Prop: 3-layer NN, 500+150 hidden units           2.95
Reduced Set SVM, degree 5 polynomial                       1.0
LeNet-1 [with 16x16 input]                                 1.7
LeNet-5                                                    0.95
Product of Experts (separate 3-layer net for each model)   1.7

We did not use deskewing or distortions in our main experiments, so we only compare our results with other methods that did not use them. The results in Table 1 should be treated with caution. Some attempts to replicate the degree 5 polynomial SVM have produced slightly higher error rates of 1.4% [6] and standard backpropagation can be carefully tuned to achieve under 2% (John Platt, personal communication).

Table 1 shows that it is possible to achieve a result that is comparable with the best discriminative techniques by using multiple PoE models of each digit class to extract scores that represent unnormalized log probabilities. These scores are then used as the inputs to a simple logistic classifier. The fact that a system based on generative models can come close to the very best discriminative systems suggests that the generative models are doing a good job of capturing the distributions.

3 Training the individual PoE models

The MNIST database contains an average of 6,000 training examples per digit, but these examples are unevenly distributed among the digit classes. In order to simplify the research we produced a balanced database by using only 5,400 examples of each digit. The first 4,400 examples were the unsupervised training set used for training the individual PoE models. The remaining examples of each of the 10 digits constituted the supervised training set used for training the logistic classification net that converts the scores of all the PoE models into a classification.

The original intensity range in the MNIST images was 0 to 255. This was normalized to the range 0 to 1 so that we could treat intensities as probabilities. The normalized pixel intensities were used as the initial activities of the 784 visible units corresponding to the 28 by 28 pixels. The visible units were fully connected to a single layer of hidden units. The weights between the input and hidden layer were initialized to small, zero-mean, Gaussian-distributed, random values. The 4,400 training examples were divided into 44 mini-batches. One epoch of learning consisted of a pass through all 44 mini-batches in fixed order with the weights being updated after each minibatch.

[Figure 1: a 10 x 10 array of blobs; x-axis labeled "digit to be explained" (0-9), one row per first-level model.]
Figure 1: The areas of the blobs show the mean goodness of validation set digits using only the first-level models with 500 hidden units (white is positive). A different constant is added to all the goodness scores of each model so that rows sum to zero. Successful discrimination depends on models being better on their own class than other models are. The converse is not true: models can be better reconstructing other, easier classes of digits than their own class.

[Figure 2: rows of example 7s and 9s alongside their reconstructions.]
Figure 2: Cross reconstruction of 7s and 9s with models containing 25 hidden units (top) and 100 hidden units (bottom). The central horizontal line in each block contains originals, and the lines above and below are reconstructions by the 7s and 9s models respectively. Both models produce stereotyped digits in the small net and much better reconstructions in the large one for both the digit classes. The 9s model sometimes tries to close the loop in 7s, and the 7s model tries to open the loop in 9s.

We used a momentum method with a small
amount of weight decay, so the change in a weight after the t-th minibatch was:

Δw_ij^t = μ Δw_ij^{t-1} + 0.1 ( ⟨p_i p_j⟩_{Q^0_t} − ⟨p_i p_j⟩_{Q^1_t} − 0.0001 w_ij )   (6)

where Q^0_t and Q^1_t are averages over the data or the one-step reconstructions for minibatch t, and the momentum, μ, was 0 for the first 50 weight changes and 0.9 thereafter. The hidden and visible biases, b_i and b_j, were initialized to zero. Their values were similarly altered (by treating them like connections to a unit that was always on) but with no weight decay.
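In code the update of Eq. 6 is one line; the sketch below assumes the correlation matrices for minibatch t have already been computed (for example by the contrastive-divergence routine sketched earlier), and all names are our own:

```python
def minibatch_step(W, dW_prev, corr_data, corr_recon, t):
    """Eq. 6: weight change for minibatch t with momentum and weight decay.
    corr_data, corr_recon : <p_i p_j> averaged over minibatch t under
    Q^0_t (data) and Q^1_t (one-step reconstructions)."""
    mu = 0.0 if t < 50 else 0.9          # momentum schedule from the paper
    dW = mu * dW_prev + 0.1 * (corr_data - corr_recon - 0.0001 * W)
    return W + dW, dW
```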
Rather than picking one particular number of hidden units, we trained networks with various different numbers of units and then used discriminative performance on the validation set to decide on the most effective number of hidden units. The largest network was the best, even though each digit model contains 392,500 parameters trained on only 4,400 images. The receptive fields learned by the hidden units are quite local. Since the hidden units are fully connected and have random initial weights, the learning procedure must infer the spatial proximity of pixels from the statistics of their joint activities.

Figure 1 shows the mean goodness scores of all 10 models on all 10 digit classes. Figure 2 shows reconstructions produced by the bottom-level models on previously unseen data from the digit class they were trained on and also on data from a different digit class. With 500 hidden units, the 7s model is almost perfect at reconstructing 9s. This is because a model gets better at reconstructing more or less any image as its set of available features becomes more varied and more local. Despite this, the larger networks give better discriminative information.

3.1 Multi-layer models

Networks that use a single layer of hidden units and do not allow connections within a layer have some major advantages over more general networks. With an image clamped on the visible units, the hidden units are conditionally independent. So it is possible to compute an unbiased sample of the binary states of the hidden units without any iteration. This property makes PoE's easy to train and it is lost in more general architectures. If, for example, we introduce a second hidden layer that is symmetrically connected to the first hidden layer, it is no longer straightforward to compute the posterior expected activity of a unit in the first hidden layer when given an image that is assumed to have been generated by the multilayer model at thermal equilibrium. The posterior distribution can be computed by alternating Gibbs sampling between the two hidden layers, but this is slow and noisy.

Fortunately, if our ultimate goal is discrimination, there is a computationally convenient alternative to using a multilayer Boltzmann machine. Having trained a one-hidden-layer PoE on a set of images, it is easy to compute the expected activities of the hidden units on each image in the training set. These hidden activity vectors will themselves have interesting statistical structure because a PoE is not attempting to find independent causes and has no implicit penalty for using hidden units that are marginally highly correlated. So we can learn a completely separate PoE model in which the activity vectors of the hidden units are treated as the observed data and a new layer of hidden units learns to model the structure of this "data".

It is not entirely clear how this second-level PoE model helps as a way of modelling the original image distribution, but it is clear that if a first-level PoE is trained on images of 2's, we would expect the vectors of hidden activities to be very different when it is presented with a 3, even if the features it has learned are quite good at reconstructing the 3. So a second-level model should be able to assign high scores to the vectors of hidden activities that are typical of the 2 model when it is given images of 2's and low scores to the hidden activities of the 2 model when it is given images that contain combinations of features that are not normally present at the same time in a 2.

We used a three-level hierarchy of PoE's for each digit class. The levels were trained sequentially and to simplify the research we always used the same number of hidden units at each level. We trained models of five different sizes with 25, 100, 200, 400, and 500 hidden units per level.

4 The logistic classification network

An attractive aspect of PoE's is that it is easy to compute the numerator in Eq. 1, so it is easy to compute a goodness score which is equal to the log probability of a data vector up to an additive constant. Figure 3 shows the goodness of the 7s and 9s models (the most difficult pair of digits to discriminate) when presented with test images of both 7s and 9s. It can be seen that a line can be passed that separates the two digit sets almost perfectly. It is also encouraging that all of the errors are close to the decision boundary, so there are no confident misclassifications.

The classification network had 10 output units, each of which computed a logit, x, that was a linear function of the goodness scores, g, of the various PoE models, m, on an image, c. The probability assigned to class j was then computed by taking a "softmax" of the logits:

p_j^c = exp(x_j^c) / Σ_k exp(x_k^c),   where   x_j^c = b_j + Σ_m g_m^c w_mj   (7)

There were 10 digit classes each with a three-level hierarchy of PoE models, so the classification network had 30 inputs and therefore 300 weights and 10 output biases. Both weights and biases were initialized to zero.

[Figure 3: two scatter panels (a) and (b); x-axis: score under 7s model (1st layer), roughly 180-260.]
Figure 3: Validation set cross goodness results of (a) the first-level model and (b) the third-level model of 7s and 9s. All models have 500 hidden units. The third-level models clearly give higher goodness scores for second-level hidden activities in their own hierarchy than for the hidden activities in the other hierarchy.

The weights were learned by a momentum version of gradient ascent in the log probability assigned to the correct class. Since there were only 310 weights to train, little effort was devoted to making the learning efficient.

Δw_mj(t) = μ Δw_mj(t−1) + 0.0002 Σ_c g_m^c (t_j^c − p_j^c)   (8)

where t_j^c is 1 if class j is the correct answer for training case c and 0 otherwise. The momentum μ was 0.9. The biases were treated as if they were weights from an input that always had a value of 1 and were learned in exactly the same way. In each training epoch the weight changes were averaged over the whole supervised training set¹. We used separate data for training the classification network because we expect the goodness score produced by a PoE of a given class to be worse and more variable on exemplars of that class that were not used to train the PoE and it is these poor and noisy scores that are relevant for the real, unseen test data.

¹We held back part of the supervised training set to use as a validation set in determining the optimal number of epochs to train the classification net, but once this was decided we retrained on all the supervised training data for that number of epochs.
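Eqs. 7 and 8 amount to multinomial logistic regression on the 30 goodness scores. A sketch, with array shapes and names as our own assumptions (for simplicity the bias update below omits momentum):

```python
import numpy as np

def classify(G, W, b):
    """Eq. 7: softmax over logits that are linear in the goodness scores.
    G : (n_cases, 30); W : (30, 10); b : (10,)."""
    X = G @ W + b
    E = np.exp(X - X.max(axis=1, keepdims=True))  # numerically stable
    return E / E.sum(axis=1, keepdims=True)

def epoch_step(G, T, W, b, dW_prev, lr=0.0002, mu=0.9):
    """Eq. 8: momentum gradient ascent in the log probability of the
    correct class, summed over the supervised training cases.
    T : (n_cases, 10) one-hot targets t_j^c."""
    P = classify(G, W, b)
    dW = mu * dW_prev + lr * (G.T @ (T - P))
    b_new = b + lr * (T - P).sum(axis=0)
    return W + dW, b_new, dW
```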
The training algorithm was run using goodness scores from PoE networks with different numbers of hidden units. The results in Table 2 show a consistent improvement in classification error as the number of units in the hidden layers of each PoE increases. There is no evidence for over-fitting, even though large PoE's are very good at reconstructing images of other digit classes or the hidden activity vectors of lower-level models in other hierarchies. It is possible to reduce the error rate by a further 0.1% by averaging together the goodness scores of corresponding levels of model hierarchies with 100 or more units per layer, but this model averaging is not nearly as effective as using extra levels.

Table 2: MNIST test set error rate as a function of the number of hidden units per level. There is no evidence of overfitting even when over 250,000 parameters are trained on only 4,400 examples.

Network size   Learning epochs   % Errors
25             25                3.8
100            100               2.3
200            200               2.2
400            200               2.0
500            500               1.7

5 Model-based normalization

The results of our current system are still not nearly as good as human performance. In particular, it appears the network has only a very limited understanding of image invariances. This is not surprising since it is trained on prenormalized data. Dealing with image invariances better will be essential for approaching human performance.

The fact that we are using generative models suggests an interesting way of refining the image normalization. If the normalization of an image is slightly wrong we would expect it to have lower probability under the correct class-specific model. So we should be able to use the gradient of the goodness score to iteratively adjust the normalization so that the data fits the model better. Using x translation as an example,

∂C/∂x = Σ_i (∂s_i/∂x)(∂C/∂s_i),   where   ∂C/∂s_i = b_i + Σ_j s_j w_ji

and s_i is the intensity of pixel i. ∂s_i/∂x is easily computed from the intensities of the left and right neighbors of pixel i and ∂C/∂s_i is just the top-down input to a pixel during reconstruction. Preliminary simulations by Yee Whye Teh on poorly normalized data show that this type of model-based renormalization improves the score of the correct model much more than the scores of the incorrect ones and thus eliminates most of the classification errors.
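A sketch of the translation gradient, assuming the top-down input to each pixel has already been computed during reconstruction; the central-difference approximation to ∂s_i/∂x is our own choice of discretization:

```python
import numpy as np

def translation_gradient(S, top_down):
    """dC/dx for a horizontal translation of the image (Section 5).
    S        : (28, 28) pixel intensities s_i
    top_down : (28, 28) top-down input dC/ds_i = b_i + sum_j s_j w_ji
    ds_i/dx is approximated by the central difference of the left and
    right neighbors of each pixel; border pixels are left at zero."""
    ds_dx = np.zeros_like(S)
    ds_dx[:, 1:-1] = (S[:, 2:] - S[:, :-2]) / 2.0
    return float(np.sum(ds_dx * top_down))
```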
Acknowledgments

We thank Yann Le Cun, Mike Revow and members of the Gatsby Unit for helpful discussions. This research was funded by the Gatsby Charitable Foundation.

References

[1] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, University College London, 2000.
[2] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[3] Yoav Freund and David Haussler. Unsupervised learning of distributions of binary vectors using 2-layer networks. In John E. Moody, Steve J. Hanson, and Richard P. Lippmann, editors, Advances in Neural Information Processing Systems, volume 4, pages 912-919. Morgan Kaufmann Publishers, Inc., 1992.
[4] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[5] Y. LeCun, L. D. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A. Muller, E. Sackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In F. Fogelman and P. Gallinari, editors, International Conference on Artificial Neural Networks, pages 53-60, Paris, 1995. EC2 & Cie.
[6] Chris J. C. Burges and B. Schölkopf. Improving the accuracy and speed of support vector machines. In Michael C. Mozer, Michael I. Jordan, and Thomas Petsche, editors, Advances in Neural Information Processing Systems, volume 9, page 375. MIT Press, 1997.
Efficient Learning of Linear Perceptrons

Shai Ben-David
Department of Computer Science, Technion, Haifa 32000, Israel
shai@cs.technion.ac.il

Hans Ulrich Simon
Fakultät für Mathematik, Ruhr-Universität Bochum, D-44780 Bochum, Germany
simon@lmi.ruhr-uni-bochum.de

Abstract

We consider the existence of efficient algorithms for learning the class of half-spaces in ℝⁿ in the agnostic learning model (i.e., making no prior assumptions on the example-generating distribution). The resulting combinatorial problem - finding the best agreement half-space over an input sample - is NP-hard to approximate to within some constant factor. We suggest a way to circumvent this theoretical bound by introducing a new measure of success for such algorithms. An algorithm is μ-margin successful if the agreement ratio of the half-space it outputs is as good as that of any half-space once training points that are inside the μ-margins of its separating hyper-plane are disregarded. We prove crisp computational complexity results with respect to this success measure: On one hand, for every positive μ, there exist efficient (poly-time) μ-margin successful learning algorithms. On the other hand, we prove that unless P=NP, there is no algorithm that runs in time polynomial in the sample size and in 1/μ that is μ-margin successful for all μ > 0.

1 Introduction

We consider the computational complexity of learning linear perceptrons for arbitrary (i.e. non-separable) data sets. While there are quite a few perceptron learning algorithms that are computationally efficient on separable input samples, it is clear that 'real-life' data sets are usually not linearly separable. The task of finding a linear perceptron (i.e. a half-space) that maximizes the number of correctly classified points for an arbitrary input labeled sample is known to be NP-hard. Furthermore, even the task of finding a half-space whose success rate on the sample is within some constant ratio of an optimal one is NP-hard [1].

A possible way around this problem is offered by the support vector machines paradigm (SVM). In a nutshell, the SVM idea is to replace the search for a linear separator in the feature space of the input sample by first embedding the sample into a Euclidean space of much higher dimension, so that the images of the sample points do become separable, and then applying learning algorithms to the image of the original sample. The SVM paradigm enjoys an impressive practical success; however, it can be shown ([3]) that there are cases in which such embeddings are bound to require high dimension and allow only small margins, which in turn entails the collapse of the known generalization performance guarantees for such learning.

We take a different approach. While sticking with the basic empirical risk minimization principle, we propose to replace the worst-case-performance analysis by an alternative measure of success. While the common definition of the approximation ratio of an algorithm requires the profit of an algorithm to remain within some fixed ratio of that of an optimal solution for all inputs, we allow the relative quality of our algorithm to vary between different inputs. For a given input sample, the number of points that the algorithm's output half-space should classify correctly relates not only to the success rate of the best possible half-space, but also to the robustness of this rate to perturbations of the hyper-plane.
This new success requirement is intended to provide a formal measure that, while being achievable by efficient algorithms, retains a guaranteed quality of the output 'whenever possible'. The new success measure depends on a margin parameter μ. An algorithm is μ-margin successful if, for any input labeled sample, it outputs a hypothesis half-space that classifies correctly as many sample points as any half-space can classify correctly with margin μ (that is, discounting points that are too close to the separating hyper-plane).

Consequently, a μ-margin successful algorithm is required to output a hypothesis with close-to-optimal performance on the input data (optimal in terms of the number of correctly classified sample points), whenever this input sample has an optimal separating hyper-plane that achieves larger-than-μ margins for most of the points it classifies correctly. On the other hand, if for every hyper-plane h that achieves a close-to-maximal number of correctly classified input points, a large percentage of the correctly classified points are close to h's boundary, then an algorithm can settle for a relatively poor success ratio without violating the μ-margin success criterion.

We obtain a crisp analysis of the computational complexity of perceptron learning under the μ-margin success requirement: On one hand, for every μ > 0 we present an efficient μ-margin successful learning algorithm (that is, an algorithm that runs in time polynomial in both the input dimension and the sample size). On the other hand, unless P=NP, no algorithm whose running time is polynomial in the sample size and dimension and in 1/μ can be μ-margin successful for all μ > 0. Note that, by the hardness of approximating linear perceptrons result of [1] cited above, for μ = 0, μ-margin learning is NP-hard (even NP-hard to approximate). We conclude that the new success criterion for learning algorithms provides a rigorous success guarantee that captures the constraints imposed on perceptron learning by computational efficiency requirements.

It is well known by now that margins play an important role in the analysis of generalization performance (or sample complexity). The results of this work demonstrate that a similar notion of margins is a significant component in the determination of the computational complexity of learning as well. Due to lack of space, in this extended abstract we skip all the technical proofs.

2 Definition and Notation

We shall be interested in the problem of finding a half-space that maximizes the agreement with a given labeled input data set. More formally:

Best Separating Hyper-plane (BSH) Inputs are of the form (n, S), where n ≥ 1 and S = {(x_1, η_1), ..., (x_m, η_m)} is a finite labeled sample, that is, each x_i is a point in ℝⁿ and each η_i is a member of {+1, −1}. A hyper-plane h(w, t), where w ∈ ℝⁿ and t ∈ ℝ, correctly classifies (x, η) if sign(⟨w, x⟩ − t) = η, where ⟨w, x⟩ denotes the dot product of the vectors w and x. We define the profit of h = h(w, t) on S as

profit(h|S) = |{(x_i, η_i) : h correctly classifies (x_i, η_i)}| / |S|

The goal of a Best Separating Hyper-plane algorithm is to find a pair (w, t) so that profit(h(w, t)|S) is as large as possible. In the sequel, we refer to an input instance with parameter n as an n-dimensional input.
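As a concrete reading of these definitions, the sketch below computes profit(h|S) and its margin-discounted variant (anticipating Definition 2.1 below), with margins normalized by ‖w‖₂ and ‖x‖₂ as in the margin function used in this paper; the code and its names are illustrative only:

```python
import numpy as np

def profit(w, t, X, eta, mu=0.0):
    """Fraction of sample points classified correctly by h(w, t), with
    points whose normalized margin is below mu discounted.
    X : (m, n) sample points; eta : (m,) labels in {+1, -1}.
    With mu = 0 this is the plain profit(h|S) of the BSH problem."""
    scores = X @ w - t
    correct = np.sign(scores) == eta
    margins = np.abs(scores) / (np.linalg.norm(w) *
                                np.linalg.norm(X, axis=1))
    return float(np.mean(correct & (margins >= mu)))
```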
On top of the Best Separating Hyper-plane problem we shall also refer to the following combinatorial optimization problems:

Best Separating Homogeneous Hyper-plane (BSHH) - The same problem as BSH, except that the separating hyper-plane must be homogeneous, that is, t must be set to zero. The restriction of BSHH to input points from S^{n−1}, the unit sphere in ℝⁿ, is called the Best Separating Hemisphere Problem (BSHem) in the sequel.

Densest Hemisphere (DHem) Inputs are of the form (n, P), where n ≥ 1 and P is a list of (not necessarily different) points from S^{n−1} - the unit sphere in ℝⁿ. The problem is to find the Densest Hemisphere for P, that is, a weight vector w ∈ ℝⁿ such that H⁺(w, 0) contains as many points from P as possible (accounting for their multiplicity in P).

Densest Open Ball (DOB) Inputs are of the form (n, P), where n ≥ 1, and P is a list of points from ℝⁿ. The problem is to find the Densest Open Ball of radius 1 for P, that is, a center z ∈ ℝⁿ such that B(z, 1) contains as many points from P as possible (accounting for their multiplicity in P).

For the sake of our proofs, we shall also have to address the following well studied optimization problem:

MAX-E2-SAT Inputs are of the form (n, C), where n ≥ 1 and C is a collection of 2-clauses over n Boolean variables. The problem is to find an assignment α ∈ {0, 1}ⁿ satisfying as many 2-clauses of C as possible.

More generally, a maximization problem defines for each input instance I a set of legal solutions, and for each (instance, legal-solution) pair (I, σ), it defines profit(I, σ) ∈ ℝ₊ - the profit of σ on I. For each maximization problem Π and each input instance I for Π, opt_Π(I) denotes the maximum profit that can be realized by a legal solution for I. The subscript Π is omitted when this does not cause confusion. The profit realized by an algorithm A on input instance I is denoted by A(I). The quantity

(opt(I) − A(I)) / opt(I)

is called the relative error of algorithm A on input instance I. A is called a δ-approximation algorithm for Π, where δ ∈ ℝ₊, if its relative error on I is at most δ for all input instances I.

2.1 The new notion of approximate optimization: μ-margin approximation

As mentioned in the introduction, we shall discuss a variant of the above common notion of approximation for the best separating hyper-plane problem (as well as for the other geometric maximization problems listed above). The idea behind this new notion, which we term 'μ-margin approximation', is that the required approximation rate varies with the structure of the input sample. When there exist optimal solutions that are 'stable', in the sense that minor variations to these solutions will not affect their cost, then we require a high approximation ratio. On the other hand, when all optimal solutions are 'unstable' then we settle for lower approximation ratios. The following definitions focus on separation problems, but extend to densest set problems in the obvious way.

Definition 2.1 Given a hypothesis class ℋ = ∪_n ℋ_n, where each ℋ_n is a collection of subsets of ℝⁿ, and a parameter μ ≥ 0:

• A margin function is a function M : ∪_n (ℋ_n × ℝⁿ) → ℝ₊. That is, given a hypothesis h ⊆ ℝⁿ and a point x ∈ ℝⁿ, M(h, x) is a non-negative real number - the margin of x w.r.t. h. In this work, in most cases M(h, x) is the Euclidean distance between x and the boundary of h, normalized by ‖x‖₂ and, for linear separators, by the 2-norm of the hyper-plane h as well.
• Given a finite labeled sample S and a hypothesis h ∈ ℋ_n, the profit realized by h on S with margin μ is

profit(h|S, μ) = |{(x_i, η_i) : h correctly classifies (x_i, η_i) and M(h, x_i) ≥ μ}| / |S|

• For a labeled sample S, let opt_μ(S) = max_{h∈ℋ_n} profit(h|S, μ).

• h ∈ ℋ_n is a μ-margin approximation for S w.r.t. ℋ if profit(h|S) ≥ opt_μ(S).

• An algorithm A is μ-successful for ℋ if for every finite n-dimensional input S it outputs A(S) ∈ ℋ_n which is a μ-margin approximation for S w.r.t. ℋ.

• Given any of the geometric maximization problems listed above, Π, its μ-relaxation is the problem of finding, for each input instance of Π, a μ-margin approximation. For a given parameter μ > 0, we denote the μ-relaxation of a problem Π by Π[μ].

3 Efficient μ-margin successful learning algorithms

Our hyper-plane learning algorithm is based on the following result of Ben-David, Eiron and Simon [2]:

Theorem 3.1 For every (constant) μ > 0, there exists a μ-margin successful polynomial time algorithm A_μ for the Densest Open Ball Problem.

We shall now show that the existence of a μ-successful algorithm for Densest Open Balls implies the existence of μ-successful algorithms for Densest Hemispheres and Best Separating Homogeneous Hyper-planes. Towards this end we need notions of reductions between combinatorial optimization problems. The first definition, of a cost preserving polynomial reduction, is standard, whereas the second definition is tailored for our notion of μ-margin success. Once this, somewhat technical, preliminary stage is over we shall describe our learning algorithms and prove their performance guarantees.

Definition 3.2 Let Π and Π′ be two maximization problems. A cost preserving polynomial reduction from Π to Π′, written as Π ≤_pol^cp Π′, consists of the following components:

• a polynomial time computable mapping which maps input instances of Π to input instances of Π′, so that whenever I is mapped to I′, opt(I′) ≥ opt(I);

• for each I, a polynomial time computable mapping which maps each legal solution σ′ for I′ to a legal solution σ for I having the same profit as σ′.

The following result is evident:

Lemma 3.3 If Π ≤_pol^cp Π′ and there exists a polynomial time δ-approximation algorithm for Π′, then there exists a polynomial time δ-approximation algorithm for Π.

Claim 3.4 BSH ≤_pol^cp BSHH ≤_pol^cp BSHem ≤_pol^cp DHem.

Proof Sketch: By adding a coordinate one can translate hyper-planes to homogeneous hyper-planes (i.e., hyper-planes that pass through the origin). To get from the homogeneous hyper-planes separating problem to the best separating hemisphere problem, one applies the standard scaling trick. To get from there to the densest hemisphere problem, one applies the standard reflection trick. □

We are interested in μ-relaxations of the above problems. We shall therefore introduce a slight modification of the definition of a cost-preserving reduction which makes it applicable to μ-relaxed problems.

Definition 3.5 Let Π and Π′ be two geometric maximization problems, and μ, μ′ > 0. A cost preserving polynomial reduction from Π[μ] to Π′[μ′], written as Π[μ] ≤_pol^cp Π′[μ′], consists of the following components:

• a polynomial time computable mapping which maps input instances of Π to input instances of Π′, so that whenever I is mapped to I′, opt_{μ′}(I′) ≥ opt_μ(I);

• for each I, a polynomial time computable mapping which maps each legal solution σ′ for I′ to a legal solution σ for I having the same profit as σ′.
The following result is evident:

Lemma 3.6 If Π[μ] ≤_pol^cp Π′[μ′] and there exists a polynomial time μ′-margin successful algorithm for Π′, then there exists a polynomial time μ-margin successful algorithm for Π.

To conclude our reduction of the Best Separating Hyper-plane problem to the Densest Open Ball problem we need yet another step.

Lemma 3.8 For μ > 0, let μ′ = 1 − √(1 − μ²) and μ″ = μ²/2. Then

DHem[μ] ≤_pol^cp DOB[μ′] ≤_pol^cp DOB[μ″].

The proof is a bit technical and is deferred to the full version of this paper. Applying Theorem 3.1 and the above reductions, we therefore get:

Theorem 3.9 For each (constant) μ > 0, there exists a μ-successful polynomial time algorithm A_μ for the Best Separating Hyper-plane problem.

Clearly, the same result holds for the problems BSHH, DHem and BSHem as well.

Let us conclude by describing the learning algorithms for the BSH (or BSHH) problem that result from this analysis. We construct a family (A_k)_{k∈ℕ} of polynomial time algorithms. Given a labeled input sample S, the algorithm A_k exhaustively searches through all subsets of S of size ≤ k. For each such subset, it computes a hyper-plane that separates the positive from the negative points of the subset with maximum margin (if a separating hyper-plane exists). The algorithm then computes the number of points in S that each of these hyper-planes classifies correctly, and outputs the one that maximizes this number. In [2] we prove that our Densest Open Ball algorithm is μ-successful for μ = 1/√(k−1) (when applied to all k-size subsamples). Applying Lemma 3.8, we may conclude for problem BSH that, for every k, A_k is (4/(k−1))^{1/4}-successful. In other words: in order to be μ-successful, we must apply algorithm A_k for k = 1 + ⌈4/μ⁴⌉.
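A sketch of A_k in code (our own rendering, not the authors'; we approximate the maximum-margin separator of each subset with a hard-margin linear SVM, implemented here as scikit-learn's SVC with a very large C, and for brevity search subsets of size exactly k rather than ≤ k):

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def A_k(X, eta, k):
    """Exhaustive search over the k-point subsets of S; for each subset
    containing both labels, fit a (near) hard-margin separator and keep
    the hyper-plane classifying the most points of the whole sample."""
    best, best_correct = None, -1
    for idx in combinations(range(len(X)), k):
        xs, ys = X[list(idx)], eta[list(idx)]
        if len(set(ys)) < 2:
            continue                      # need both labels present
        svm = SVC(kernel="linear", C=1e6).fit(xs, ys)
        w, t = svm.coef_[0], -float(svm.intercept_[0])
        n_correct = int(np.sum(np.sign(X @ w - t) == eta))
        if n_correct > best_correct:
            best, best_correct = (w, t), n_correct
    return best, best_correct / len(X)
```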
4 NP-Hardness Results

We conclude this extended abstract by proving some NP-hardness results that complement rather tightly the positive results of the previous section. We shall base our hardness reductions on two known results.

Theorem 4.1 [Håstad, [4]] Assuming P≠NP, for any δ < 1/22, there is no polynomial time δ-approximation algorithm for MAX-E2-SAT.

Theorem 4.2 [Ben-David, Eiron and Long, [1]] Assuming P≠NP, for any δ < 3/418, there is no polynomial time δ-approximation algorithm for BSH.

Applying Claim 3.4 we readily get:

Corollary 4.3 Assuming P≠NP, for any δ < 3/418, there is no polynomial time δ-approximation algorithm for BSHH, BSHem, or DHem.

So far we discussed μ-relaxations only for a value of μ that was fixed regardless of the input dimension. All the above discussion extends naturally to the case of a dimension-dependent margin parameter. Let μ̄ denote a sequence (μ_1, ..., μ_n, ...). For a problem Π, its μ̄-relaxation refers to the problem obtained by considering the margin value μ_n for inputs of dimension n. A main tool for proving hardness is the notion of μ̄-legal input instances. An n-dimensional input sample S is called μ̄-legal if the maximal profit on S can be achieved by a hypothesis h* that satisfies profit(h*|S) = profit(h*|S, μ_n). Note that the μ̄-relaxation of a problem is NP-hard if the problem restricted to μ̄-legal input instances is NP-hard. Using a special type of reduction, which due to space constraints we cannot elaborate here, we can show that Theorem 4.1 implies the following:

Theorem 4.4 1. Assuming P≠NP, there is no polynomial time 1/198-approximation for BSH even when only 1/√(36n)-legal input instances are allowed. 2. Assuming P≠NP, there is no polynomial time 1/198-approximation for BSHH even when only 1/√(45(n+1))-legal input instances are allowed.

Using the standard cost preserving reduction chain from BSHH via BSHem to DHem, and noting that these reductions are obviously margin-preserving, we get the following:

Corollary 4.5 Let S be one of the problems BSHH, BSHem, or DHem, and let μ̄ be given by μ_n = 1/√(45(n+1)). Unless P=NP, there exists no polynomial time 1/198-approximation for S[μ̄]. In particular, the μ̄-relaxations of these problems are NP-hard.

Since the 1/√(45(n+1))-relaxation of the Densest Hemisphere Problem is NP-hard, applying Lemma 3.8 we get immediately:

Corollary 4.6 The 1/(45(n+1))-relaxation of the Densest Ball Problem is NP-hard.

Finally, note that Corollaries 4.4, 4.5 and 4.6 rule out the existence of "strong schemes" (A_μ) with running time of A_μ being also polynomial in 1/μ.

References

[1] Shai Ben-David, Nadav Eiron, and Philip Long. On the difficulty of approximately maximizing agreements. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory (COLT 2000), pages 266-274.
[2] Shai Ben-David, Nadav Eiron, and Hans Ulrich Simon. The computational complexity of densest region detection. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory (COLT 2000), pages 255-265.
[3] Shai Ben-David, Nadav Eiron, and Hans Ulrich Simon. Non-embeddability in Euclidean half-spaces. Technion TR, 2000.
[4] Johan Håstad. Some optimal inapproximability results. In Proceedings of the 29th Annual Symposium on Theory of Computing, pages 1-10, 1997.
Multiple time scales of adaptation in a neural code

Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, and Robert R. de Ruyter van Steveninck
NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
{adrienne, geoff, bialek, ruyter}@research.nj.nec.com

Abstract

Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.

1 Introduction

Adaptation was one of the first phenomena discovered when Adrian recorded the responses of single sensory neurons [1, 2]. Since that time, many different forms of adaptation have been found in almost all sensory systems. The simplest forms of adaptation, such as light and dark adaptation in the visual system, seem to involve just discarding a large constant background signal so that the system can maintain sensitivity to small changes. The idea of Attneave [3] and Barlow [4] that the nervous system tries to find an efficient representation of its sensory inputs implies that neural coding strategies should be adapted not just to constant parameters such as the mean light intensity, but to the entire distribution of input signals [5]; more generally, efficient strategies for processing (not just coding) of sensory signals must also be matched to the statistics of these signals [6]. Adaptation to statistics might happen on evolutionary time scales, or, at the opposite extreme, it might happen in real time as an animal moves through the world. There is now evidence from several systems for real time adaptation to statistics [7, 8, 9], and at least in one case it has been shown that the form of this adaptation indeed does serve to optimize the efficiency of representation, maximizing the information that a single neuron transmits about its sensory inputs [10].

Perhaps the simplest of statistical adaptation experiments, as in Ref. [7] and Fig. 1, is to switch between stimuli that are drawn from different probability distributions and ask how the neuron responds to the switch. When we 'repeat' the experiment we repeat the time dependence of the parameters describing the distribution, but we choose new signals from the same distributions; thus we probe the response or adaptation to the distributions and not to the particular signals. These switching experiments typically reveal transient responses to the switch that have rather long time scales, and it is tempting to identify these long time scales as the time scales of adaptation. On the other hand, one can also view the process of adapting to a distribution as one of learning the parameters of that distribution, or of accumulating evidence that the distribution has changed. Some features of the dynamics in the switching experiments match the dynamics of an optimal statistical estimator [11], but the overall time scale does not: for all the experiments we have seen, the apparent time scales of adaptation in a switching experiment are much longer than would be required to make reliable estimates of the relevant statistical parameters. In this work we re-examine the phenomena of statistical adaptation in the motion sensitive neurons of the fly visual system.
Specifically, we are interested in adaptation to the variance or dynamic range of the velocity distribution [10]. It has been shown that, in steady state, this adaptation includes a rescaling of the neuron's input/output relation, so that the system seems to encode dynamic velocity signals in relative units; this allows the system, presumably, to deal both with the ~50°/s motions that occur in straight flight and with the ~2000°/s motions that occur during acrobatic flight (see Ref. [12]). Further, the precise form of rescaling chosen by the fly's visual system is that which maximizes information transmission. There are several natural questions: (1) How long does it take the system to accomplish the rescaling of its input/output relation? (2) Are the transients seen in switching experiments an indication of gradual rescaling? (3) If the system adapts to the variance of its inputs, is the neural signal ambiguous about the absolute scale of velocity? (4) Can we see the optimization of information transmission occurring in real time?

2 Stimulus structure and experimental setup

A fly (Calliphora vicina) is immobilized in wax and views a computer controlled oscilloscope display while we record action potentials from the identified neuron H1 using standard methods. The stimulus movie is a random pattern of dark and light vertical bars, and the entire pattern moves along a random trajectory with velocity S(t); since the neuron is motion (and not position) sensitive we refer to this signal as the stimulus. We construct the stimulus S(t) as the product of a normalized white noise s(t), constructed from a random number sequence refreshed every T_s = 2 ms, and an amplitude or standard deviation σ(t) which varies on a characteristic timescale T_σ ≫ T_s. Frames of the movie are drawn every 2 ms. For analysis all spike times are discretized at the 2 ms resolution of the movie.

[Figure 1: (a) normalized spike rate vs. normalised time t/T for periods T = 20, 10 and 4 s; (b) decay time (sec) vs. period T (sec).]
Figure 1: (a) The spike rate measured in response to a square-wave modulated white noise stimulus s(t), averaged over many presentations of s(t), and normalized by the mean and standard deviation. (b) Decay time of the rate following an upward switch as a function of switching period T.

3 Spike rate dynamics

Switching experiments as described above correspond to a stimulus such that the amplitude σ(t) is a square wave, alternating between two values σ_1 and σ_2, σ_1 > σ_2. Experiments were performed over a range of switching periods (T = 40, 20, 10, 4 s), with the amplitudes σ_1 and σ_2 in a ratio of 5:1. Remarkably, the timescales of the response depend strongly on those of the experiment; in fact, the response times rescale by T, as is seen in Fig. 1(a). The decay of the rate in the first half of the experiment is fitted by an exponential, and in Fig. 1(b), the resulting decay time τ(T) is plotted as a function of T; we use an exponential not to insist that this is the correct form, only to extract a timescale. As suggested by the rescaling of Fig. 1(a), the fitted decay times are well described as a linear function of the stimulus period. This demonstrates that the timescale of adaptation of the rate is not absolute, but is a function of the timescale established in the experiment.
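To make the stimulus construction and the timescale extraction concrete, here is an illustrative sketch (parameter values, units and fitting choices are ours, not the experimental code):

```python
import numpy as np
from scipy.optimize import curve_fit

def switching_stimulus(T, sigma1=5.0, sigma2=1.0, dt=0.002, n_periods=10):
    """S(t) = sigma(t) * s(t): white noise refreshed every 2 ms times a
    square-wave envelope alternating between sigma1 and sigma2."""
    t = np.arange(0.0, n_periods * T, dt)
    s = np.random.default_rng(0).standard_normal(t.size)
    sigma = np.where((t % T) < T / 2, sigma1, sigma2)
    return t, sigma * s

def decay_time(t, rate):
    """Fit r(t) = a exp(-t/tau) + c to the rate after an upward switch,
    purely to extract the timescale tau plotted in Fig. 1(b)."""
    f = lambda tt, a, tau, c: a * np.exp(-tt / tau) + c
    p0 = (rate[0] - rate[-1], t[-1] / 5.0, rate[-1])
    (a, tau, c), _ = curve_fit(f, t, rate, p0=p0)
    return tau
```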
Large sudden changes in stimulus variance might trigger special mechanisms, so we turn to a signal that changes variance continuously: the amplitude σ(t) is taken to be the exponential of a sinusoid, σ(t) = exp(a sin(2πkt)), where the period T = 1/k was varied between 2 s and 240 s, and the constant a is fixed such that the amplitude varies by a factor of 10 over a cycle. A typical averaged rate response to the exponential-sinusoidal stimulus is shown in Fig. 2(a). The rate is close to sinusoidal over this parameter regime, indicating a logarithmic encoding of the stimulus variance. Significantly, the rate response shows a phase lead ΔΦ with respect to the stimulus. This may be interpreted as the effect of adaptation: at every point on the cycle, the gain of the response is set to a value defined by the stimulus a short time before.

[Figure 2: (a) normalized spike rate vs. normalised time t/T for T = 30, 60, 90 and 120 s; (b) time shift (sec) vs. period T (sec).]
Figure 2: (a) The spike rate measured in response to an exponential-sinusoidal modulation of a white noise stimulus s(t), averaged over presentations of s(t), and normalised by the mean and standard deviation, for several periods T. (b) The time shift δ between response and stimulus, for a range of periods T.

As before, the response of the system was measured over a range of periods T. Fig. 2(b) shows the measured timeshift δ(T) = T ΔΦ/2π of the response as a function of T. One observes that the relation is nearly linear over more than one order of magnitude in T; that is, the phase shift is approximately constant. Responses to stimulus sequences composed of many frequencies also exhibit a phase shift, consistent with that observed for the single frequency experiments.

4 The dynamic input-output relation

Both the switching and sinusoidally modulated experiments indicate that responses to changing the variance of input signals have multiple time scales, ranging from a few seconds to several minutes. Does it really take the system this long to adjust its input/output relation to the new input distribution? In the range of velocities used, and at the contrast level used in the laboratory, spiking in H1 depends on features of the velocity waveform that occur within a window of ~100 ms. After a few seconds, then, the system has had access to several tens of independent samples of the motion signal, and should be able to estimate its variance to within ~20%; after a minute the precision would be better than a few percent. In practice, we are changing the input variance not by a few percent but by a factor of two or ten; if the system were really efficient, these changes would be detected and compensated by adaptation on much shorter time scales. To address this, we look directly at the input/output relation as the standard deviation σ(t) varies in time. For simplicity we analyze (as in Ref. [10]) features of the stimulus that modulate the probability of occurrence of individual spikes, P(spike|stimulus); we will not consider patterns of spikes, although the same methods can be easily generalised.
The space of stimulus histories of length ~100 ms, discretised at 2 ms, leading up to a spike has a dimensionality of ~50, too large to allow adequate sampling of P(spike|stimulus) from the data, so we must begin by reducing the dimensionality of the stimulus description. The simplest way to do so is to find a subset of directions in stimulus space determined to be relevant for the system, and to project the stimulus onto that set of directions. These directions correspond to linear filters. Such a set of directions can be obtained from the moments of the spike-conditional stimulus; the first such moment is the spike-triggered average, or reverse correlation function [2]. It has been shown [10] that for H1, under these conditions, there are two relevant dimensions: a smoothed version of the velocity, and also its derivative. The rescaling observed in steady state experiments was seen to occur independently in both dimensions, so without loss of generality we will use as our filter the single dimension given by the spike-triggered average. The stimulus projected onto this filter will be denoted by s_0.

The filtered stimulus is then passed through a nonlinear decision process akin to a threshold. To calculate the input/output relation P(spike|s_0) [10], we use Bayes' rule:

P(spike|s_0) = P(spike) P(s_0|spike) / P(s_0)   (1)

The spike rate r(s_0) is proportional to the probability of spiking, r(s_0) ∝ P(spike|s_0), leading to the relation

r(s_0) / r̄ = P(s_0|spike) / P(s_0)   (2)

where r̄ is the mean spike rate. P(s_0) is the prior distribution of the projected stimulus, which we know. The distribution P(s_0|spike) is estimated from the projected stimulus evaluated at the spike times, and the ratio of the two is the nonlinear input/output relation.

A number of experiments have shown that the filter characteristics of H1 are adaptive, and we see this in the present experiments as well: as the amplitude σ(t) is decreased, the filter changes both in overall amplitude and shape. The filter becomes increasingly extended: the system integrates over longer periods of time under conditions of low velocities. Thus the filter depends on the input variance, and we expect that there should be an observable relaxation of the filter to its new steady state form after a switch in variance. We find, however, that within 200 ms following the switch, the amplitude of the filter has already adjusted to the new variance, and further that the detailed shape of the filter has attained its steady state form in less than 1 s. The precise timescale of the establishment of the new filter shape depends on the value of σ: for the change to σ_1, the steady state form is achieved within 200 ms. The long tail of the low variance filter for σ_2 < σ_1 is established more slowly. Nonetheless, these time scales which characterize adaptation of the filter are much shorter than the rate transients seen in the switching experiments, and are closer to what we might expect for an efficient estimator.

We construct time dependent input/output relations by forming conditional distributions using spikes from particular time slices in a periodic experiment. In Figs. 3.1(b) and 3.1(c), we show the input/output relation calculated in 1 s bins throughout the switching experiment. Within the first second the input/output relation is almost indistinguishable from its steady state form. Further, it takes the same form for the two halves of the experiment: it is rescaled by the standard deviation, as was seen for the steady state experiments.
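Eq. 2 suggests a direct histogram estimator for the input/output relation; the following sketch (our own naming and binning choices) projects the stimulus onto the spike-triggered average and takes the ratio of the spike-conditional and prior distributions:

```python
import numpy as np

def input_output_relation(stimulus, spikes, sta, n_bins=40):
    """Histogram estimator for Eq. 2: r(s0)/rbar = P(s0|spike)/P(s0).
    stimulus : stimulus samples at 2 ms resolution
    spikes   : binary array, 1 where a spike occurred
    sta      : spike-triggered average, used as the linear filter"""
    s0 = np.convolve(stimulus, sta[::-1], mode="same")  # project on filter
    edges = np.linspace(s0.min(), s0.max(), n_bins + 1)
    p_all, _ = np.histogram(s0, edges, density=True)
    p_spk, _ = np.histogram(s0[spikes > 0], edges, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        gain = np.where(p_all > 0, p_spk / p_all, np.nan)
    return 0.5 * (edges[1:] + edges[:-1]), gain
```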
The close collapse or rescaling of the input/output relations depends not only on the normalisation by the standard deviation, but also on the use of the "local" adapted filter (i.e. measured in the same time bin).

[Figure 3: three rows of panels for the (a) switching, (b) sinusoidal and (c) randomly modulated experiments; input/output curves are plotted on logarithmic axes.]
Figure 3: Input/output relations for (a) switching, (b) sinusoidal and (c) randomly modulated experiments. Figs. 3.1 show the modulation envelope σ(t), in log for (b) and (c) (solid), and the measured rate (dotted), normalised by mean and standard deviation. Figs. 3.2 show input/output relations calculated in non-overlapping bins throughout the stimulus cycle, with the input s_0 in units of the standard deviation of the whole stimulus. Figs. 3.3 show the input/output relations with the input rescaled to units of the local standard deviation.

Returning to the sinusoidal experiments, the input/output relations were constructed for T = 45 s in 20 non-overlapping bins of width 2.25 s. Once again the functions show a remarkable rescaling which is sharpened by the use of the appropriate local filter: see Figs. 3.2(b) and (c). Finally, we consider an amplitude which varies randomly with correlation time τ_σ ~ 3 s: σ(t) is a repeated segment of the exponential of a Gaussian random process, pictured in Fig. 3.3(a), with periodicity T = 90 s ≫ τ_σ. Dividing the stimulus into sequential bins of 2 s in width, we obtain the filters for each timeslice, and calculate the local prior distributions, which are not Gaussian in this case as they are distorted by the local variations of σ(t). Nonetheless, the ratio P(s_0|spike)/P(s_0) conspires such that the form of the input/output relation is preserved. In all three cases, our results show that the system rapidly and continuously adjusts its coding strategy, rescaling the input/output relation with respect to the local variance of the input as for steady state stimuli. Variance normalisation occurs as rapidly as is measurable, and the system chooses a similar form for the input/output relation in each case.

5 Information transmission

What does this mean for the coding efficiency of the neuron? An experiment was designed to track the information transmission as a function of time. We use a small set of N 2 s long random noise sequences {s_i(t)}, i = 1, ..., N, presented in random order at two different amplitudes, σ_1 and σ_2. We then ask how much information the spike train conveys about (a) which of the random segments s_i(t) and (b) which of the amplitudes σ_j was used. Specifically, the experiment consists of a series of trials of length 2 s where the fast component is one of the sequences {s_i}, and after 1 s, the amplitude switches from σ_1 to σ_2 or vice versa. N was taken to be 40, so that a 2 hour experiment provides approximately 80 samples for each (s_i, σ_j). This allows us to measure the mutual information between the response and either the fast or the slow component of the stimulus as a function of time across the 2 s repeated segment. We use only a very restricted subspace of σ and s: the maximum available information about σ is 1 bit, and about s is log_2 N. The spike response is represented by "words" [13], generated from the spike times discretised to timebins of 2 ms, where no spike is represented by 0, and a spike by 1. A word is defined as the binary number formed from 10 consecutive bins, so there are 2^10 possible words.
The information about the fast component s in trials of a given σ is

    I_σ(w(t); s) = H[P_σ(w(t))] − Σ_{j=1}^{N} P(s_j) H[P_σ(w(t)|s_j)],   (3)

where H is the entropy of the word distribution:

    H[P(w(t))] = − Σ_k P(w_k(t)) log_2 P(w_k(t)).   (4)

One can compare this information for different values of σ. Similarly, one can calculate the information about the amplitude using a given probe s:

    I_s(w(t); σ) = H[P_s(w(t))] − Σ_{j=1}^{2} P(σ_j) H[P_s(w(t)|σ_j)].   (5)

The amount of information for each s_j varies rapidly depending on the presence or absence of spikes, so we average these contributions over the {s_j} to give I(w; σ).

[Figure 4 curves: I(w;s) after an upward switch, I(w;s) after a downward switch, and I(w;σ), plotted against time relative to the switch (s)]

Figure 4: Information per spike as a function of time where σ is switched every 2 s.

The mutual information as a function of time is shown in Fig. 4, presented as bits/spike. As one would expect, the amount of information transmitted per second about the stimulus details, or s, depends on the ensemble parameter σ: larger velocities allow a higher SNR for velocity estimation, and the system is able to transmit more information. However, when we convert the information rate to bits/spike, we find that the system is transmitting at a constant efficiency of around 1.4 bits/spike. Any change in information rate during a switch from σ_1 to σ_2 is undetectable. For a switch from σ_2 to σ_1, the time to recovery is of order 100 ms. This demonstrates explicitly that the system is indeed rapidly maximising its information transmission. Further, the transient "excess" of spikes following an upward switch provides information at a constant rate per spike. The information about the amplitude, similarly, remains at a constant level throughout the trial. Thus, information about the ensemble variable is retained at all times: the response is not ambiguous with respect to the absolute scale of velocity. Despite the rescaling of input/output curves, responses within different ensembles are distinguishable.

6 Discussion

We find that the neural response to a stimulus with well-separated timescales S(t) = σ(t)s(t) takes the form of a rate ⊗ timing code, where the response r(t) may be approximately modelled as

    r(t) = R[σ(t)] g(s(t)).   (6)

Here R modulates the overall rate and depends on the slow dynamics of the variance envelope, while the precise timing of a given spike in response to fast events in the stimulus is determined by the nonlinear input/output relation g, which depends only on the normalised quantity s(t). Through this apparent normalisation by the local standard deviation, g, as for steady-state experiments, maximises information transmission about the fast components of the stimulus. The function R modulating the rate varies on much slower timescales, so it cannot be taken as an indicator of the extent of the system's adaptation to a new ensemble. Rather, R appears to function as an independent degree of freedom, capable of transmitting information, at a slower rate, about the slow stimulus modulations. The presence of many timescales in R may itself be an adaptation to the many timescales of variation in natural signals. At the same time, the rapid readjustment of the input/output relation, and the consequent recovery of information after a sudden change in σ, indicate that the adaptive mechanisms approach the limiting speed set by the need to gather statistics.

Acknowledgments

We thank B. Agüera y Arcas, N. Brenner and T. Adelman for helpful discussions.
References

[1] E. Adrian (1928) The Basis of Sensation (London: Christophers).
[2] F. Rieke, D. Warland, R. de Ruyter van Steveninck and W. Bialek (1997) Spikes: Exploring the Neural Code (Cambridge, MA: MIT Press).
[3] F. Attneave (1954) Psych. Rev. 61, 183-193.
[4] H. B. Barlow (1961) in Sensory Communication, W. A. Rosenbluth, ed. (Cambridge, MA: MIT Press), pp. 217-234.
[5] S. B. Laughlin (1981) Z. Naturforsch. 36c, 910-912.
[6] M. Potters and W. Bialek (1994) J. Phys. I France 4, 1755-1775.
[7] S. Smirnakis, M. Berry, D. Warland, W. Bialek and M. Meister (1997) Nature 386, 69-73.
[8] J. H. van Hateren (1997) Vision Research 37, 3407-3416.
[9] R. R. de Ruyter van Steveninck, W. Bialek, M. Potters and R. H. Carlson (1994) Proc. of the IEEE International Conference on Systems, Man and Cybernetics, 302-307.
[10] N. Brenner, W. Bialek and R. de Ruyter van Steveninck (2000) Neuron 26, 695-702.
[11] M. deWeese and A. Zador (1998) Neural Comp. 10, 1179-1202.
[12] C. Schilstra and J. H. van Hateren (1999) J. Exp. Biol. 202, 1481-1490.
[13] S. Strong, R. Koberle, R. de Ruyter van Steveninck and W. Bialek (1998) Phys. Rev. Lett. 80, 197-200.
FIXED POINT ANALYSIS FOR RECURRENT NETWORKS

Patrice Y. Simard, Mary B. Ottaway, Dana H. Ballard
Dept. of Computer Science
University of Rochester
Rochester, NY 14627

ABSTRACT

This paper provides a systematic analysis of the recurrent backpropagation (RBP) algorithm, introducing a number of new results. The main limitation of the RBP algorithm is that it assumes the convergence of the network to a stable fixed point in order to backpropagate the error signals. We show by experiment and eigenvalue analysis that this condition can be violated and that chaotic behavior can be avoided. Next we examine the advantages of RBP over the standard backpropagation algorithm. RBP is shown to build stable fixed points corresponding to the input patterns. This makes it an appropriate tool for content addressable memories, one-to-many function learning, and inverse problems.

INTRODUCTION

In the last few years there has been a great resurgence of interest in neural network learning algorithms. One of the most successful of these is the backpropagation learning algorithm of [Rumelhart 86], which has shown its usefulness in a number of applications. This algorithm is representative of others that exploit internal units to represent very nonlinear decision surfaces [Lippman 87] and thus overcomes the limits of the classical perceptron [Rosenblatt 62]. With its enormous advantages, the backpropagation algorithm has a number of disadvantages. Two of these are the inability to fill in patterns and the inability to solve one-to-many inverse problems [Jordan 88]. These limitations follow from the fact that the algorithm is only defined for a feedforward network. Thus if part of the pattern is missing or corrupted in the input, this error will be propagated through to the output and the original pattern will not be restored. In one-to-many problems, several solutions are possible for a given input. On a feedforward net, the competing targets for a given input introduce contradictory error signals and learning is unsuccessful.

Very recently, these limitations have been removed with the specification of a recurrent backpropagation algorithm [Pineda 87]. This algorithm effectively extends the backpropagation idea to networks of arbitrary connection topologies. This advantage, however, does not come without some risk. Since the connections in the network are not symmetric, the stability of the network is not guaranteed. For some choices of weights, the state of the units may oscillate indefinitely.

This paper provides a systematic analysis of the recurrent backpropagation (RBP) algorithm, introducing a number of new results. The main limitation of the RBP algorithm is that it assumes the convergence of the network to a stable fixed point in order to backpropagate the error signals. We show by experiment and eigenvalue analysis that this condition can be violated and that chaotic behavior can be avoided. Next we examine the advantage in convergence speed of RBP over the standard backpropagation algorithm. RBP is shown to build stable fixed points corresponding to the input patterns. This makes it an appropriate tool for content addressable memories, one-to-many function learning and inverse problems.

MODEL DESCRIPTION

The simulations have been done on a recurrent backpropagation network with first order units. Using the same formalism as [Pineda 87], the vector state x is updated according to the equation

    dx_i/dt = -x_i + g(u_i) + I_i,   (1)

where

    u_i = Σ_j w_ij x_j   for i = 1, 2, ..., N.   (2)
The activation function is the logistic function

    g(u) = 1 / (1 + e^{-u}).   (3)

The networks we will consider are organized in modules (or sets) of units that perform similar functions. For example, we talk about a fully connected module if each unit in the module is connected to each of the others. An input module is a set of units where each unit has a non-zero input function I_i. Note that a single unit can belong to more than one module at a time. The performance of the network is measured through the energy function

    E = (1/2) Σ_{i=1}^{N} J_i²,   (4)

where

    J_i = T_i - x_i   (5)

for units with a target T_i, and J_i = 0 otherwise. An output module is a set of units i such that J_i ≠ 0. Units that do not belong to any input or output modules are called hidden units. A unit (resp. module) can be clamped and unclamped. When the unit (resp. module) is unclamped, I_i = J_i = 0 for the unit (resp. the module). If the unit is clamped, it behaves according to the pattern presented to the network. Unclamping a unit results in making it hidden. Clamping and unclamping actions are handy concepts for the study of content addressable memory or generalization.

The goal for the network is to minimize the energy function by changing the weights accordingly. One way is to perform a gradient descent in E using the delta rule:

    dw_ij/dt = -η ∂E/∂w_ij,   (6)

where η is a learning rate constant. The weight variation as a function of the error is given by the formula [Pineda 87, Almeida 87]

    dw_ij/dt = η y_i x_j,   (7)

where y is a solution of the dynamical system

    dy_i/dt = -y_i + Σ_j w_ji g'(u_j) y_j + J_i.   (8)

The above discussion assumes that the input function I and the target T are constant over time. In our simulation, however, we have a set of patterns P_α presented to the network. A pattern is a tuple in ([0,1] ∪ {U})^N, where N is the total number of units and U stands for unclamped. The i-th value of the tuple is the value assigned to I_i and T_i when the pattern is presented to the network (if the value is U, the unit is unclamped for the time of presentation of the pattern). This definition of a pattern does not allow I_iα and T_iα to have different values. This is not an important restriction, however, since we can always simulate such an (inconsistent) unit with two units. The energy function to be minimized over all the patterns is defined by the equation

    E_total = Σ_α E(α).   (9)

The gradient of E_total is simply the sum of the gradients of E(α), and hence the updating equation has the form

    dw_ij/dt = η Σ_α y_i(α) x_j(α).   (10)

When a pattern P_α is presented to the network, an approximation of x^∞(α) is first computed by doing a few iterations using equation 1 (propagation). Then, an approximation of y^∞(α) is evaluated by iterating equation 8 (backpropagation). The weights are finally updated using equation 10. If we assume the number of iterations to evaluate x(α) and y(α) to be constant, the total number of operations required to update the weights is O(N²). The validity of this assumption will be discussed in a later section. A code sketch of this three-phase update appears below.
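The following is a minimal sketch of the three-phase loop (Equations 1, 8 and 10), assuming Euler integration with a fixed step and a fixed number of relaxation iterations; the data layout and all names are illustrative, not part of the original specification.

```python
import numpy as np

def g(u):
    """Logistic activation, Eq. (3)."""
    return 1.0 / (1.0 + np.exp(-u))

def rbp_epoch(W, patterns, eta=0.2, n_fwd=20, n_bwd=20, dt=0.2):
    """One epoch of recurrent backpropagation (Eqs. 1, 8, 10).

    W        : (N, N) weight matrix with W[i, j] = w_ij
    patterns : list of (I, T, mask) tuples; I is the external input vector,
               T the target vector, and mask flags the output (targeted) units
    """
    dW = np.zeros_like(W)
    for I, T, mask in patterns:
        x = np.full(W.shape[0], 0.5)
        for _ in range(n_fwd):                 # relax Eq. (1) toward x_inf
            x += dt * (-x + g(W @ x) + I)
        u = W @ x
        J = np.where(mask, T - x, 0.0)         # error signal, Eq. (5)
        gp = g(u) * (1.0 - g(u))               # g'(u) for the logistic
        y = np.zeros_like(x)
        for _ in range(n_bwd):                 # relax the adjoint system, Eq. (8)
            y += dt * (-y + W.T @ (gp * y) + J)
        dW += np.outer(y, x)                   # accumulate Eq. (10)
    return W + eta * dW
```

With fixed n_fwd and n_bwd the cost per pattern is dominated by the matrix-vector products, giving the O(N²) weight update mentioned above.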
CONVERGENCE OF THE NETWORK

The learning algorithm for our network assumes a correct approximation of x^∞. This value is computed by recursively propagating the activation signals according to equation 1. The effect of varying the number of propagations can be illustrated with a simple experiment. Consider a fully connected network of eight units (it is a directed anti-reflexive graph). Four of them are auto-associative units which are presented various patterns of zeroes and ones. An auto-associative unit is best viewed as two visible units, one having all of the incoming connections and one having all of the outgoing connections. When the auto-associative unit is not clamped, it is viewed as a hidden unit. The four remaining units are hidden. The error is measured by the differences between the activations (from the incoming connections) of the auto-associative units and the corresponding target value T_i for each pattern.

In running the experiment, eight patterns were presented to the network, performing 1 to 5 propagations of the activations using Equation 1, 20 backpropagations of the error signals according to Equation 8, and one update (Equation 10) of the weights per presentation. We define an epoch to be a sweep through the eight patterns using the above formula of execution on each. The corresponding results using a learning rate of 0.2 are shown in Figure 1.

[Figure 1 plots: error versus epochs for 1 to 5 propagations (top) and 1 to 4 back-propagations (bottom)]

Figure 1: Learning curves for a recurrent network with different numbers of propagations of the activation and back-propagation of the error signals.

It can easily be seen that using one or two propagations does not suffice to set the hidden units to their correct values. However, the network does learn correctly how to reproduce the eight patterns when 3 or more
If each unit propagates the activation much more often than it backpropagates the error signals the system is, on average, in a stable state when the backpropagation occurs and the patterns are learned slowly. This ability for recurrent networks to work without synchronization mechanisms makes them more compatible with physiological network systems. The above discussion assumed that X OO exists and can be computed by recursively computing the activation function. However, it has been shown ([Simard 88]) that for any activation function, there are always sets of weights such that there exist no stable fixed points. This fact is alarming since X OO is computed recursively, which implies that if there is no stable fixed point, x' will fail to converge, and incorrect error signals will be propagated through the system. Fortunately, the absence of stable fixed points turns out not to be a problem in practice. One reason for this is that they are very likely to be present given a reasonable set of initial weights. The network almost always starts with a unique stable fixed point. The fixed points are searched by following the zero curve of the homotopy map (11) for different [ail starting at ).. = O. The results indicate that the probability of getting unstable fixed points increases with the size of the network. We always found a stable fixed point for networks with less than 50 units. Out of 500 trials of 100 unit networks starting with random weights between -1 and +1, we found two set of weights with no stable fixed points. However, even in that case, most of the eigenvalues were much less than 1, which means that oscillations are limited to one or two eigenvector axes. Since it is possible to start with a network that has no stable fixed points, it is of interest whether it will still learn correctly. Since searching for all the fixed points (by trying different [ail in equation 11) is computationally expensive, we choose, as before, a simple learning experiment. The network's layout is the same as previously described. However, we know (from the previous result) than it probably has no unstable fixed points 153 154 Simard, Ottaway and Ballard \ -- Eigen 1 value ~~--------------- .. . .. ' " . ... ... ...... .. . ........... . .... . .. ....... . O-L-----------------------------------------3 2 Error -':'::":":w" ~- >.~ ... . -- .............. -.&.. ?.....:,.. ?...:.,.. ? ? ? ? ? ? ? ? ? ? ....... -..., ' 0. 1 . ........-:. ~----.--... . . ----.. ... .. O-+--------------~----------------r_------------- o 50 100 Figure 2: top Maximum eigenvalues for the unstable fixed point as a function of the number of epochs. bottom Error as a function of the number of epochs. Fixed Point Analysis for Recurrent Networks since it only has four hidden units. To increase the probability of getting a fixed point that is unstable, we make the initial weights range from -3 to 3 and set the thresholds so that [0.5] is a fixed point for one of the patterns. This fixed point is more likely to be unstable since the partial derivative ofthe functions (which are equal to 8gi(Ui)/8xj = wijxi(l-Xi) at the fixed point) are maximized at [Xi] [0.5] and therefore the Jacobian is more likely to have big eigenvalues. Figure 2 shows the stability of that particular fixed point and the error as a function of the number of epochs. Three different simulations were done with different sets of random initial weights. As clearly shown in the figure, the network learns despite the absence of stable fixed points. 
Moreover, the observed fixed point(s) become stable as learning progresses. In the absence of stable fixed points, the weights are modified after a fixed number of propagations and backpropagations. Even though the state vector of the network is not precisely defined, the state space trajectory lies in a delimited volume. As learning progresses, the projection of this volume on the visible units diminishes to a single point (stable) and moves toward a target point that correspond to the presented pattern on the visible units. Note that our energy function does not impose constraints on the state space trajectories projected on the hidden units [Pearl mutter 88]. = "RUNAWAY" SIMULATIONS The next question that arises is whether a recurrent network goes to the same fixed point at successive epochs (for a given input) and what happens if it does not. To answer this question, we construct two networks, one with only feed forward connections and one with feed back connections. Bot.h networks have 3 modules (input, hidden and output) of 4 units each. The connections of the feed forward network are between the input and the hidden module and between the hidden and the output module. The connections of the recurrent net.work are identical except that the there are connections between the units of the hidden module. The rationale behind this layout is to ensure fairness of comparison between feed forward and feedback backpropagation. Each network is presented sixteen distiuct patterns on the input with sixteen different random patterns on the output. The patterns consist of zeros and ones. This task is purposely chosen to be fairly difficult (16 fixed points on the four hidden units for the recurrent net) and will make the evaluation of X OO difficult. The learning curves for the networks are shown in Figure 3 for a learning rate of 0.2. We can see that the network with recurrent connections learn a slightly faster than the feed forward network. However, a more careful analysis reveals that when the learning rate is increased, the recurrent network doesn't always learn properly. The success of the learning depends on the number of iterations we use in the computation of xt. As clearly shown on the Figure 3, if we use 30 iterations for xt the network fails to learn, although 40 iterations yields reasonable results. The two cases only differ by the value of xt used when the error signals are backpropagated. According to our interpretation, recurrent backpropagation learns by moving the fixed points (or small volume state trajectories) toward target values (determined by the output). As learning progresses, the distances between the fixed points and the target values diminish, causing the error signals to become smaller and the learning to slow down. However if the network doesn't come close enough to the fixed point (or the small volume state trajectory), the new error (the distance between the current state and the target) can suddenly be very large (relatively to the distance between the fixed point and the target). Large incorrect error signals are then introduced into the system. There are two Ca.seS: if the learning rate is small, a near miss has lit tie effect on the learning curve and RBP learns faster than the feed forward network. If, on the other hand, the learning rate 155 156 Simard, Ottaway and Ballard 6 4 Error ..... .... . " ..... ... . ... . .. . O-+-------.--------.-------r-------~------~------- o 1000 500 1500 2000 2500 6 4 Error ' 2 .. . . . ? . . . . . . . ? . . . ? . ?~ . . . . . 
is big, a near miss will induce important incorrect error signals into the system, which in turn makes the next miss more dramatic. This runaway situation is depicted in the center of Figure 3. To circumvent this problem we vary the number of propagations as needed until successive states on the state trajectory are sufficiently close. The resulting learning curves for feed-forward and recurrent nets are plotted at the bottom of Figure 3. In these simulations the learning rates are adjusted dynamically so that successive error vectors are almost collinear, that is:

    0.7 < cos(Δw^t_ij, Δw^{t+1}_ij) < 0.9.   (12)

[Figure 3 plots: error versus epochs for the three learning rate conditions]

Figure 3: Error as a function of the number of epochs for a feed-forward net (dotted) and a recurrent net (solid or dashed). Top: the learning rate is set to 0.2. Center: the learning rate is set to 1.0; the solid and the dashed lines are for the recurrent net with 30 and 40 iterations of x^t per epoch respectively. Bottom: the learning rate is variable; the recurrent network has a variable number of iterations of x^t per epoch.

[Figure 4 panels: state space (x_1, x_2) before learning, after learning a few patterns, and after learning several patterns]

Figure 4: State space and fixed points. x_1 and x_2 are the activations of two units of a fully connected network. Left: before learning, there is one stable fixed point. Center: after learning a few patterns, there are two desired stable fixed points. Right: after learning several patterns, there are two desired stable fixed points and one undesired stable fixed point.

As can be seen, recurrent and feed-forward nets learn at the same speed. It is interesting to mention that the average learning rate for the recurrent net is significantly smaller (≈ 0.65) than for the feed-forward net (≈ 0.80). Surprisingly, this doesn't affect the learning speed.

CONTENT ADDRESSABLE MEMORIES

An interesting property of recurrent networks is their ability to generate fixed points that can be used to perform content addressable memory [Lapedes 86, Pineda 87]. Initially, a fully connected network usually has only one stable fixed point (all units unclamped) (see Figure 4, left). By clamping a few (auto-associative) units to given patterns, it is possible, by learning, to create stable fixed points for the unclamped network (Figure 4, center). To illustrate this property, we build a network of 6 units: 3 auto-associative units
Moreover their stability guarantees that the network can be used for CAM (content addressable memory) or for one-to-many function learning. Indeed, if the network is presented incomplete or corrupted patterns (sufficiently dose to a previously learned pattern), it will restore the pattern as soon as the incorrect or missing units are undamped by converging to a stable fixed point. If there are several correct pattern completions for the damped units, the network will converge to one of the pattern depending on the initial conditions of the undamped units (which determine the state space trajectory). These highly desirable properties are the main advantages of having feedback connections. V.le note from table 1 that a fifth (incorrect) fixed point has also be found. However, this fixed point is unstable (Maximum eigenvalue = 1.27) and will therefore never be found during recursive searches. In the previous example, there are no undesired stable fixed points. They are, however, likely to appear if the learning task becomes more complex (Figure 4, right). The reason why they are difficult to avoid is that unless the units are undamped (the learning is stopped), the network cannot reach them. Algorithms which eliminate spurious fixed points are presently under study. CONCLUSION In this paper, we have studied the effect of introducing feedback connections into feed forward networks. We have shown that the potential disadvantages of the algorithm, such as the absence of stable fixed points and chaotic behavior, can be overcome. The resulting systems ha\'e several interesting properties. First, allowing arbitrary connections makes a network more physiologically plausible by removing structural constraints on the topology. Second, the increased number of connections diminishes the sensitivity to noise and slightly improves the speed of learning. Finally, feedback connections allow the network to restore incomplete or corrupted patterns by following the state space trajectory to a stable fixed point. This property can also be used for one-to-many function learning . A limitation of the algorithm, however, is that spurious stable fixed points could lead to incorrect pattern completion. Fixed Point Analysis for Recurrent Networks References [Almeida 87] Luis B. Almeida, in the Proceedings of the IEEE First Annual International Conference on Neural Networks, San Diego, California, June 1987. [Lapedes 86] Alan S. Lapedes & Robert M. Farber A self-optimizing nonsymmetrical neural net for content addressable memory and pattern recognition. Physica D22, 247259, 1986. [Lippman 87] Richard P. Lippman, An introduction to computing with neural networks, IEEE ASSP Magazine April 1987. [Jordan 88] Michael I. Jordan, Supervised learning and systems with excess degrees of freedom. COINS Technical Report 88-27. Massachusetts Institute of Technology. 1988. [Pearlmutter 88] Barak A. Pearlmutter. Learning State Space Trajectories in Recurrent Neural Networks. Proceedings of the Connectionnist Models Summer School. pp. 113117. 1988. [Pineda 87] Fernando J. Pineda. Generalization of backpropagation to recurrent and higher order neural networks. Neural Information Processing Systems, New York, 1987. [Pineda 88] Fernando J. Pineda. Dynamics and Architecture in Neural Computation. Journal of Complexity, special issue on Neural Network. September 1988. [Simard 88] Patrice Y. Simard, Mary B. Ottaway and Dana H. Ballard, Analysis of recurrent backpropagation. Technical Report 253. 
[Rosenblatt 62] F. Rosenblatt, Principles of Neurodynamics. New York: Spartan Books, 1962.
[Rumelhart 86] D. E. Rumelhart, G. E. Hinton and R. J. Williams, Learning internal representations by back-propagating errors. Nature, 323, 533-536, 1986.
Beyond maximum likelihood and density estimation: A sample-based criterion for unsupervised learning of complex models

Sepp Hochreiter and Michael C. Mozer
Department of Computer Science
University of Colorado
Boulder, CO 80309-0430
{hochreit,mozer}@cs.colorado.edu

Abstract

The goal of many unsupervised learning procedures is to bring two probability distributions into alignment. Generative models such as Gaussian mixtures and Boltzmann machines can be cast in this light, as can recoding models such as ICA and projection pursuit. We propose a novel sample-based error measure for these classes of models, which applies even in situations where maximum likelihood (ML) and probability density estimation-based formulations cannot be applied, e.g., models that are nonlinear or have intractable posteriors. Furthermore, our sample-based error measure avoids the difficulties of approximating a density function. We prove that with an unconstrained model, (1) our approach converges on the correct solution as the number of samples goes to infinity, and (2) the expected solution of our approach in the generative framework is the ML solution. Finally, we evaluate our approach via simulations of linear and nonlinear models on mixture of Gaussians and ICA problems. The experiments show the broad applicability and generality of our approach.

1 Introduction

Many unsupervised learning procedures can be viewed as trying to bring two probability distributions into alignment. Two well-known classes of unsupervised procedures that can be cast in this manner are generative and recoding models. In a generative unsupervised framework, the environment generates training examples (which we will refer to as observations) by sampling from one distribution; the other distribution is embodied in the model. Examples of generative frameworks are mixtures of Gaussians (MoG) [2], factor analysis [4], and Boltzmann machines [8]. In the recoding unsupervised framework, the model transforms points from an
In an ML approach, the model's generative distribution is expressed analytically, which makes it straightforward to evaluate the posterior, p(data I model), and therefore, to adjust the model parameters to maximize the likelihood of the data being generated by the model. This limits the ML approach to models that have tractable posteriors, true only of the simplest models [1, 6, 9]. We describe an approach which, like ML, avoids the construction of an explicit PDE, yet does so without requiring an analytic expression for the posterior. Our approach, which we call a sample-based method, assumes a set of samples from each distribution and proposes an error measure of the disagreement defined directly in terms of the samples. Thus, a second set of samples drawn from the model serves in place of a PDE or an analytic expression of the model's density. The sample-based method is inspired by the theory of electric fields, which describes the interactions among charged particles. For more details on the metaphor, see [10]. In this paper, we prove that our approach converges to the optimal solution as the sample size goes to infinity, assuming an unconstrained (maximally flexible) model. We also prove that the expected solution of our approach is the ML solution in a generative context. We present empirical results showing that the sample-based approach works for both linear and nonlinear models. 2 The Method Consider a model to be learned, fw, parameterized by weights w. The model maps an input vector, zi, indexed by i, to an output vector xi = fw(zi). The model inputs are sampled from a distribution pz(.), and the learning procedure calls for adjusting the model such that the output distribution, Px (.), comes to match a target distribution, py(.). For unsupervised recoding models, zi is an observation, xi is the transformed representation of zi, and Py (.) specifies the desired code properties. For unsupervised generative models, pz(.) is fixed and py(.) is the distribution of observations. The Sample-based Method: The Intuitive Story Assume that we have data points sampled from two different distributions, labeled "- " and "+" (Figure 1). The sample-based error measure specifies how samples should be moved so that the two distributions are brought into alignment. In the figure, samples from the lower left and upper right corners must be moved to the upper left and lower right corners. Our goal is to establish an explicit correspondence between each "- " sample and each "+" sample. Toward this end, our samplebased method utilizes on mass interactions among the samples, by introducing a repelling force between samFigure 1 pIes from the same distribution and an attractive force between samples from different distributions, and allowing the samples to move according to these forces. The Sample-based Method: The Formal Presentation In conceiving of the problem in terms of samples that attract and repel one another, it is natural to think in terms of physical interactions among charged particles. Consider a set of positively charged particles at locations denoted by xi, i = L.Nx, and a set of negatively charged particles at locations denoted by yi, j = L .N y . The particles correspond to data samples from two distributions. The interaction among particles is characterized by the Coulomb energy, E: where r(a, b) is a distance measure- Green's function- which results in nearby particles having a strong influence on the energy, but distant particles having only a weak influence. 
Green's function is defined as r(a, b) = c(d) ‖a - b‖^{2-d}, where d is the dimensionality of the space, c(d) is a constant depending only on d, and ‖.‖ denotes the Euclidean distance. For d = 2, r(a, b) = k ln(‖a - b‖). The Coulomb energy is low when negative and positive particles are near one another, positive particles are far from one another, and negative particles are far from one another. This is exactly the state we would like to achieve for our two distributions of samples: bringing the two distributions into alignment without collapsing either distribution into a trivial form. Consequently, our sample-based method proposes using the Coulomb energy as an objective function to be minimized. The gradient of E with respect to a sample's location is readily computed (it is the force acting on that sample), and this gradient can be chained with the Jacobian of the location with respect to the model parameters w to obtain a gradient-based update rule:

    Δw = -ε ( N_x^{-1} Σ_{k=1}^{N_x} (∂x^k/∂w)^T ∇_{x^k} φ(x^k) - N_y^{-1} Σ_{k=1}^{N_y} (∂y^k/∂w)^T ∇_{y^k} φ(y^k) ),

where ε is a step size,

    φ(a) := N_x^{-1} Σ_{i=1}^{N_x} r(a, x^i) - N_y^{-1} Σ_{j=1}^{N_y} r(a, y^j)

is the potential, with N_x^{-1} ∇_a φ(a) = ∇_a E, T is the transposition, and a = x^k or y^k. Here ∂x^k/∂w is the Jacobian of f_w(z^k), and the time derivative of x^k is ẋ^k = ḟ_w(z^k) = -∇φ(x^k). If y^k depends on w then the y^k notation is analogous; else ∂y^k/∂w is the zero matrix.

There turns out to be an advantage to using Green's function as the basis of the particle interactions over other possibilities, e.g., a Gaussian function (e.g., [12, 13, 3]). The advantage stems from the fact that with Green's function, the force between two nearby points goes to infinity as the points are pushed together, whereas with the Gaussian, the force goes to zero. Consequently, without Green's function, one might expect local optima in which clusters of points collapse onto a single location. Empirically, simulations confirmed this conjecture.

Proof: Correctness of the Update Rule

As the numbers of samples N_x and N_y go to infinity, φ can be expressed as φ(a) = ∫ ρ(b) r(a, b) db, where ρ(b) := p_x(b) - p_y(b). Our sample-based method moves data points, but by moving data points, the method implicitly alters the probability density which gave rise to the data. The relation between the movement of data points and the change in the density can be expressed using an operator from vector analysis, the divergence. The divergence at a location a is the number of data points moving out of a volume surrounding a minus the number of data points moving into the same volume. Thus, the negative divergence of movements at a gives the density change at a. The movement of data points is given by -∇φ(a). We get

    ρ̇(a) = ṗ_x(a) - ṗ_y(a) = -div(-∇φ(a)).

For Cartesian (orthogonal) coordinates the divergence div of a vector field V at a is defined as div(V(a)) := Σ_{l=1}^{d} ∂V_l(a)/∂a_l. The Laplace operator Δ of a scalar function A is defined as ΔA(a) := div(∇A(a)) = Σ_{l=1}^{d} ∂²A(a)/∂a_l². The Laplace operator allows an important characterization of Green's function: Δ_a r(a, b) = -δ(a - b), where δ is the Dirac delta function. This characterization gives Δφ(a) = -ρ(a), so

    ρ̇(a) = μ(a) div(∇φ(a)) = μ(a) Δφ(a) = -μ(a) ρ(a),   μ(a) ≥ μ_0 > 0,

where μ(a) gives the effectiveness of the algorithm in moving a sample at a. We get ρ(a, t) = ρ(a, 0) exp(-μ(a) t).
For the integrated squared error (ISE) of the two distributions we obtain

    ISE(t) = ∫ (ρ(a, t))² da ≤ exp(-μ_0 t) ∫ (ρ(a, 0))² da = exp(-μ_0 t) ISE(0),

where ISE(0) is independent of t. Thus, the ISE between the two distributions is guaranteed to decrease during learning, when the sample size goes to infinity.

Proof: Expected Generative Solution is ML Solution

In the case of a generative model which has no constraints (i.e., can model any distribution), the maximum likelihood solution will have distribution p_x(a) = N_y^{-1} Σ_{j=1}^{N_y} δ(y^j - a), i.e., the model will produce only the observations and all of them with equal probability. For this case, we show that our sample-based method will yield the same solution in expectation as ML.

The sample-based method converges to a local minimum of the energy, where ⟨∇_a r(a, x)⟩_x - N_y^{-1} Σ_{j=1}^{N_y} ∇_a r(a, y^j) = 0 for all a, where ⟨.⟩_x is the expectation over model output. Equivalently,

    ⟨∇_a r(a, x)⟩_x = ∫ p_x(x) ∇_a r(a, x) dx = N_y^{-1} Σ_{j=1}^{N_y} ∇_a r(a, y^j).

Because this equation holds for all a, we obtain p_x(a) = N_y^{-1} Σ_{j=1}^{N_y} δ(y^j - a), which is the ML solution. Thus, the sample-based method can be viewed as an approximation to ML which gets more exact as the number of samples goes to infinity.

3 Experiments

We illustrate the sample-based approach for two common unsupervised learning problems: MoG and ICA. In both cases, we demonstrate that the sample-based approach works in the linear case. We also consider a nonlinear case to illustrate the power of the sample-based approach.

Mixture of Gaussians

In this generative model framework, m denotes a mixture component which is chosen with probability v_m from M components, and has associated model parameters w_m = (O_m, M_m). In the standard MoG model, given a choice of component m, the (linear) model output is obtained by x^i = f_{w_m}(z^i) = O_m z^i + M_m, where z^i is drawn from the Gaussian distribution with zero mean and identity covariance matrix. For a nonlinear mixture model, we used a 3-layer sigmoidal neural network for f_{w_m}(z^i). An update rule for v_m can be derived for our approach: Δv_m = -ε_v Σ_{i=1}^{N_x} (ẋ^i)^T ∂x^i/∂v_m, where ε_v is a step size, and Σ_{m=1}^{M} v_m = 1 is enforced.

We trained a linear MoG model with the standard expectation maximization (EM) algorithm (using code from [5]) and a linear and a nonlinear MoG with our sample-based approach. A fixed training set of N_y = 100 samples was used for all models, and all models had M = 10 except one nonlinear model which had M = 1. In the sample-based approach, we generated 100 samples from our model (the x^i) following every training epoch. The nonlinear model was trained with backpropagation. Figure 2 shows the results. The linear ML model is better than the sample-based model. That is not surprising, because ML computes the model probability values analytically (the posterior is tractable) and our algorithm uses only samples to approximate the model probability values. We used only 100 model samples in each epoch, and the linear sample-based model found an acceptable solution that is not much worse than the ML model. The nonlinear models fit the true ring-like distribution better and do not suffer from sharp corners and edges.
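The following sketch, continuing the earlier energy code, computes the potential gradient ∇φ that drives both the sample motion ẋ^k = -∇φ(x^k) and, through the Jacobian ∂x^k/∂w, the weight update of Section 2. The constant c(d) is absorbed into the step size, and the distance floor that guards against the unbounded near-contact force (see Section 4) is an implementation assumption.

```python
import numpy as np

def grad_potential(a, X, Y, d=2, eps=1e-8):
    """Gradient of phi(a) = mean_i r(a, x^i) - mean_j r(a, y^j) at point a,
    using r(a, b) = ln||a - b|| for d = 2 and r(a, b) = ||a - b||^(2-d)
    for d > 2 (constants absorbed into the learning rate)."""
    def mean_grad(B):
        diff = a - B                                           # (n, d)
        dist = np.maximum(np.linalg.norm(diff, axis=1, keepdims=True), eps)
        if d == 2:
            g = diff / dist**2                                  # grad of ln||a - b||
        else:
            g = (2.0 - d) * diff / dist**d                      # grad of ||a - b||^(2-d)
        return g.mean(axis=0)
    return mean_grad(X) - mean_grad(Y)

# Model samples move downhill in the potential, x_dot = -grad phi(x);
# chaining this force through the Jacobian of f_w gives the weight update.
def move_samples(X, Y, step=0.05, d=2):
    return X - step * np.array([grad_potential(x, X, Y, d) for x in X])
```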
[Figure 2 panels; the EM panel is titled "ML (10, linear)"]

Figure 2: (Upper panels, left to right) Training samples chosen from a ring density, a larger sample from this density, and the solution obtained from the linear model trained with EM. (Lower panels, left to right) Models trained with the sample-based method: linear model, nonlinear model, and nonlinear model with one component.

Independent Component Analysis

With a recoding model we tried to demix subgaussian source distributions where each has supergaussian modes. Most ICA methods are not able to demix subgaussian sources. Figure 3 shows the results, which are nearly perfect. The ideal result is a scaled and permuted identity matrix when the mixing and demixing matrices are multiplied. For more details see [10].

Figure 3: For a three-dimensional linear mixture, projections of sources (first row), mixtures (second row), and sources recovered by our approach (third row) onto a two-dimensional plane are shown. The demixing matrix multiplied with the mixing matrix yields:

    -0.0017   0.0010   0.2523
    -0.0014   0.1850  -0.0101
    -0.1755   0.0003   0.0053

In a second experiment, we tried to recover sources from two nonlinear mixings. This problem is impossible for standard ICA methods because they are designed for linear mixings. The result is shown in Figure 4. An exact demixing cannot be expected, because nonlinear ICA has no unique solution. For more details see [10].

[Figure 4 panels: Sources | Mixtures | Recovered Sources]

Figure 4: For two two-dimensional nonlinear mixing functions, upper row (z + a)² and lower row √(z + a), with complex variable z: the sources, mixtures, and recovered sources. The mixing function is not completely inverted, but the sources are recovered recognizably.

4 Discussion

Although our sample-based approach is intuitively straightforward, its implementation has two drawbacks: (1) one has to be cautious of samples that are close together, because they lead to unbounded gradients; and (2) all samples must be considered when computing the force on a data point, which makes the approach computationally intensive. However, in [10, 7] approximations are proposed that reduce the computational complexity of the approach.

In this paper, we have presented simulations showing the generality and power of our sample-based approach to unsupervised learning problems, and have also proven two important properties of the approach: (1) with certain assumptions, the approach will find the correct solution; (2) with an unconstrained model, the expected solution of our approach is the ML solution. In conclusion, our sample-based approach can be applied to unsupervised learning of complex models where ML does not work, and our method avoids the drawbacks of PDE approaches.

Acknowledgments

We thank Geoffrey Hinton for inspirational suggestions regarding this work. The work was supported by the Deutsche Forschungsgemeinschaft (Ho 1749/1-1), McDonnell-Pew award 97-18, and NSF award IBN-9873492.

References

[1] P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.
[2] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
[3] D. Erdogmus and J. C. Principe. Comparison of entropy and mean square error criteria in adaptive system training using higher order statistics. In P. Pajunen and J. Karhunen, editors, Proceedings of the Second International Workshop on Independent Component Analysis and Blind Signal Separation, Helsinki, Finland, pages 75-80. Otamedia, Espoo, Finland, ISBN 951-22-5017-9, 2000.
[5] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, Dept. of Comp. Science, 1996.
[6] Z. Ghahramani and G. E. Hinton. Hierarchical non-linear factor analysis and topographic maps. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, pages 486-492. MIT Press, 1998.
[7] A. Gray and A. W. Moore. 'N-body' problems in statistical learning. In T. K. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, 2001. In this proceeding.
[8] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing, volume 1, pages 282-317. MIT Press, 1986.
[9] G. E. Hinton and T. J. Sejnowski. Introduction. In G. E. Hinton and T. J. Sejnowski, editors, Unsupervised Learning: Foundations of Neural Computation, pages VII-XVI. MIT Press, Cambridge, MA, 1999.
[10] S. Hochreiter and M. C. Mozer. An electric field approach to independent component analysis. In P. Pajunen and J. Karhunen, editors, Proceedings of the Second International Workshop on Independent Component Analysis and Blind Signal Separation, Helsinki, Finland, pages 45-50. Otamedia, Espoo, Finland, ISBN 951-22-5017-9, 2000.
[11] A. Hyvärinen. Survey on independent component analysis. Neural Computing Surveys, 2:94-128, 1999.
[12] G. C. Marques and L. B. Almeida. Separation of nonlinear mixtures using pattern repulsion. In J.-F. Cardoso, C. Jutten, and P. Loubaton, editors, Proceedings of the First International Workshop on Independent Component Analysis and Signal Separation, Aussois, France, pages 277-282, 1999.
[13] J. C. Principe and D. Xu. Information-theoretic learning using Renyi's quadratic entropy. In J.-F. Cardoso, C. Jutten, and P. Loubaton, editors, Proceedings of the First International Workshop on Independent Component Analysis and Signal Separation, Aussois, France, pages 407-412, 1999.
[14] Y. Zhao and C. G. Atkeson. Implementing projection pursuit learning. IEEE Transactions on Neural Networks, 7(2):362-373, 1996.
Exact Solutions to Time-Dependent MDPs

Justin A. Boyan*
ITA Software
Building 400, One Kendall Square
Cambridge, MA 02139
jab@itasoftware.com

Michael L. Littman
AT&T Labs-Research and Duke University
180 Park Ave., Room A275
Florham Park, NJ 07932-0971 USA
mlittman@research.att.com

Abstract

We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.

1 Introduction

Imagine trying to plan a route from home to work that minimizes expected time. One approach is to use a tool such as "Mapquest", which annotates maps with information about estimated driving time, then applies a standard graph-search algorithm to produce a shortest route. Even if driving times are stochastic, the annotations can be expected times, so this presents no additional challenge. However, consider what happens if we would like to include public transportation in our route planning. Buses, trains, and subways vary in their expected travel time according to the time of day: buses and subways come more frequently during rush hour; trains leave on or close to scheduled departure times. In fact, even highway driving times vary with time of day, with heavier traffic and longer travel times during rush hour. To formalize this problem, we require a model that includes both stochastic actions, as in a Markov decision process (MDP), and actions with time-dependent stochastic durations. There are a number of models that include some of these attributes:

- Directed graphs with shortest path algorithms [2]: State transitions are deterministic; action durations are time-independent (deterministic or stochastic).
- Stochastic Time-Dependent Networks (STDNs) [6]: State transitions are deterministic; action durations are stochastic and can be time-dependent.
- Markov decision processes (MDPs) [5]: State transitions are stochastic; action durations are deterministic.
- Semi-Markov decision processes (SMDPs) [5]: State transitions are stochastic; action durations are stochastic, but not time-dependent.

*The work reported here was done while Boyan's affiliation was with NASA Ames Research Center, Computational Sciences Division.

In this paper, we introduce the Time-Dependent MDP (TMDP) model, which generalizes all these models by including both stochastic state transitions and stochastic, time-dependent action durations. At a high level, a TMDP is a special continuous-state MDP [5; 4] consisting of states with both a discrete component and a real-valued time component: (x, t) ∈ X × ℝ. With absolute time as part of the state space, we can model a rich set of domain objectives, including minimizing expected time, maximizing the probability of making a deadline, or maximizing the dollar reward of a path subject to a time deadline. In fact, using the time dimension to represent other one-dimensional quantities, TMDPs support planning with non-linear utilities [3] (e.g., risk-aversion), or with a continuous resource such as battery life or money. We define TMDPs and express their Bellman equations in a functional form that gives, at each state x, the one-step lookahead value at (x, t) for all times in parallel (Section 2). We use the term time-value function to denote a mapping from real-valued times to real-valued future reward.
With appropriate restrictions on the form of the stochastic state-time transition function and reward function, we guarantee that the optimal time-value function at each state is a piecewise linear function of time, which can be represented exactly and computed by value iteration (Section 3). We conclude with empirical results on two domains (Section 4).

2 General model

[Figure 1: An illustrative route-planning example TMDP, showing the outcomes μ1-μ5 with their likelihood functions L1-L5 and their result-time distributions P1-P5 (each marked REL or ABS), together with the time-value functions of the states.]

Figure 1 depicts a small route-planning example that illustrates several distinguishing features of the TMDP model. The start state x1 corresponds to being at home. From here, two actions are available: a1, taking the 8am train (a scheduled action); and a2, driving to work via highway then backroads (which may be done at any time). Action a1 has two possible outcomes, represented by μ1 and μ2. Outcome μ1 ("Missed the 8am train") is active after 7:50am, whereas outcome μ2 ("Caught the train") is active until 7:50am; this is governed by the likelihood functions L1 and L2 in the model. These outcomes cause deterministic transitions to states x1 and x3, respectively, but take varying amounts of time. Time distributions in a TMDP may be either "relative" (REL) or "absolute" (ABS). In the case of catching the train (μ2), the distribution is absolute: the arrival time (shown in P2) has mean 9:45am no matter what time before 7:50am the action was initiated. (Boarding the train earlier does not allow us to arrive at our destination earlier!) However, missing the train and returning to x1 has a relative distribution: it deterministically takes 15 minutes from our starting time (distribution P1) to return home. The outcomes for driving (a2) are μ3 and μ4. Outcome μ3 ("Highway - rush hour") is active with probability 1 during the interval 8am-9am, and with smaller probability outside that interval, as shown by L3. Outcome μ4 ("Highway - off peak") is complementary. Duration distributions P3 and P4, both relative to the initiation time, show that driving times during rush hour are on average longer than those off peak. State x2 is reached in either case. From state x2, only one action is available, a3. The corresponding outcome μ5 ("Drive on backroad") is insensitive to time of day and results in a deterministic transition to state x3 with duration 1 hour. The reward function for arriving at work is +1 before 11am and falls linearly to zero between 11am and noon. The solution to a TMDP such as this is a policy mapping state-time pairs (x, t) to actions so as to maximize expected future reward. As is standard in MDP methods, our approach finds this policy via the value function V*. We represent the value function of a TMDP as a set of time-value functions, one per state: V_i(t) gives the optimal expected future reward from state x_i at time t. In our example of Figure 1, the time-value functions for x3 and x2 are shown as V3 and V2. Because of the deterministic one-hour delay of μ5, V2 is identical to V3 shifted back one hour. This wholesale shifting of time-value functions is exploited by our solution algorithm.
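The ABS/REL distinction can be made concrete with a small sketch (hypothetical names, not from the paper's implementation): an absolute outcome draws the arrival time directly, while a relative outcome adds a sampled duration to the start time.

```python
import random

def sample_arrival(t_start, outcome):
    """Sample an arrival time for one TMDP outcome.

    outcome: (kind, draw) with kind in {"ABS", "REL"}; draw() samples
    from the outcome's result-time pdf (ABS) or duration pdf (REL).
    """
    kind, draw = outcome
    if kind == "ABS":
        return draw()            # arrival time independent of the start time
    return t_start + draw()      # duration measured from the start time

# mu2, "Caught the train": absolute arrival around 9:45am (times in hours).
mu2 = ("ABS", lambda: random.gauss(9.75, 0.05))
# mu1, "Missed the train": deterministic 15-minute return home.
mu1 = ("REL", lambda: 0.25)
```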
The model also allows a notion of "dawdling" in a state. This means the agent can remain in a state for as long as desired, at a reward rate of K(x, t) per unit time, before choosing an action. This makes it possible, for example, for an agent to wait at home for rush hour to end before driving to work. Formally, a TMDP consists of the following components:

X    discrete state space
A    discrete action space
M    discrete set of outcomes, each of the form μ = (x'_μ, T_μ, P_μ):
       x'_μ ∈ X: the resulting state
       T_μ ∈ {ABS, REL}: the type of the resulting time distribution
       P_μ(t') (if T_μ = ABS): pdf over absolute arrival times of μ
       P_μ(δ) (if T_μ = REL): pdf over durations of μ
L    L(μ|x, t, a) is the likelihood of outcome μ given state x, time t, action a
R    R(μ, t, δ) is the reward for outcome μ at time t with duration δ
K    K(x, t) is the reward rate for "dawdling" in state x at time t.

We can define the optimal value function for a TMDP in terms of these quantities with the following Bellman equations:

V(x, t) = sup_{t' ≥ t} ( ∫_t^{t'} K(x, s) ds + V̄(x, t') )      value function (allowing dawdling)
V̄(x, t) = max_{a ∈ A} Q(x, t, a)                               value function (immediate action)
Q(x, t, a) = Σ_{μ ∈ M} L(μ|x, a, t) · U(μ, t)                   expected Q value over outcomes
U(μ, t) = ∫ P_μ(t') [R(μ, t, t' − t) + V(x'_μ, t')] dt'         (if T_μ = ABS)
        = ∫ P_μ(t' − t) [R(μ, t, t' − t) + V(x'_μ, t')] dt'     (if T_μ = REL).

These equations follow straightforwardly from viewing the TMDP as an undiscounted continuous-time MDP. Note that the calculations of U(μ, t) are convolutions of the result-time pdf P with the lookahead value R + V.

3 Model with piecewise linear value functions

In the general model, the time-value functions for each state can be arbitrarily complex and therefore impossible to represent exactly. In this section, we show how to restrict the model to allow value functions to be manipulated exactly. For each state, we represent its time-value function V_i(t) as a piecewise linear function of time. V_i(t) is thus represented by a data structure consisting of a set of distinct times called breakpoints and, for each pair of consecutive breakpoints, the equation of a line defined over the corresponding interval. Why are piecewise linear functions an appropriate representation? Linear time-value functions provide an exact representation for minimum-time problems, and piecewise linear time-value functions provide closure under the "max" operator. Rewards must be constrained to be piecewise linear functions of start and arrival times and action durations. We write R(μ, t, δ) = R_s(μ, t) + R_a(μ, t + δ) + R_d(μ, δ), where R_s, R_a, and R_d are piecewise linear functions of start time, arrival time, and duration, respectively. In addition, the dawdling reward K and the outcome probability function L must be piecewise constant. The most significant restriction needed for exact computation is that arrival and duration pdfs be discrete. This ensures closure under convolutions. In contrast, convolving a piecewise constant pdf (e.g., a uniform distribution) with a piecewise linear time-value function would in general produce a piecewise quadratic time-value function; further convolutions increase the degree with each iteration of value iteration. In Section 5 below we discuss how to relax this restriction.
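To illustrate the piecewise linear machinery, here is a simplified sketch that represents every time-value function by its values on one shared breakpoint grid. The paper's implementation manipulates exact per-function breakpoints (adding crossing points where needed), so treat this fixed-grid version as an approximation for exposition only; all names are illustrative.

```python
import numpy as np

class TimeValueFunction:
    """Piecewise linear function of time, sampled at shared breakpoints."""

    def __init__(self, ts, vs):
        self.ts = np.asarray(ts, float)   # sorted breakpoints
        self.vs = np.asarray(vs, float)   # values at the breakpoints

    def __call__(self, t):
        # Linear interpolation between breakpoints (constant beyond the ends).
        return np.interp(t, self.ts, self.vs)

    def vmax(self, other):
        # Pointwise max; exact only when crossing points are also breakpoints.
        return TimeValueFunction(self.ts, np.maximum(self.vs, other(self.ts)))

def expected_lookahead(V, duration_pmf):
    """U-style backup for a REL outcome with a *discrete* duration pmf:
    sum_d p(d) * V(t + d), a weighted sum of shifted copies of V."""
    out = np.zeros_like(V.vs)
    for d, p in duration_pmf:
        out += p * V(V.ts + d)
    return TimeValueFunction(V.ts, out)
```

The discrete duration pmf is exactly what keeps this backup a weighted sum of shifted piecewise linear functions, and hence piecewise linear, as argued above.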
Given the restrictions just mentioned, all the operations used in the Bellman equations from Section 2 (namely addition, multiplication, integration, supremum, maximization, and convolution) can be implemented exactly. The running time of each operation is linear in the representation size of the time-value functions involved. Seeding the process with an initial piecewise linear time-value function, we can carry out value iteration until convergence. In general, the running time from one iteration to the next can increase, as the number of linear "pieces" being manipulated grows; however, the representations grow only as complex as necessary to represent the value function V exactly.

4 Experimental domains

We present results on two domains: transportation planning and telescope scheduling. For comparison, we also implemented the natural alternative to the piecewise-linear technique: discretizing the time dimension and solving the problem as a standard MDP. To apply the MDP method, three additional inputs must be specified: an earliest starting time, a latest finishing time, and a bin width. Since this paper's focus is on exact computations, we chose a discretization level corresponding to the resolution necessary for exact solution by the MDP at its grid points. An advantage of the MDP is that it is by construction acyclic, so it can be solved by just one sweep of standard value iteration, working backwards in time. The TMDP's advantage is that it directly manipulates entire linear segments of the time-value functions.

4.1 Transportation planning

Figure 2 illustrates an example TMDP for optimizing a commute from San Francisco to NASA Ames. The 14 discrete states model both location and observed traffic conditions: shaded and unshaded circles represent heavy and light traffic, respectively.

[Figure 2: The San Francisco to Ames commuting example.]

[Figure 3: The optimal Q-value functions and policy at state #10 ("US101 & Bayshore / heavy traffic"): Q-values over time for action 0 ("drive to Ames") and action 1 ("drive to Bayshore station") (upper panel), and the optimal action as a function of time (lower panel).]

Observed transition times and traffic conditions are stochastic, and depend on both the time and traffic conditions at the originating location. At states 5, 6, 11, and 12, the "catch the train" action induces an absolute arrival distribution reflecting the train schedules. The domain objective is to arrive at Ames by 9:00am. We impose a linear penalty for arriving between 9 and noon, and an infinite penalty for arriving after noon. There are also linear penalties on the number of minutes spent driving in light traffic, driving in heavy traffic, and riding on the train; the coefficients of these penalties can be adjusted to reflect the commuter's tastes. Figure 3 presents the optimal time-value functions and policy for state #10, "US101 & Bayshore / heavy traffic." There are two actions from this state, corresponding to driving directly to Ames and driving to the train station to wait for the next train. Driving to the train station is preferred (has a higher Q-value) at times that are close, but not too close, to the departure times of the train. The full domain is solved in well under a second by both solvers (see Table 1).
The optimal time-value functions in the solution comprise a total of 651 linear segments.

4.2 Telescope observation scheduling

Next, we consider the problem of scheduling astronomical targets for a telescope to maximize the scientific return of one night's viewing [1]. We are given N possible targets with associated coordinates, scientific value, and time window of visibility. Of course, we can view only one target at a time. We assume that the reward of an observation is proportional to the duration of viewing the target. Acquiring a target requires two steps of stochastic duration: moving the telescope, taking time roughly proportional to the distance traveled, and calibrating it on the new target. Previous approaches have dealt with this stochasticity heuristically, using a just-in-case scheduling approach [1]. Here, we model the stochasticity directly within the TMDP framework. The TMDP has N + 1 states (corresponding to the N observations and "off") and N actions per state (corresponding to what to observe next). The dawdling reward rate K(x, t) encodes the scientific value of observing x at time t; that value is 0 at times when x is not visible. Relative duration distributions encode the inter-target distances and stochastic calibration times on each transition. We generated random target lists of sizes N = 10, 25, 50, and 100. Visibility windows were constrained to be within a 13-hour night, specified with 0.01-hour precision. Thus, representing the exact solution with a grid required 1301 time bins per state. Table 1 shows comparative results of the piecewise-linear and grid-based solvers.

Domain          Solver          Model states   Value sweeps   V* pieces   Runtime (secs)
SF-Commute      piecewise VI    14             13             651         0.2
                exact grid VI   5054           1              5054        0.1
Telescope-10    piecewise VI    11             5              186         0.1
                exact grid VI   14,311         1              14,311      1.3
Telescope-25    piecewise VI    26             6              716         1.8
                exact grid VI   33,826         1              33,826      7.4
Telescope-50    piecewise VI    51             6              1252        6.3
                exact grid VI   66,351         1              66,351      34.5
Telescope-100   piecewise VI    101            4              2711        17.9
                exact grid VI   131,300        1              131,300     154.1

Table 1: Summary of results. The three rightmost columns measure solution complexity in terms of the number of sweeps of value iteration before convergence, the number of distinct "pieces" or values in the optimal value function V*, and the running time. Running times are the median of five runs on an UltraSparc II (296MHz CPU, 256MB RAM).

5 Conclusions

In sum, we have presented a new stochastic model for time-dependent MDPs (TMDPs), discussed applications, and shown that dynamic programming with piecewise linear time-value functions can produce optimal policies efficiently. In initial comparisons with the alternative method of discretizing the time dimension, the TMDP approach was empirically faster, used significantly less memory, and solved the problem exactly over continuous t ∈ ℝ rather than just at grid points. In our exact computation model, the requirement of discrete duration distributions seems particularly restrictive. We are currently investigating a way of using our exact algorithm to generate upper and lower bounds on the optimal solution for the case of arbitrary pdfs. This may allow the system to produce an optimal or provably near-optimal policy without having to identify all the twists and turns in the optimal time-value functions. Perhaps the most important advantage of the piecewise linear representation will turn out to be its amenability to bounding and approximation methods.
We hope that such advances will enable the solution of city-sized route planning, more realistic telescope scheduling, and other practical time-dependent stochastic problems.

Acknowledgments

We thank Leslie Kaelbling, Rich Washington, and NSF CAREER grant IRI-9702576.

References

[1] John Bresina, Mark Drummond, and Keith Swanson. Managing action duration uncertainty with just-in-case scheduling. In Decision-Theoretic Planning: Papers from the 1994 Spring AAAI Symposium, pages 19-26, Stanford, CA, 1994. AAAI Press, Menlo Park, California. ISBN 0-929280-70-9. URL http://ic-www.arc.nasa.gov/ic/projects/xfr/jic/jic.html.
[2] Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. The MIT Press, Cambridge, MA, 1990.
[3] Sven Koenig and Reid G. Simmons. How to make reactive planners risk-sensitive. In Proceedings of the 2nd International Conference on Artificial Intelligence Planning Systems, pages 293-304, 1994.
[4] Harold J. Kushner and Paul G. Dupuis. Numerical Methods for Stochastic Control Problems in Continuous Time. Springer-Verlag, New York, 1992.
[5] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994.
[6] Michael P. Wellman, Kenneth Larson, Matthew Ford, and Peter R. Wurman. Path planning under time-dependent uncertainty. In Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, pages 532-539, 1995.
Bayesian video shot segmentation

Nuno Vasconcelos    Andrew Lippman
MIT Media Laboratory, 20 Ames St, E15-354, Cambridge, MA 02139
{nuno,lip}@media.mit.edu, http://www.media.mit.edu/~nuno

Abstract

Prior knowledge about video structure can be used both as a means to improve the performance of content analysis and to extract features that allow semantic classification. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models by introducing a Bayesian formulation for the shot segmentation problem. The new formulation is shown to extend standard thresholding methods in an adaptive and intuitive way, leading to improved segmentation accuracy.

1 Introduction

Given the recent advances on video coding and streaming technology and the pervasiveness of video as a form of communication, there is currently a strong interest in the development of techniques for browsing, categorizing, retrieving, and automatically summarizing video. In this context, two tasks are of particular relevance: the decomposition of a video stream into its component units, and the extraction of features for the automatic characterization of these units. Unfortunately, current video characterization techniques rely on image representations based on low-level visual primitives (such as color, texture, and motion) that, while practical and computationally efficient, fail to capture most of the structure that is relevant for the perceptual decoding of the video. As a result, it is difficult to design systems that are truly useful for naive users. Significant progress can only be attained by a deeper understanding of the relationship between the message conveyed by the video and the patterns of visual structure that it exhibits. There are various domains where these relationships have been thoroughly studied, albeit not always from a computational standpoint. For example, it is well known by film theorists that the message strongly constrains the stylistic elements of the video [1, 6], which are usually grouped into two major categories: the elements of montage and the elements of mise-en-scene. Montage refers to the temporal structure, namely the aspects of film editing, while mise-en-scene deals with spatial structure, i.e. the composition of each image, and includes variables such as the type of set in which the scene develops, the placement of the actors, aspects of lighting, focus, camera angles, and so on. Building computational models for these stylistic elements can prove useful in two ways: on one hand, it will allow the extraction of semantic features enabling video characterization and classification much closer to that which people use than current descriptors based on texture properties or optical flow. On the other hand, it will provide constraints for the low-level analysis algorithms required to perform tasks such as video segmentation, keyframing, and so on. The first point is illustrated by Figure 1, where we show how a collection of promotional trailers for commercially released feature films populates a 2-D feature space based on the most elementary characterization of montage and mise-en-scene: average shot duration vs. average shot activity.¹ Despite the coarseness of this characterization, it captures aspects that are important for semantic movie classification: close inspection of the genre assigned to each movie by the Motion Picture Association of America reveals that in this space the movies cluster by genre!
Movie "Circle of Friends" "French Kiss" 'Miami Rhapsody" + o + '''The Santa Clause" "Exit to Eden" 09 "A Walk in the Clouds" +crda p..i nlor o nV8rw11 d +edwood + i:'- "''" + + santa 06 +steePi ng + scout + clouds , 05 ,, " , , , , ,, ,, blarit man " + '" ' + wal~~~ce~ , Jungle +puWel 04 'While you Were Sleeping" "Bad Boys" ,, ,, ,, ,, ,, , ,, , ,, ,, , od", +IIlBdness o~lghter b adboys o terrnnaJ vengeance 04 Shot acllVlty "Juni or" "Crimson Tide" 'The Scout" '''The Walking Dead" "Ed Wood" "'The Jungle Book" "Puppet M aster" "A Little Princess" '"Judge Dredd" 'The River Wild" '''Terminal Velocity" '1l1ankman" '1 n the Mouth of Madness" "Street Fighter" "Die Hard: With a Vengeance" Legend circl e french miami santa eden clouds sleeping badboys junior tide seouL walking edwood jungle puppel princess dredd riverwild termin al bl ankman madness fig hter vengeance Figure 1: Shot activity vs. duration features . The genre of each movie is identified by the symbol used to represent the movie in the plot. In this paper, we concentrate on the second point, i.e. how the structure exhibited by Figure 1 can be exploited to improve the performance of low-level processing tasks such as shot segmentation. Because knowledge about the video structure is a form of prior knowledge, Bayesian procedures provide a natural way to accomplish this goal. We therefore introduce computational models for shot duration and activity and develop a Bayesian framework for segmentation that is shown to significantly outperform current approaches. 2 Modeling shot duration Because shot boundaries can be seen as arrivals over discrete, non-overlapping temporal intervals, a Poisson process seems an appropriate model for shot duration [3]. However, events generated by Poisson processes have inter-arrival times characterized by the exponential density which is a monotonically decreasing function of time. This is clearly not the case for the shot duration, as can be seen from the histograms of Figure 2. In this work, we consider two alternative models, the Erlang and Weibull distributions. 2.1 The Erlang model Letting T be the time since the previous boundary, the Erlang distribution [3] is described by (1) IThe activity features are described in section 3. Figure 2: Shot duration histogram, and maximum likelihood fit obtained with the Erlang (left) and Weibull (right) distributions. It is a generalization of the exponential density, characterized by two parameters: the order r, and the expected inter-arrival time (1/ A) of the underlying Poisson process. When r 1, the Erlang distribution becomes the exponential distribution. For larger values of r, it characterizes the time between the rth order inter-arrival time of the Poisson process. This leads to an intuitive explanation for the use of the Erlang distribution as a model of shot duration: for a given order r, the shot is modeled as a sequence of r events which are themselves the outcomes of Poisson processes. Such events may reflect properties of the shot content, such as "setting the context" through a wide angle view followed by "zooming in on the details" when r 2, or "emotional buildup" followed by "action" and "action outcome" when r 3. Figure 2 presents a shot duration histogram, obtained from the training set to be described in section 5, and its maximum likelihood (ML) Erlang fit. 
2.2 The Weibull model

While the Erlang model provides a good fit to the empirical density, it is of limited practical utility due to the constant arrival rate assumption [5] inherent in the underlying Poisson process. Because λ is a constant, the expected rate of occurrence of a new shot boundary is the same whether 10 seconds or 1 hour have elapsed since the occurrence of the previous one. An alternative model that does not suffer from this problem is the Weibull distribution [5], which generalizes the exponential distribution by considering an expected rate of arrival of new events that is a function of time,

λ(τ) = α τ^{α-1} / β^α,

and of the parameters α and β, leading to a probability density of the form

w_{α,β}(τ) = (α τ^{α-1} / β^α) exp[-(τ/β)^α].   (2)

Figure 2 presents the ML Weibull fit to the shot duration histogram. Once again we obtain a good approximation to the empirical density estimate.

3 Modeling shot activity

The color histogram distance has been widely used as a measure of (dis)similarity between images for the purposes of object recognition [7], content-based retrieval [4], and temporal video segmentation [2]. A histogram is first computed for each image in the sequence and the distance between successive histograms is used as a measure of local activity. A standard metric for video segmentation [2] is the L1 norm of the histogram difference,

D(a, b) = Σ_{i=1}^{B} |a_i - b_i|,   (3)

where a and b are histograms of successive frames, and B is the number of histogram bins. Statistical modeling of the histogram distance features requires the identification of the various states through which the video may progress. For simplicity, in this work we restrict ourselves to a video model composed of two states: "regular frames" (S = 0) and "shot transitions" (S = 1). The fundamental principles are, however, applicable to more complex models. As illustrated by Figure 3, for "regular frames" the distribution is asymmetric about the mean, always positive, and concentrated near zero. This suggests that a mixture of Erlang distributions is an appropriate model for this state, a suggestion that is confirmed by the fit to the empirical density obtained with EM, also depicted in the figure. On the other hand, for "shot transitions" the fit obtained with a simple Gaussian model is sufficient to achieve a reasonable approximation to the empirical density. In both cases, a uniform mixture component is introduced to account for the tails of the distributions.

[Figure 3: Left: Conditional activity histogram for regular frames, and best fit by a mixture with three Erlang and a uniform component. Right: Conditional activity histogram for shot transitions, and best fit by a mixture with a Gaussian and a uniform component.]
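The activity feature of Eq. (3) is straightforward to compute; a small sketch with illustrative names:

```python
import numpy as np

def histogram_distance(a, b):
    """L1 norm of Eq. (3) between two frame histograms with B bins each."""
    return np.abs(np.asarray(a, float) - np.asarray(b, float)).sum()

def activity_signal(histograms):
    """Distance between each pair of successive frame histograms."""
    return [histogram_distance(h0, h1)
            for h0, h1 in zip(histograms, histograms[1:])]
```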
4 A Bayesian framework for shot segmentation

Because shot segmentation is a pre-requisite for virtually any task involving the understanding, parsing, indexing, characterization, or categorization of video, the grouping of video frames into shots has been an active topic of research in the area of multimedia signal processing. Extensive evaluation of various approaches has shown that simple thresholding of histogram distances performs surprisingly well and is difficult to beat [2]. In this work, we consider an alternative formulation that regards the problem as one of statistical inference between two hypotheses:

- H0: no shot boundary occurs between the two frames under analysis (S = 0);
- H1: a shot boundary occurs between the two frames (S = 1);

for which the optimal decision is provided by a likelihood ratio test, where H1 is chosen if

L = log [P(V|S = 1) / P(V|S = 0)] > 0,   (4)

and H0 is chosen otherwise. It is well known that standard thresholding is a particular case of this formulation, in which both conditional densities are assumed to be Gaussians with the same covariance. From the discussion in the previous section, it is clear that this does not hold for real video. One further limitation of the thresholding model is that it does not take into account the fact that the likelihood of a new shot transition is dependent on how much time has elapsed since the previous one. On the other hand, the statistical formulation can easily incorporate the shot duration models developed in section 2.

4.1 Notation

Because video is a discrete process, characterized by a given frame rate, shot boundaries are not instantaneous, but last for one frame period. To account for this, states are defined over time intervals, i.e. instead of S_t = 0 or S_t = 1, we have S_{t,t+δ} = 0 or S_{t,t+δ} = 1, where t is the start of a time interval, and δ its duration. We designate the features observed during the interval [t, t+δ] by V_{t,t+δ}. To simplify the notation, we reserve t for the temporal instant at which the last shot boundary has occurred and make all temporal indexes relative to this instant. I.e., instead of S_{t+τ,t+τ+δ} we write S_{τ,τ+δ}, or simply S_δ if τ = 0. Furthermore, we reserve the symbol δ for the duration of the interval between successive frames (the inverse of the frame rate), and use the same notation for a single frame interval and a vector of frame intervals (the temporal indexes being themselves enough to avoid ambiguity). I.e., while S_{τ,τ+δ} = 0 indicates that no shot boundary is present in the interval [t+τ, t+τ+δ], S_{τ+δ} = 0 indicates that no shot boundary has occurred in any of the frames between t and t+τ+δ. Similarly, V_{τ+δ} represents the vector of observations in [t, t+τ+δ].

4.2 Bayesian formulation

Given that there is a shot boundary at time t and no boundaries occur in the interval [t, t+τ], the posterior probability that the next shot change happens during the interval [t+τ, t+τ+δ] is, using Bayes rule,

P(S_{τ,τ+δ} = 1 | S_τ = 0, V_{τ+δ}) = γ P(V_{τ+δ} | S_τ = 0, S_{τ,τ+δ} = 1) P(S_{τ,τ+δ} = 1 | S_τ = 0),

where γ is a normalizing constant. Similarly, the probability of no change in [t+τ, t+τ+δ] is

P(S_{τ,τ+δ} = 0 | S_τ = 0, V_{τ+δ}) = γ P(V_{τ+δ} | S_{τ+δ} = 0) P(S_{τ,τ+δ} = 0 | S_τ = 0),

and the posterior odds ratio between the two hypotheses is

P(S_{τ,τ+δ} = 1 | S_τ = 0, V_{τ+δ}) / P(S_{τ,τ+δ} = 0 | S_τ = 0, V_{τ+δ})
  = [P(V_{τ,τ+δ} | S_{τ,τ+δ} = 1) / P(V_{τ,τ+δ} | S_{τ,τ+δ} = 0)] · [P(S_{τ,τ+δ} = 1, S_τ = 0) / P(S_{τ+δ} = 0)],   (5)

where we have assumed that, given S_{τ,τ+δ}, V_{τ,τ+δ} is independent of all other V and S. In this expression, while the first term on the right-hand side is the ratio of the conditional likelihoods of activity given the state sequence, the second term is simply the ratio of probabilities that there may (or not) be a shot transition τ units of time after the previous one. Hence, the shot duration density becomes a prior for the segmentation process. This is intuitive, since knowledge about the shot duration is a form of prior knowledge about the structure of the video that should be used to favor segmentations that are more plausible.
Assuming further that V is stationary, defining Δτ = [t+τ, t+τ+δ], considering the probability density function p(τ) for the time elapsed until the first scene change after t, and taking logarithms, leads to a log posterior odds ratio L_post of the form

L_post = log [P(V_Δτ | S_Δτ = 1) / P(V_Δτ | S_Δτ = 0)] + log [∫_τ^{τ+δ} p(α) dα / ∫_{τ+δ}^∞ p(α) dα].   (6)

The optimal answer to the question whether a shot change occurs or not in [t+τ, t+τ+δ] is thus to declare that a boundary exists if

log [P(V_Δτ | S_Δτ = 1) / P(V_Δτ | S_Δτ = 0)] ≥ log [∫_{τ+δ}^∞ p(α) dα / ∫_τ^{τ+δ} p(α) dα] = T(τ),   (7)

and that there is no boundary otherwise. Comparing this with (4), it is clear that the inclusion of the shot duration prior transforms the fixed thresholding approach into an adaptive one, where the threshold depends on how much time has elapsed since the previous shot boundary.

4.2.1 The Erlang model

It can be shown that, under the Erlang assumption,

∫_τ^∞ p(α) dα = (1/λ) Σ_{i=1}^{r} ε_{i,λ}(τ),   (8)

and the threshold of (7) becomes

T(τ) = log [ Σ_{i=1}^{r} ε_{i,λ}(τ+δ) / Σ_{i=1}^{r} (ε_{i,λ}(τ) − ε_{i,λ}(τ+δ)) ].   (9)

Its variation over time is presented in Figure 4. While in the initial segment of the shot the threshold is large and shot changes are unlikely to be accepted, the threshold decreases as the scene progresses, increasing the likelihood that shot boundaries will be declared.

[Figure 4: Temporal evolution of the Bayesian threshold for the Erlang (left) and Weibull (center) priors. Right: Total number of errors for all thresholds.]

Even though, qualitatively, this is the behavior one would desire, a closer observation of the figure reveals the major limitation of the Erlang prior: its steady-state behavior. Ideally, in addition to decreasing monotonically over time, the threshold should not be lower bounded by a positive value, as this may lead to situations in which its steady-state value is high enough to miss several consecutive shot boundaries. This limitation is a consequence of the constant arrival rate assumption discussed in section 2 and can be avoided by relying instead on the Weibull prior.

4.2.2 The Weibull model

It can be shown that, under the Weibull assumption,

∫_τ^∞ p(α) dα = exp[-(τ/β)^α],   (10)

from which

T_w(τ) = -log { exp[((τ+δ)^α − τ^α) / β^α] − 1 }.   (11)

As illustrated by Figure 4, unlike the threshold associated with the Erlang prior, T_w(τ) tends to −∞ as τ grows without bound. This guarantees that a new shot boundary will always be found if one waits long enough. In summary, both the Erlang and the Weibull priors lead to adaptive thresholds that are more intuitive than the fixed threshold commonly employed for shot segmentation.

5 Segmentation Results

The performance of Bayesian shot segmentation was evaluated on a database containing the promotional trailers of Figure 1. Each trailer consists of 2 to 5 minutes of video, and the total number of shots in the database is 1959. In all experiments, performance was evaluated by the leave-one-out method. Ground truth was obtained by manual segmentation of all the trailers. We evaluated the performance of Bayesian models with Erlang, Weibull, and Poisson shot duration priors and compared them against the best possible performance achievable with a fixed threshold. For the latter, the optimal threshold was obtained by brute force, i.e. testing several values and selecting the one that performed best. Error rates for all priors are shown in Figure 4, where it is visible that, while the Poisson prior leads to worse accuracy than the static threshold, both the Erlang and the Weibull priors lead to significant improvements.
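Putting Eqs. (7) and (11) together, a sketch of the resulting detector: compute the Weibull threshold as a function of the time elapsed since the last declared boundary, and reset that clock whenever the likelihood ratio exceeds it. This is a simplified reconstruction with illustrative names, not the authors' code.

```python
import math

def weibull_threshold(tau, delta, alpha, beta):
    """T_w(tau) = -log(exp(((tau+delta)^a - tau^a)/b^a) - 1), Eq. (11)."""
    z = ((tau + delta) ** alpha - tau ** alpha) / beta ** alpha
    return -math.log(math.expm1(z))

def segment(log_likelihood_ratios, delta, alpha, beta):
    """Declare a boundary at frame k when the activity log-likelihood
    ratio exceeds the duration-dependent threshold, then reset tau."""
    boundaries, tau = [], delta      # first test one frame after a boundary
    for k, llr in enumerate(log_likelihood_ratios):
        if llr > weibull_threshold(tau, delta, alpha, beta):
            boundaries.append(k)
            tau = delta
        else:
            tau += delta
    return boundaries
```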
The Weibull prior achieves the overall best performance, decreasing the error rate of the static threshold by 20%. The reasons for the improved performance of Bayesian segmentation are illustrated by Figure 5, which presents the evolution of the thresholding process for a segment from one of the trailers in the database ("Blankman"). Two thresholding approaches are depicted: Bayesian with the Weibull prior, and standard fixed thresholding. The adaptive behavior of the Bayesian threshold significantly increases the robustness against spurious peaks of the activity metric originated by events such as very fast motion, explosions, camera flashes, etc.

[Figure 5: An example of the thresholding process. Top: Bayesian; the likelihood ratio and the Weibull threshold are shown. Bottom: Fixed; histogram distances and the optimal threshold (determined by leave-one-out using the remainder of the database) are presented. Errors are indicated by circles.]

References

[1] D. Bordwell and K. Thompson. Film Art: an Introduction. McGraw-Hill, 1986.
[2] J. Boreczky and L. Rowe. Comparison of Video Shot Boundary Detection Techniques. In Proc. SPIE Conf. on Visual Communication and Image Processing, 1996.
[3] A. Drake. Fundamentals of Applied Probability Theory. McGraw-Hill, 1987.
[4] W. Niblack et al. The QBIC project: Querying images by content using color, texture, and shape. In Storage and Retrieval for Image and Video Databases, pages 173-181, SPIE, Feb. 1993, San Jose, California.
[5] R. Hogg and E. Tanis. Probability and Statistical Inference. Macmillan, 1993.
[6] K. Reisz and G. Millar. The Technique of Film Editing. Focal Press, 1968.
[7] M. Swain and D. Ballard. Color Indexing. International Journal of Computer Vision, 7(1):11-32, 1991.
Finding the Key to a Synapse

Thomas Natschläger & Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universität Graz, Austria
{tnatschl, maass}@igi.tu-graz.ac.at

Abstract

Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute, for a given synapse with known synaptic parameters, the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.

1 Introduction

A large number of experimental studies have shown that biological synapses have an inherent dynamics, which controls how the pattern of amplitudes of postsynaptic responses depends on the temporal pattern of the incoming spike train. Various quantitative models have been proposed involving a small number of characteristic parameters that allow us to predict the response of a given synapse to a given spike train, once proper values for these characteristic synaptic parameters have been found. The analysis of this article is based on the model of [1], where three parameters U, F, D control the dynamics of a synapse and a fourth parameter A, which corresponds to the synaptic "weight" in static synapse models, scales the absolute sizes of the postsynaptic responses. The resulting model predicts the amplitude A_k for the k-th spike in a spike train with interspike intervals (ISIs) Δ_1, Δ_2, ..., Δ_{k-1} through the equations¹

A_k = A · u_k · R_k
u_k = U + u_{k-1} (1 − U) exp(−Δ_{k-1}/F)        (1)
R_k = 1 + (R_{k-1} − u_{k-1} R_{k-1} − 1) exp(−Δ_{k-1}/D),

which involve two hidden dynamic variables u ∈ [0,1] and R ∈ [0,1] with the initial conditions u_1 = U and R_1 = 1 for the first spike. These dynamic variables evolve in dependence of the synaptic parameters U, F, D and the interspike intervals of the incoming spike train.²

¹To be precise: the term u_{k-1} R_{k-1} in Eq. (1) was erroneously replaced by u_k R_{k-1} in the corresponding Eq. (2) of [1]. The model that they actually fitted to their data is the model considered in this article.

²It should be noted that this deterministic model predicts the cumulative response of a population of stochastic release sites that make up a synaptic connection.

[Figure 1: Synaptic heterogeneity. A: The parameters U, D, and F can be determined for biological synapses. Shown is the distribution of values for inhibitory synapses investigated in [2], which can be grouped into three major classes: facilitating (F1), depressing (F2), and recovering (F3). B: Synapses produce quite different outputs for the same input for different values of the parameters U, D, and F. Shown are the amplitudes u_k · R_k (height of vertical bar) of the postsynaptic response of an F1-type and an F2-type synapse to an irregular input spike train. The parameters for synapses F1 and F2 are the mean values for the synapse types F1 and F2 reported in [2]: (U, D, F) = (0.16, 45 msec, 376 msec) for F1, and (0.25, 706 msec, 21 msec) for F2.]
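For concreteness, Eq. (1) can be turned into a few lines of code that compute the predicted amplitudes for a given train of interspike intervals. This is a minimal sketch; the names are illustrative.

```python
import math

def synaptic_amplitudes(isis, U, D, F, A=1.0):
    """Amplitudes A*u_k*R_k of Eq. (1) for a spike train given by its
    interspike intervals (in the same time units as D and F)."""
    u, R = U, 1.0                        # initial conditions u_1 = U, R_1 = 1
    amps = [A * u * R]
    for dt in isis:
        u_new = U + u * (1.0 - U) * math.exp(-dt / F)
        R_new = 1.0 + (R - u * R - 1.0) * math.exp(-dt / D)  # uses the old u
        u, R = u_new, R_new
        amps.append(A * u * R)
    return amps

# F1-type synapse of Fig. 1: (U, D, F) = (0.16, 45, 376), times in msec.
print(synaptic_amplitudes([20, 50, 100], U=0.16, D=45.0, F=376.0))
```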
Note that the time constants D and F are in the range of a few hundred msec. The synapses investigated in [2] can be grouped into three major classes: facilitating (F1), depressing (F2) and recovering (F3). Fig. 1B compares the output of a typical F1-type and a typical F2-type synapse in response to a typical irregular spike train. One can see that the same input spike train yields markedly different outputs at these two synapses.

In this article we address the question which temporal pattern of a spike train is optimally fitted, in a certain sense, to a given synapse characterized by the three parameters U, F, D. One possible choice is to look for the temporal pattern of a spike train which produces the largest integral of synaptic current. Note that in the case where the dendritic integration is approximately linear, the integral of synaptic current is proportional to the sum Σ_{k=1}^N A · u_k · R_k of postsynaptic responses. We would like to stress that the computational methods we will present are not restricted to any particular choice of the optimality criterion. For example, one can use them also to compute the spike train which produces the largest peak of the postsynaptic membrane voltage. However, in the following we will focus on the question which temporal pattern of a spike train produces the largest sum Σ_{k=1}^N A · u_k · R_k of postsynaptic responses (or equivalently the largest integral of postsynaptic current). More precisely, we fix a time interval T, a minimum value Δ_min for ISIs, a natural number N, and synaptic parameters U, F, D. We then look for that spike train with N spikes during T and ISIs ≥ Δ_min that maximizes Σ_{k=1}^N A · u_k · R_k. Hence we seek a solution - that is, a sequence of ISIs Δ₁, Δ₂, ..., Δ_{N−1} - to the optimization problem

maximize Σ_{k=1}^N A · u_k · R_k   under   Σ_{k=1}^{N−1} Δ_k ≤ T and Δ_min ≤ Δ_k, 1 ≤ k < N.   (2)

In Section 2 of this article we present an algorithmic approach based on dynamic programming that is guaranteed to find the optimal solution of this problem (up to discretization errors), and exhibit for major types of synapses temporal patterns of spike trains that are optimally fitted to these synapses. In Section 3 we present a faster heuristic method for computing optimally fitted spike trains, and apply it to analyze how their temporal pattern depends on the number N of allowed spikes during the time interval T, i.e., on the firing rate f = N/T. Furthermore we analyze in Section 3 how changes in the synaptic parameters U, F, D affect the temporal pattern of the optimally fitted spike train.

2 Computing Optimal Spike Trains for Common Types of Synapses

Dynamic Programming. For T = 1000 msec and N = 10 there are about 2^100 spike trains among which one wants to find the optimally fitted one. We show that a computationally feasible solution to this complex optimization problem can be achieved via dynamic programming. We refer to [3] for the mathematical background of this technique, which also underlies the computation of optimal policies in reinforcement learning. We consider the discrete time dynamic system described by the equation

x₁ = (U, 1, 0)   and   x_{k+1} = g(x_k, a_k) for k = 1, ..., N − 1   (3)

where x_k describes the state of the system at step k, and a_k is the "control" or "action" taken at step k.
In our case x_k is the triple (u_k, R_k, t_k) consisting of the values of the dynamic variables u and R used to calculate the amplitude A · u_k · R_k of the kth postsynaptic response, and the time t_k of the arrival of the kth spike at the synapse. The "action" a_k is the length Δ_k ∈ [Δ_min, T − t_k] of the kth ISI in the spike train that we construct, where Δ_min is the smallest possible size of an ISI (we have set Δ_min = 5 msec in our computations). As the function g in Eq. (3) we take the function which maps (u_k, R_k, t_k) and Δ_k via Eq. (1) to (u_{k+1}, R_{k+1}, t_{k+1}) with t_{k+1} = t_k + Δ_k. The "reward" for the kth spike is A · u_k · R_k, i.e., the amplitude of the postsynaptic response to the kth spike. Hence maximizing the total reward J(x₁) = Σ_{k=1}^N A · u_k · R_k is equivalent to solving the maximization problem (2). The maximal possible value of J₁(x₁) can be computed exactly via the equations

J_N(x_N) = A · u_N · R_N
J_k(x_k) = max_{Δ ∈ [Δ_min, T − t_k]} ( A · u_k · R_k + J_{k+1}(g(x_k, Δ)) )   (4)

backwards from k = N − 1 to k = 1. Thus the optimal sequence a₁, ..., a_{N−1} of "actions" is the sequence Δ₁, ..., Δ_{N−1} of ISIs that achieves the maximal possible value of Σ_{k=1}^N A · u_k · R_k. Note that the evaluation of J_k(x_k) for a single value of x_k requires the evaluation of J_{k+1}(x_{k+1}) for many different values of x_{k+1}.³

³When one solves Eq. (4) on a computer, one has to replace the continuous state variable x_k by a discrete variable, and round the result of g to the nearest value of the corresponding discrete variable. For more details about the discretization of the model we refer the reader to [4].
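A rough sketch of this backward recursion, with u, R and t rounded onto coarse grids (much coarser than the discretization actually used in [4], and with illustrative values of T, N and the grid steps so that the plain memoized recursion stays tractable):

```python
import numpy as np
from functools import lru_cache

U, D, F, A = 0.16, 45.0, 376.0, 1.0            # F1-type synapse
T, N, DMIN, DSTEP = 300.0, 5, 5.0, 10.0        # msec; DSTEP = ISI grid step

def g(u, R, t, delta):                         # state transition via Eq. (1)
    u2 = U + u * (1.0 - U) * np.exp(-delta / F)
    R2 = 1.0 + (R - u * R - 1.0) * np.exp(-delta / D)
    return round(u2, 2), round(R2, 2), t + delta   # crude state rounding

@lru_cache(maxsize=None)
def J(u, R, t, k):                             # value function of Eq. (4)
    reward = A * u * R
    if k == N:
        return reward, ()
    best, best_isis = -np.inf, ()              # -inf marks infeasible branches
    for delta in np.arange(DMIN, T - t + 1e-9, DSTEP):
        value, isis = J(*g(u, R, t, delta), k + 1)
        if value > best:
            best, best_isis = value, (float(delta),) + isis
    return reward + best, best_isis

total, isis = J(U, 1.0, 0.0, 1)                # start from x_1 = (U, 1, 0)
print(total, isis)                             # optimal sum and ISI sequence
```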
The "Key" to a Synapse. We have applied the dynamic programming approach to three major types of synapses reported in [2]. The results are summarized in Fig. 2 to Fig. 5. We refer informally to the temporal pattern of N spikes that maximizes the response of a particular synapse as the "key" to this synapse. It is shown in Fig. 3 that the "keys" for the inhibitory synapses F1 and F2 are rather specific in the sense that they produce a substantially smaller postsynaptic response on any other of the major types of inhibitory synapses reported in [2]. The specificity of a "key" to a synapse is most pronounced for spiking frequencies f below 20 Hz. One may speculate that due to this feature a neuron can activate - even without changing its firing rate - a particular subpopulation of its target neurons by generating a series of action potentials with a suitable temporal pattern; see Fig. 4.

Figure 2: Spike trains that maximize the sum of postsynaptic responses for three common types of synapses (T = 0.8 sec, N = 15 spikes). The parameters for synapses F1, F2, and F3 are the mean values for the synapse types F1, F2 and F3 reported in [2]: (U, D, F) = (0.16, 45 msec, 376 msec) for F1, (0.25, 706 msec, 21 msec) for F2, and (0.32, 144 msec, 62 msec) for F3.

Figure 3: Specificity of optimal spike trains. The optimal spike trains for synapses F1 and F2 - the "keys" to the synapses F1 and F2 - obtained for T = 0.8 sec and N = 15 spikes are tested on the synapses F1 and F2. If the "key" to synapse F1 (F2) is tested on the synapse F1 (F2), this synapse produces the maximal (100%) postsynaptic response. If on the other hand the "key" to synapse F1 (F2) is tested on synapse F2 (F1), this synapse produces significantly less postsynaptic response.

Recent experiments [5, 6] show that neuromodulators can control the firing mode of cortical neurons. In [5] it is shown that bursting neurons may switch to regular firing if norepinephrine is applied. Together with the specificity of synapses to certain temporal patterns, these findings point to one possible mechanism how neuromodulators can change the effective connectivity of a neural circuit.

Relation to discharge patterns. A noteworthy aspect of the "keys" shown in Fig. 2 (and in Fig. 6 and Fig. 7) is that they correspond to common firing patterns ("accommodating", "non-accommodating", "stuttering", "bursting" and "regular firing") of neocortical interneurons reported under controlled conditions in vitro [2, 5] and in vivo [7]. For example the temporal patterns of the "keys" to the synapses F1, F2, and F3 are similar to the discharge patterns of "accommodating" [2], "bursting" [5, 7], and "stuttering" [2] cells respectively.

What is the role of the parameter A? Another interesting effect arises if one compares the optimal values of the sum Σ_{k=1}^N u_k · R_k (i.e. A = 1) for synapses F1, F2, and F3 (see Fig. 5A) with the maximal values of Σ_{k=1}^N A · u_k · R_k (see Fig. 5B), where we have set A equal to the value of G_max reported in [2]. Whereas the values of G_max vary strongly among different synapse types (see Fig. 5B), the resulting maximal response of a synapse to its proper "key" is almost the same for each synapse. Hence, one may speculate that the system is designed in such a way that each synapse should have an equal influence on the postsynaptic neuron when it receives its optimal spike train. However, this effect is most evident for a spiking frequency f = N/T of 10 Hz and vanishes for higher frequencies.

Figure 4: Preferential addressing of postsynaptic targets. Due to the specificity of a "key" to a synapse, a presynaptic neuron may address (i.e. evoke a stronger response at) either neuron A or B, depending on the temporal pattern of the spike train (with the same frequency f = N/T) it produces (T = 0.8 sec and N = 15 in this example).

Figure 5: A Absolute values of the sums Σ_{k=1}^N u_k · R_k if the key to synapse F_i is applied to synapse F_i, i = 1, 2, 3 (A = 1). B Same as panel A except that the value of Σ_{k=1}^N A · u_k · R_k is plotted. For A we used the value of G_max (in nS) reported in [2]. The quotient max/min is 1.3, compared to 2.13 in panel A.

3 Exploring the Parameter Space

Sequential Quadratic Programming. The numerical approach for approximately computing optimal spike trains that was used in Section 2 is sufficiently fast so that an average PC can carry out any of the computations whose results were reported in Fig. 2 within a few hours. To be able to address computationally more expensive issues we used a nonlinear optimization algorithm known as "sequential quadratic programming" (SQP)⁴, which is the state-of-the-art approach for heuristically solving constrained optimization problems such as (2). We refer the reader to [8] for the mathematical background of this technique and to [4] for more details about the application of SQP for approximately computing optimal spike trains.
Optimal Spike Trains for Different Firing Rates. First we used SQP to explore the effect of the spike frequency f = N/T on the temporal pattern of the optimal spike train. For the synapses F1, F2, and F3 we computed the optimal spike trains for frequencies ranging from 15 Hz to 40 Hz. The results are summarized in Fig. 6. For synapses F1 and F2 the characteristic spike pattern (F1: accommodating, F2: stuttering) is the same for all frequencies. In contrast, the optimal spike train for synapse F3 has a phase transition from "stuttering" to "non-accommodating" at about 20 Hz.

⁴We used the implementation (function constr) contained in the MATLAB Optimization Toolbox (see http://www.mathworks.com/products/optimization/).

Figure 6: Dependence of the optimal spike train of the synapses F1, F2, and F3 on the spike frequency f = N/T (T = 1 sec, N = 15, ..., 40).

Figure 7: Dependence of the optimal spike train on the synaptic parameter U. It is shown how the optimal spike train changes if the parameter U is varied. The other two parameters are set to the values corresponding to synapse F3: D = 144 msec and F = 62 msec. The black bar to the left marks the range of values (mean ± std) reported in [2] for the parameter U. To the right of each spike train we have plotted the corresponding value of J = Σ_{k=1}^N u_k R_k (gray bars).

The Impact of Individual Synaptic Parameters. We will now address the question how the optimal spike train depends on the individual synaptic parameters U, F, and D. The results for the case of F3-type synapses and the parameter U are summarized in Fig. 7. For results with regard to other parameters and synapse types we refer to [4]. We have marked in Fig. 7 with a black bar the range of U for F3-type synapses reported in [2]. It can be seen that within this parameter range we find "regular" and "bursting" spike patterns. Note that the sum of postsynaptic responses J (gray horizontal bars in Fig. 7) is not proportional to U: while U increases from 0.1 to 0.6 (a 6-fold change), J only increases by a factor of 2. This is interesting since the parameter U is closely related to the initial release probability of a synapse, and it is a common assumption that the "strength" of a synapse is proportional to its initial release probability.

4 Discussion

We have presented two complementary computational approaches for computing spike trains that optimize a given response criterion for a given synapse.
One of these methods is based on dynamic programming (similar as in reinforcement learning), the other one on sequential quadratic programming. These computational methods are not restricted to any particular choice of the optimality criterion or the synaptic model. In [4] applications of these methods to other optimality criteria, e.g. maximizing the specificity, are discussed. It turns out that the spike trains that maximize the response of F1-, F2- and F3-type synapses (see Fig. 2) are well known firing patterns like "accommodating", "bursting" and "regular firing" of specific neuron types. Furthermore, for F1- and F3-type synapses the optimal spike train agrees with the most often found firing pattern of presynaptic neurons reported in [2], whereas for F2-type synapses there is no such agreement; see [4]. This observation provides a first glimpse at a possible functional role of the specific combinations of synapse types and neuron types that was recently found in [2]. Another noteworthy aspect of the optimal spike trains is their specificity for a given synapse (see Fig. 3): suitable temporal firing patterns activate preferentially specific types of synapses. One potential functional role of such specificity to temporal firing patterns is the possibility of preferential addressing of postsynaptic target neurons (see Fig. 4). Note that there is experimental evidence that cortical neurons can switch their intrinsic firing behavior from "bursting" to "regular" depending on neuromodulator-mediated inputs [5, 6]. These findings provide support for the idea of preferential addressing of postsynaptic targets implemented by the interplay of dynamic synapses and the intrinsic firing behavior of the presynaptic neuron. Furthermore, our analysis provides the platform for a deeper understanding of the specific role of different synaptic parameters, because with the help of the computational techniques that we have introduced one can now see directly how the temporal structure of the optimal spike train for a synapse depends on the individual synaptic parameters. We believe that this inverse analysis is essential for understanding the computational role of neural circuits.

References

[1] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci., 95:5323-5328, 1998.
[2] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273-278, 2000.
[3] D. P. Bertsekas. Dynamic Programming and Optimal Control, Volume 1. Athena Scientific, Belmont, Massachusetts, 1995.
[4] T. Natschläger and W. Maass. Computing the optimally fitted spike train for a synapse. Submitted for publication; electronically available via http://www.igi.tugraz.at/igi/tnatschl/psfiles/synkey-journal.ps.gz, 2000.
[5] Z. Wang and D. A. McCormick. Control of firing mode of corticotectal and corticopontine layer V burst-generating neurons by norepinephrine. Journal of Neuroscience, 13(5):2199-2216, 1993.
[6] J. C. Brumberg, L. G. Nowak, and D. A. McCormick. Ionic mechanisms underlying repetitive high-frequency burst firing in supragranular cortical neurons. Journal of Neuroscience, 20(1):4829-4843, 2000.
[7] M. Steriade, I. Timofeev, N. Dürmüller, and F. Grenier. Dynamic properties of corticothalamic neurons and local cortical interneurons generating fast rhythmic (30-40 Hz) spike bursts. Journal of Neurophysiology, 79:483-490, 1998.
[8] M. J. D. Powell.
Variable metric methods for constrained optimization. In A. Bachem, M. Grötschel, and B. Korte, editors, Mathematical Programming: The State of the Art, pages 288-311. Springer-Verlag, 1983.
Incremental and Decremental Support Vector Machine Learning

Gert Cauwenberghs*
CLSP, ECE Dept.
Johns Hopkins University
Baltimore, MD 21218
gert@jhu.edu

Tomaso Poggio
CBCL, BCS Dept.
Massachusetts Institute of Technology
Cambridge, MA 02142
tp@ai.mit.edu

*On sabbatical leave at CBCL in MIT while this work was performed.

Abstract

An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the Kuhn-Tucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental "unlearning" offers an efficient method to exactly evaluate leave-one-out generalization performance. Interpretation of decremental unlearning in feature space sheds light on the relationship between generalization and geometry of the data.

1 Introduction

Training a support vector machine (SVM) requires solving a quadratic programming (QP) problem in a number of coefficients equal to the number of training examples. For very large datasets, standard numeric techniques for QP become infeasible. Practical techniques decompose the problem into manageable subproblems over part of the data [7, 5] or, in the limit, perform iterative pairwise [8] or component-wise [3] optimization. A disadvantage of these techniques is that they may give an approximate solution, and may require many passes through the dataset to reach a reasonable level of convergence. An on-line alternative, which formulates the (exact) solution for ℓ + 1 training data in terms of that for ℓ data and one new data point, is presented here. The incremental procedure is reversible, and decremental "unlearning" of each training sample produces an exact leave-one-out estimate of generalization performance on the training set.

2 Incremental SVM Learning

Training an SVM "incrementally" on new data by discarding all previous data except their support vectors gives only approximate results [11]. In what follows we consider incremental learning as an exact on-line method to construct the solution recursively, one point at a time. The key is to retain the Kuhn-Tucker (KT) conditions on all previously seen data, while "adiabatically" adding a new data point to the solution.

2.1 Kuhn-Tucker conditions

In SVM classification, the optimal separating function reduces to a linear combination of kernels on the training data, f(x) = Σ_j a_j y_j K(x_j, x) + b, with training vectors x_i and corresponding labels y_i = ±1. In the dual formulation of the training problem, the coefficients a_i are obtained by minimizing a convex quadratic objective function under constraints [12]

min_{0 ≤ a_i ≤ C} : W = ½ Σ_{i,j} a_i Q_{ij} a_j − Σ_i a_i + b Σ_i y_i a_i   (1)

with Lagrange multiplier (and offset) b, and with symmetric positive-definite kernel matrix Q_{ij} = y_i y_j K(x_i, x_j). The first-order conditions on W reduce to the Kuhn-Tucker (KT) conditions:

g_i = ∂W/∂a_i = Σ_j Q_{ij} a_j + y_i b − 1 = y_i f(x_i) − 1   { ≥ 0 if a_i = 0;  = 0 if 0 < a_i < C;  ≤ 0 if a_i = C }   (2)
∂W/∂b = Σ_j y_j a_j = 0   (3)

which partition the training data D and corresponding coefficients {a_i, b}, i = 1, ..., ℓ, in three categories as illustrated in Figure 1 [9]: the set S of margin support vectors strictly on the margin (y_i f(x_i) = 1), the set E of error support vectors exceeding the margin (not necessarily misclassified), and the remaining set R of (ignored) vectors within the margin.

Figure 1: Soft-margin classification SVM training.
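For concreteness, this partition is easy to compute from any candidate solution. The sketch below (a plain NumPy illustration, not the paper's code; the tolerance tol is an assumption needed for floating-point comparisons) evaluates the margins g_i of Eq. (2) and assigns each point to S, E or R:

```python
import numpy as np

def kt_partition(Q, y, a, b, C, tol=1e-8):
    """Margin values g_i = sum_j Q_ij a_j + y_i b - 1 (Eq. 2) and the
    KT partition of the training set into S, E and R."""
    g = Q @ a + y * b - 1.0
    S = np.where((a > tol) & (a < C - tol))[0]   # 0 < a_i < C: margin vectors
    E = np.where(a >= C - tol)[0]                # a_i = C: error vectors
    R = np.where(a <= tol)[0]                    # a_i = 0: ignored vectors
    return g, S, E, R
```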
2.2 Adiabatic increments

The margin vector coefficients change value during each incremental step to keep all elements in D in equilibrium, i.e., keep their KT conditions satisfied. In particular, the KT conditions are expressed differentially as:

Δg_i = Q_{ic} Δa_c + Σ_{j∈S} Q_{ij} Δa_j + y_i Δb,   ∀i ∈ D ∪ {c}   (4)
0 = y_c Δa_c + Σ_{j∈S} y_j Δa_j   (5)

where a_c is the coefficient, initially zero, of a "candidate" vector c outside D that is being incremented. Since g_i ≡ 0 for the margin vector working set S = {s₁, ..., s_{ℓS}}, the changes in coefficients must satisfy

Q · [Δb, Δa_{s₁}, ..., Δa_{s_{ℓS}}]ᵀ = −[y_c, Q_{s₁c}, ..., Q_{s_{ℓS}c}]ᵀ Δa_c   (6)

with symmetric but not positive-definite Jacobian

Q = [ 0, y_{s₁}, ..., y_{s_{ℓS}} ; y_{s₁}, Q_{s₁s₁}, ..., Q_{s₁s_{ℓS}} ; ... ; y_{s_{ℓS}}, Q_{s_{ℓS}s₁}, ..., Q_{s_{ℓS}s_{ℓS}} ].   (7)

Thus, in equilibrium,

Δb = β Δa_c,   Δa_j = β_j Δa_c,   ∀j ∈ D   (8, 9)

with coefficient sensitivities given by

[β, β_{s₁}, ..., β_{s_{ℓS}}]ᵀ = −R [y_c, Q_{s₁c}, ..., Q_{s_{ℓS}c}]ᵀ   (10)

where R = Q⁻¹, and β_j ≡ 0 for all j outside S. Substituted in (4), the margins change according to:

Δg_i = γ_i Δa_c,   ∀i ∈ D ∪ {c}   (11)

with margin sensitivities

γ_i = Q_{ic} + Σ_{j∈S} Q_{ij} β_j + y_i β   (12)

and γ_i ≡ 0 for all i in S.

2.3 Bookkeeping: upper limit on increment Δa_c

It has been tacitly assumed above that Δa_c is small enough so that no element of D moves across S, E and/or R in the process. Since the a_j and g_i change with a_c through (9) and (11), some bookkeeping is required to check each of the following conditions, and determine the largest possible increment Δa_c accordingly:

1. g_c ≤ 0, with equality when c joins S;
2. a_c ≤ C, with equality when c joins E;
3. 0 ≤ a_j ≤ C, ∀j ∈ S, with equality 0 when j transfers from S to R, and equality C when j transfers from S to E;
4. g_i ≤ 0, ∀i ∈ E, with equality when i transfers from E to S;
5. g_i ≥ 0, ∀i ∈ R, with equality when i transfers from R to S.

2.4 Recursive magic: R updates

To add candidate c to the working margin vector set S, R is expanded as:

R ← [ R, 0 ; 0ᵀ, 0 ] + (1/γ_c) [β, β_{s₁}, ..., β_{s_{ℓS}}, 1]ᵀ [β, β_{s₁}, ..., β_{s_{ℓS}}, 1]   (13)

The same formula applies to add any vector (not necessarily the candidate) c to S, with parameters β, β_j and γ_c calculated as in (10) and (12). The expansion of R, like incremental learning itself, is reversible. To remove a margin vector k from S, R is contracted as:

R_{ij} ← R_{ij} − R_{kk}⁻¹ R_{ik} R_{kj},   ∀i, j ∈ S ∪ {0}; i, j ≠ k   (14)

where index 0 refers to the b-term. The R update rules (13) and (14) are similar to on-line recursive estimation of the covariance of (sparsified) Gaussian processes [2].

Figure 2: Incremental learning. A new vector, initially classified for a_c = 0 with negative margin g_c < 0, becomes a new margin or error vector.
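Both updates are rank-one operations. A minimal NumPy sketch (my own illustration under the rank-one form of Eq. (13) given above, not the authors' code; R includes the b-term as row/column 0):

```python
import numpy as np

def expand_R(R, beta_vec, gamma_c):
    """Eq. (13): grow R by one row/column when a vector joins S.
    beta_vec = [beta, beta_s1, ..., beta_sls] from Eq. (10)."""
    n = R.shape[0]
    R_new = np.zeros((n + 1, n + 1))
    R_new[:n, :n] = R
    v = np.append(beta_vec, 1.0)
    return R_new + np.outer(v, v) / gamma_c

def contract_R(R, k):
    """Eq. (14): shrink R when margin vector k leaves S (k = 0 would
    be the b-term, which always stays)."""
    R = R - np.outer(R[:, k], R[k, :]) / R[k, k]
    return np.delete(np.delete(R, k, axis=0), k, axis=1)
```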
2.5 Incremental procedure

Let ℓ → ℓ + 1 by adding point c (candidate margin or error vector) to D: D_{ℓ+1} = D_ℓ ∪ {c}. Then the new solution {a_i^{ℓ+1}, b^{ℓ+1}}, i = 1, ..., ℓ + 1, is expressed in terms of the present solution {a_i^ℓ, b^ℓ}, the present Jacobian inverse R, and the candidate x_c, y_c, as:

Algorithm 1 (Incremental Learning, ℓ → ℓ + 1)
1. Initialize a_c to zero;
2. If g_c > 0, terminate (c is not a margin or error vector);
3. If g_c ≤ 0, apply the largest possible increment a_c so that (the first) one of the following conditions occurs:
(a) g_c = 0: Add c to margin set S, update R accordingly, and terminate;
(b) a_c = C: Add c to error set E, and terminate;
(c) Elements of D_ℓ migrate across S, E, and R ("bookkeeping," Section 2.3): Update membership of elements and, if S changes, update R accordingly;
and repeat as necessary.

The incremental procedure is illustrated in Figure 2. Old vectors, from previously seen training data, may change status along the way, but the process of adding the training data c to the solution converges in a finite number of steps.

2.6 Practical considerations

The trajectory of an example incremental training session is shown in Figure 3. The algorithm yields results identical to those at convergence using other QP approaches [7], with comparable speeds on various datasets ranging up to several thousand training points.¹ A practical on-line variant for larger datasets is obtained by keeping track only of a limited set of "reserve" vectors, R = {i ∈ D | 0 < g_i < t}, and discarding all data for which g_i ≥ t. For small t, this implies a small overhead in memory over S and E. The larger t, the smaller the probability of missing a future margin or error vector in previous data. The resulting storage requirements are dominated by that for the inverse Jacobian R, which scales as (ℓ_S)², where ℓ_S is the number of margin support vectors, #S.

¹MATLAB code and data are available at http://bach.ece.jhu.edu/pub/gert/svm/incremental.

3 Decremental "Unlearning"

Leave-one-out (LOO) is a standard procedure in predicting the generalization power of a trained classifier, both from a theoretical and empirical perspective [12]. It is naturally implemented by decremental unlearning, adiabatic reversal of incremental learning, on each of the training data from the full trained solution. Similar (but different) bookkeeping of elements migrating across S, E and R applies as in the incremental case.

Figure 3: Trajectory of coefficients a_i as a function of iteration step during training, for ℓ = 100 non-separable points in two dimensions, with C = 10, and using a Gaussian kernel with σ = 1. The data sequence is shown on the left.

Figure 4: Leave-one-out (LOO) decremental unlearning (a_c → 0) for estimating generalization performance, directly on the training data. g_c^{\c} < −1 reveals a LOO classification error.

3.1 Leave-one-out procedure

Let ℓ → ℓ − 1 by removing point c (margin or error vector) from D: D^{\c} = D \ {c}. The solution {a_i^{\c}, b^{\c}} is expressed in terms of {a_i, b}, R, and the removed point x_c, y_c. The solution yields g_c^{\c}, which determines whether leaving c out of the training set generates a classification error (g_c^{\c} < −1). Starting from the full ℓ-point solution:

Algorithm 2 (Decremental Unlearning, ℓ → ℓ − 1, and LOO Classification)
1. If c is not a margin or error vector: Terminate, "correct" (c is already left out, and correctly classified);
2. If c is a margin or error vector with g_c < −1: Terminate, "incorrect" (by default as a training error);
3. If c is a margin or error vector with g_c ≥ −1, apply the largest possible decrement a_c so that (the first) one of the following conditions occurs:
(a) g_c < −1: Terminate, "incorrect";
(b) a_c = 0: Terminate, "correct";
(c) Elements of D migrate across S, E, and R: Update membership of elements and, if S changes, update R accordingly;
and repeat as necessary.

The leave-one-out procedure is illustrated in Figure 4.
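Both algorithms hinge on the bookkeeping step of Section 2.3, which reduces to a sequence of minimum computations over the sensitivities. A possible sketch (a hypothetical helper, not from the paper; beta and gamma are assumed to come from Eqs. (10) and (12), indexed like a and g):

```python
def max_increment(a_c, g_c, gamma_c, C, a, g, beta, gamma, S, E, R):
    """Largest admissible Delta a_c before one of conditions 1-5 of
    Section 2.3 becomes active (the increment applied in step 3 of
    Algorithm 1)."""
    limits = [C - a_c]                            # condition 2: a_c reaches C
    if gamma_c > 0:
        limits.append(-g_c / gamma_c)             # condition 1: g_c rises to 0
    for j in S:                                   # condition 3: a_j hits 0 or C
        if beta[j] > 0:
            limits.append((C - a[j]) / beta[j])
        elif beta[j] < 0:
            limits.append(-a[j] / beta[j])
    for i in E:                                   # condition 4: g_i rises to 0
        if gamma[i] > 0:
            limits.append(-g[i] / gamma[i])
    for i in R:                                   # condition 5: g_i falls to 0
        if gamma[i] < 0:
            limits.append(-g[i] / gamma[i])
    return min(limits)
```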
Figure 5: Trajectory of the LOO margin g_c as a function of the leave-one-out coefficient a_c. The data and parameters are as in Figure 3.

3.2 Leave-one-out considerations

If an exact LOO estimate is requested, two passes through the data are required. The LOO pass has similar run-time complexity and memory requirements as the incremental learning procedure. This is significantly better than the conventional approach to empirical LOO evaluation, which requires ℓ (partial, but possibly still extensive) training sessions. There is a clear correspondence between generalization performance and the LOO margin sensitivity γ_c. As shown in Figure 4, the value of the LOO margin g_c^{\c} is obtained from the sequence of g_c vs. a_c segments for each of the decrement steps, and is thus determined by their slopes γ_c. Incidentally, the LOO approximation using linear response theory in [6] corresponds to the first segment of the LOO procedure, effectively extrapolating the value of g_c^{\c} from the initial value of γ_c. This simple LOO approximation gives satisfactory results in most (though not all) cases, as illustrated in the example LOO session of Figure 5. Recent work in statistical learning theory has sought improved generalization performance by considering non-uniformity of distributions in feature space [13] or non-uniformity in the kernel matrix eigenspectrum [10]. A geometrical interpretation of decremental unlearning, presented next, sheds further light on the dependence of generalization performance, through γ_c, on the geometry of the data.

4 Geometric Interpretation in Feature Space

The differential Kuhn-Tucker conditions (4) and (5) translate directly in terms of the sensitivities γ_i and β_j as

γ_i = Q_{ic} + Σ_{j∈S} Q_{ij} β_j + y_i β,   ∀i ∈ D ∪ {c}   (15)
0 = y_c + Σ_{j∈S} y_j β_j.   (16)

Through the nonlinear map X_i = y_i φ(x_i) into feature space, the kernel matrix elements reduce to linear inner products:

Q_{ij} = y_i y_j K(x_i, x_j) = X_i · X_j,   ∀i, j   (17)

and the KT sensitivity conditions (15) and (16) in feature space become

γ_i = X_i · (X_c + Σ_{j∈S} X_j β_j) + y_i β,   ∀i ∈ D ∪ {c}   (18)
0 = y_c + Σ_{j∈S} y_j β_j.   (19)

Since γ_i = 0, ∀i ∈ S, (18) and (19) are equivalent to minimizing a functional:

min_β : W_c = ½ (X_c + Σ_{j∈S} X_j β_j)²   (20)

subject to the equality constraint (19), with Lagrange parameter β. Furthermore, the optimal value of W_c immediately yields the sensitivity γ_c, from (18):

γ_c = 2W_c = (X_c + Σ_{j∈S} X_j β_j)² ≥ 0.   (21)

In other words, the distance in feature space between sample c and its projection on S along (19) determines, through (21), the extent to which leaving out c affects the classification of c. Note that only margin support vectors are relevant in (21), and not the error vectors, which otherwise contribute to the decision boundary.

5 Concluding Remarks

Incremental learning and, in particular, decremental unlearning offer a simple and computationally efficient scheme for on-line SVM training and exact leave-one-out evaluation of the generalization performance on the training data. The procedures can be directly extended to a broader class of kernel learning machines with convex quadratic cost functional under linear constraints, including SV regression. The algorithm is intrinsically on-line and extends to query-based learning methods [1]. Geometric interpretation of decremental unlearning in feature space elucidates a connection, similar to [13], between generalization performance and the distance of the data from the subspace spanned by the margin vectors.

References
Smola, "Query Learning with Large Margin Classifiers," in Proc. 17th Tnt. Con! Machine Learning (TCML2000), Morgan Kaufman, 2000. [2] L. Csato and M. Opper, "Sparse Representation for Gaussian Process Models," in Adv. Neural Information Processing Systems (NIPS'2000), vol. 13, 2001. [3] T.-T. FrieB, N. Cristianini and C. Campbell, "The Kernel Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines," in 15th Tnt. Con! Machine Learning, Morgan Kaufman, 1998. [4] T.S. Jaakkola and D. Haussler, "Probabilistic Kernel Methods," Proc. 7th Int. Workshop on Artificial Tntelligence and Statistics, 1998. [5] T. Joachims, "Making Large-Scale Support Vector Machine Leaming Practical," in Scholkopf, Burges and Smola, Eds., Advances in Kernel Methods- Support Vector Learning, Cambridge MA: MIT Press, 1998, pp 169-184. [6] M. Opper and O. Winther, "Gaussian Processes and SVM: Mean Field Results and Leave-OneOut," Adv. Large Margin Classifiers, A. Smola, P. Bartlett, B. SchOlkopf and D. Schuurmans, Eds., Cambridge MA: MIT Press, 2000, pp 43-56. [7] E. Osuna, R. Freund and F. Girosi, ''An Improved Training Algorithm for Support Vector Machines," Proc. 1997 TEEE Workshop on Neural Networks for Signal Processing, pp 276-285, 1997. [8] J.C. Platt, "Fast Training of Support Vector Machines Using Sequential Minimum Optimization," in Scholkopf, Burges and Smola, Eds., Advances in Kernel Methods- Support Vector Learning, Cambridge MA: MIT Press, 1998, pp 185-208. [9] M. Pontil and A. Verri, "Properties of Support Vector Machines," it Neural Computation, vol. 10, pp 955-974, 1997. [10] B. Scholkopf, J. Shawe-Taylor, A.J. Smola and R.C. Williamson, "Generalization Bounds via Eigenvalues of the Gram Matrix," NeuroCOLT, Technical Report 99-035, 1999. [11] N.A. Syed, H. Liu and K.K. Sung, "Incremental Learning with Support Vector Machines," in Proc. Int. foint Con! on Artificial Intelligence (IJCAI-99), 1999. [12] V. Vapnik, The Nature of Statistical Learning Theory,' New York: Springer-Verlag, 1995. [13] V. Vapnik and O. Chapelle, "Bounds on Error Expectation for SVM," in Smola, Bartlett, Scholkopf and Schuurmans, Eds., Advances in Large Margin Classifiers, Cambridge MA: MIT Press, 2000.
Sequentially fitting "inclusive" trees for inference in noisy-OR networks

Brendan J. Frey¹, Relu Patrascu¹, Tommi S. Jaakkola², Jodi Moran¹
¹Intelligent Algorithms Lab, University of Toronto, www.cs.toronto.edu/~frey
²Computer Science and Electrical Engineering, Massachusetts Institute of Technology

Abstract

An important class of problems can be cast as inference in noisy-OR Bayesian networks, where the binary state of each variable is a logical OR of noisy versions of the states of the variable's parents. For example, in medical diagnosis, the presence of a symptom can be expressed as a noisy-OR of the diseases that may cause the symptom - on some occasions, a disease may fail to activate the symptom. Inference in richly-connected noisy-OR networks is intractable, but approximate methods (e.g., variational techniques) are showing increasing promise as practical solutions. One problem with most approximations is that they tend to concentrate on a relatively small number of modes in the true posterior, ignoring other plausible configurations of the hidden variables. We introduce a new sequential variational method for bipartite noisy-OR networks, that favors including all modes of the true posterior and models the posterior distribution as a tree. We compare this method with other approximations using an ensemble of networks with network statistics that are comparable to the QMR-DT medical diagnostic network.

1 Inclusive variational approximations

Approximate algorithms for probabilistic inference are gaining in popularity and are now even being incorporated into VLSI hardware (T. Richardson, personal communication). Approximate methods include variational techniques (Ghahramani and Jordan 1997; Saul et al. 1996; Frey and Hinton 1999; Jordan et al. 1999), local probability propagation (Gallager 1963; Pearl 1988; Frey 1998; MacKay 1999a; Freeman and Weiss 2001) and Markov chain Monte Carlo (Neal 1993; MacKay 1999b). Many algorithms have been proposed in each of these classes.

One problem that most of the above algorithms suffer from is a tendency to concentrate on a relatively small number of modes of the target distribution (the distribution being approximated). In the case of medical diagnosis, different modes correspond to different explanations of the symptoms. Markov chain Monte Carlo methods are usually guaranteed to eventually sample from all the modes, but this may take an extremely long time, even when tempered transitions (Neal 1996) are used. Preliminary results on local probability propagation in richly connected networks show that it is sometimes able to oscillate between plausible modes (Murphy et al. 1999; Frey 2000), but other results also show that it sometimes diverges or oscillates between implausible configurations (McEliece et al. 1996). Most variational techniques minimize a cost function that favors finding the single, most massive mode, excluding less probable modes of the target distribution (e.g., Saul et al. 1996; Ghahramani and Jordan 1997; Jaakkola and Jordan 1999; Frey and Hinton 1999; Attias 1999).

Figure 1: We approximate P(x) by adjusting the mean and variance of a Gaussian, Q(x). (a) The result of minimizing D(Q‖P) = Σ_x Q(x) log(Q(x)/P(x)), as is done for most variational methods. (b) The result of minimizing D(P‖Q) = Σ_x P(x) log(P(x)/Q(x)).

More sophisticated variational techniques capture multiple modes using substructures (Saul and Jordan 1996) or by leaving part of the original network intact and approximating the remainder (Jaakkola and Jordan 1999). However, although these methods increase the number of modes that are captured, they still exclude modes. Variational techniques approximate a target distribution P(x) using a simpler, parameterized distribution Q(x) (or a parameterized bound). For example, P(disease₁, disease₂, ..., disease_N | symptoms) may be approximated by a factorized distribution Q₁(disease₁) Q₂(disease₂) ··· Q_N(disease_N). For the current set of observed symptoms, the parameters of the Q-distributions are adjusted to make Q as close as possible to P. A common approach to variational inference is to minimize a relative entropy,

D(Q‖P) = Σ_x Q(x) log ( Q(x) / P(x) ).   (1)

Notice that D(Q‖P) ≠ D(P‖Q). Often D(Q‖P) can be minimized with respect to the parameters of Q using iterative optimization or even exact optimization. To see how minimizing D(Q‖P) may exclude modes of the target distribution, suppose Q is a Gaussian and P is bimodal with a region of vanishing density between the two modes, as shown in Fig. 1. If we minimize D(Q‖P) with respect to the mean and variance of Q, it will cover only one of the two modes, as illustrated in Fig. 1a. (We assume the symmetry is broken.) This is because D(Q‖P) will tend to infinity if Q is nonzero in the region where P has vanishing density. In contrast, if we minimize D(P‖Q) = Σ_x P(x) log(P(x)/Q(x)) with respect to the mean and variance of Q, it will cover all modes, since D(P‖Q) will tend to infinity if Q vanishes in any region where P is nonzero. See Fig. 1b. For many problems, including medical diagnosis, it is easy to argue that it is more important that our approximation include all modes than exclude nonplausible configurations at the cost of excluding other modes. The former leads to a low number of false negatives, whereas the latter may lead to a large number of false negatives (concluding a disease is not present when it is).
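A toy numerical check of the effect in Fig. 1 (my illustration, not from the paper): fit the mean and standard deviation of a Gaussian Q to a bimodal target P by grid search under each divergence.

```python
import numpy as np

x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
# Bimodal target: two narrow Gaussian bumps at -2 and +2.
p = np.exp(-0.5 * ((x + 2) / 0.5) ** 2) + np.exp(-0.5 * ((x - 2) / 0.5) ** 2)
p /= p.sum() * dx

def gauss(mu, sig):
    q = np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return q / (q.sum() * dx)

def kl(a, b):
    eps = 1e-300                      # guard against log(0)
    return float(np.sum(a * np.log((a + eps) / (b + eps))) * dx)

grid = [(m, s) for m in np.linspace(-3, 3, 61) for s in np.linspace(0.2, 4, 39)]
best_qp = min(grid, key=lambda t: kl(gauss(*t), p))   # D(Q||P)
best_pq = min(grid, key=lambda t: kl(p, gauss(*t)))   # D(P||Q)
print(best_qp)   # narrow Q locked onto a single mode (mu near +/-2)
print(best_pq)   # broad Q covering both modes (mu near 0)
```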
Figure 2: Bipartite Bayesian network. The s_k are observed, the d_n are hidden.

2 Bipartite noisy-OR networks

Fig. 2 shows a bipartite noisy-OR Bayesian network with N binary hidden variables d = (d₁, ..., d_N) and K binary observed variables s = (s₁, ..., s_K). Later, we present results on medical diagnosis, where d_n = 1 indicates a disease is active, d_n = 0 indicates a disease is inactive, s_k = 1 indicates a symptom is active and s_k = 0 indicates a symptom is inactive. The joint distribution is

P(d, s) = [ Π_{k=1}^K P(s_k | d) ] [ Π_{n=1}^N P(d_n) ].   (2)

In the case of medical diagnosis, this form assumes the diseases are independent.¹ Although some diseases probably do depend on other diseases, this form is considered to be a worthwhile representation of the problem (Shwe et al. 1991). The likelihood for s_k takes the noisy-OR form (Pearl 1988). The probability that symptom s_k fails to be activated (s_k = 0) is the product of the probabilities that each active disease fails to activate s_k:

P(s_k = 0 | d) = p_{k0} Π_{n=1}^N p_{kn}^{d_n}.   (3)

Here p_{kn} is the probability that an active d_n fails to activate s_k, and p_{k0} accounts for a "leak probability": 1 − p_{k0} is the probability that symptom s_k is active when none of the diseases are active.

¹However, the diseases are dependent given that some symptoms are present.

Exact inference computes the distribution over d given a subset of observed values in s. However, if s_k is not observed, the corresponding likelihood (node plus edges) may be deleted to give a new network that describes the marginal distribution over d and the remaining variables in s. So, we assume that we are considering a subnetwork where all the variables in s are observed. We reorder the variables in s so that the first J variables are active (s_k = 1, 1 ≤ k ≤ J) and the remaining variables are inactive (s_k = 0, J + 1 ≤ k ≤ K). The posterior distribution can then be written

P(d|s) ∝ P(d, s) = [ Π_{k=1}^J (1 − p_{k0} Π_{n=1}^N p_{kn}^{d_n}) ] [ Π_{k=J+1}^K (p_{k0} Π_{n=1}^N p_{kn}^{d_n}) ] [ Π_{n=1}^N P(d_n) ].   (4)

Taken together, the two terms in brackets on the right take a simple, product form over the variables in d. So, the first step in inference is to "absorb" the inactive variables in s by modifying the priors P(d_n) as follows:

P′(d_n) = α_n P(d_n) Π_{k=J+1}^K (p_{kn})^{d_n},   (5)

where α_n is a constant that normalizes P′(d_n). Assuming the inactive symptoms have been absorbed, we have

P(d|s) ∝ [ Π_{k=1}^J (1 − p_{k0} Π_{n=1}^N p_{kn}^{d_n}) ] [ Π_{n=1}^N P′(d_n) ].   (6)

The term in brackets on the left does not have a product form. The entire expression can be multiplied out to give a sum of 2^J product forms, and exact "QuickScore" inference can be performed by combining the results of exact inference in each of the 2^J product forms (Heckerman 1989). However, this exponential time complexity makes large problems, such as QMR-DT, intractable.
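The model and the absorption step of Eq. (5) take only a few lines of code. A minimal sketch (plain NumPy, not the authors' implementation; p0[k] holds p_{k0}, p[k, n] holds p_{kn}, and prior[n] holds P(d_n = 1)):

```python
import numpy as np

def p_symptom_off(d, p0_k, p_k):
    """Eq. (3): P(s_k = 0 | d) for a binary disease vector d."""
    return p0_k * np.prod(p_k ** d)

def absorb_negative_findings(prior, p, neg):
    """Eq. (5): fold the inactive symptoms (row indices `neg` of p)
    into the priors; returns the modified priors P'(d_n = 1).
    The common leak factors p_k0 cancel in the normalization."""
    on = prior * np.prod(p[neg, :], axis=0)   # d_n = 1 picks up prod_k p_kn
    off = 1.0 - prior                         # d_n = 0 contributes a factor 1
    return on / (on + off)                    # per-variable constant alpha_n
```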
3 Sequential inference using inclusive variational trees

As described above, many variational methods minimize D(Q‖P), and find approximations that exclude some modes of the posterior distribution. We present a method that minimizes D(P‖Q) sequentially - by absorbing one observation at a time - so as to not exclude modes of the posterior. Also, we approximate the posterior distribution with a tree. (Directed and undirected trees are equivalent; we use a directed representation, where each variable has at most one parent.) The algorithm absorbs one active symptom at a time, producing a new tree by searching for the tree that is closest - in the D(P‖Q) sense - to the product of the previous tree and the likelihood for the next symptom. This search can be performed efficiently in O(N²) time using probability propagation in two versions of the previous tree to compute weights for edges of a new tree, and then applying a minimum-weight spanning-tree algorithm.

Let T_k(d) be the tree approximation obtained after absorbing the kth active symptom, s_k = 1. Initially, we take T₀(d) to be a tree that decouples the variables and has marginals equal to the marginals obtained by absorbing the inactive symptoms, as described above. Interpreting the tree T_{k−1}(d) from the previous step as the current "prior" over the diseases, we use the likelihood P(s_k = 1 | d) for the next symptom to obtain a new estimate of the posterior:

P̂_k(d | s₁, ..., s_k) ∝ T_{k−1}(d) P(s_k = 1 | d) = T_{k−1}(d) (1 − p_{k0} Π_{n=1}^N p_{kn}^{d_n}) = T_{k−1}(d) − T′_{k−1}(d),   (7)

where T′_{k−1}(d) = T_{k−1}(d) (p_{k0} Π_{n=1}^N p_{kn}^{d_n}) is a modified tree. Let the new tree be T_k(d) = Π_n T_k(d_n | d_{π_k(n)}), where π_k(n) is the index of the parent of d_n in the new tree. The parent function π_k(n) and the conditional probability tables of T_k(d) are found by minimizing

D(P̂_k ‖ T_k).   (8)

Ignoring constants, we have

D(P̂_k ‖ T_k) = −Σ_d P̂_k(d | s₁, ..., s_k) log T_k(d)
= −Σ_d ( T_{k−1}(d) − T′_{k−1}(d) ) log ( Π_n T_k(d_n | d_{π_k(n)}) )
= −Σ_n Σ_{d_n} Σ_{d_{π_k(n)}} ( T_{k−1}(d_n, d_{π_k(n)}) − T′_{k−1}(d_n, d_{π_k(n)}) ) log T_k(d_n | d_{π_k(n)}).

For a given structure (parent function π_k(n)), the optimal conditional probability tables are

T_k(d_n | d_{π_k(n)}) = β_n ( T_{k−1}(d_n, d_{π_k(n)}) − T′_{k−1}(d_n, d_{π_k(n)}) ),   (9)

where β_n is a constant that ensures Σ_{d_n} T_k(d_n | d_{π_k(n)}) = 1. This table is easily computed using probability propagation in the two trees to compute the two marginals needed in the difference.

The optimal conditional probability table for a variable is independent of the parent-child relationships in the remainder of the network. So, for the current symptom, we compute the optimal conditional probability tables for all N(N − 1)/2 possible parent-child relationships in O(N²) time using probability propagation. Then, we use a minimum-weight directed spanning tree algorithm (Bock 1971) to search for the best tree. Once all of the symptoms have been absorbed, we use the final tree distribution T_J(d) to make inferences about d given s. The order in which the symptoms are absorbed will generally affect the quality of the resulting tree (Jaakkola and Jordan 1999), but we used a random ordering in the experiments reported below.
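Per candidate edge, the fit reduces to a small table computation. A sketch of the per-edge step (my illustration, assuming the 2×2 pairwise marginals of T_{k−1} and T′_{k−1} have already been computed by propagation; rows index the parent value, columns the child value):

```python
import numpy as np

def optimal_cpt(pair, pair_prime):
    """Eq. (9): optimal table T_k(d_n | d_parent) from the pairwise
    marginals of T_{k-1} and T'_{k-1}; beta_n normalizes over d_n."""
    diff = pair - pair_prime                      # nonnegative by Eq. (7)
    return diff / diff.sum(axis=1, keepdims=True)

def edge_weight(pair, pair_prime):
    """Contribution of this parent->child edge to D(P_hat || T_k) with
    the optimal table plugged in; the minimum-weight directed spanning
    tree over these weights gives the best structure."""
    diff = pair - pair_prime
    return -np.sum(diff * np.log(optimal_cpt(pair, pair_prime)))
```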
4 Results on QMR-DT type networks

Using the structural and parameter statistics of the QMR-DT network given in Shwe et al. (1991), we simulated 30 QMR-DT type networks with roughly 600 diseases each. There were 10 networks in each of 3 groups with 5, 10 and 15 instantiated active symptoms. We chose the number of active symptoms to be small enough that we can compare our approximate method with the exact QuickScore method (Heckerman 1989). We also tried two other approximate inference methods: local probability propagation (Murphy et al. 1999) and a variational upper bound (Jaakkola and Jordan 1999).

For medical diagnosis, an important question is how many most probable diseases n′ under the approximate posterior must be examined before the most probable n diseases under the exact posterior are found. Clearly, n ≤ n′ ≤ N. An exact inference algorithm will give n′ = n, whereas an approximate algorithm that mistakenly ranks the most probable disease last will give n′ = N. For each group of networks and each inference method, we averaged the 10 values of n′ for each value of n. The left column of plots in Fig. 3 shows the average of n′ versus n for 5, 10 and 15 active symptoms. The sequential tree-fitting method is closest to optimal (n′ = n) in all cases. The right column of plots shows the "extra work" caused by the excess number of diseases n′ − n that must be examined for the approximate methods.

Figure 3: Comparisons of the number of most probable diseases n′ under the approximate posterior that must be examined before the most probable n diseases under the exact posterior are found.
Approximate methods include the sequential tree-fitting method presented in this paper (tree), local probability propagation (pp) and a variational upper bound (ub).

5 Summary

Noisy-OR networks can be used to model a variety of problems, including medical diagnosis. Exact inference in large, richly connected noisy-OR networks is intractable, and most approximate inference algorithms tend to concentrate on a small number of most probable configurations of the hidden variables under the posterior. We presented an "inclusive" variational method for bipartite noisy-OR networks that favors including all probable configurations, at the cost of including some improbable configurations. The method fits a tree to the posterior distribution sequentially, i.e., one observation at a time. Results on an ensemble of QMR-DT type networks show that the method performs better than local probability propagation and a variational upper bound for ranking most probable diseases.

Acknowledgements. We thank Dale Schuurmans for discussions about this work.

References

H. Attias 1999. Independent factor analysis. Neural Computation 11:4, 803-852.
F. Bock 1971. An algorithm to construct a minimum directed spanning tree in a directed network. Developments in Operations Research, Gordon and Breach, New York, 29-44.
W. T. Freeman and Y. Weiss 2001. On the fixed points of the max-product algorithm. To appear in IEEE Transactions on Information Theory, special issue on Codes on Graphs and Iterative Algorithms.
B. J. Frey 1998. Graphical Models for Machine Learning and Digital Communication. MIT Press, Cambridge, MA.
B. J. Frey 2000. Filling in scenes by propagating probabilities through layers and into appearance models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society Press, Los Alamitos, CA.
B. J. Frey and G. E. Hinton 1999. Variational learning in non-linear Gaussian belief networks. Neural Computation 11:1, 193-214.
R. G. Gallager 1963. Low-Density Parity-Check Codes. MIT Press, Cambridge, MA.
Z. Ghahramani and M. I. Jordan 1997. Factorial hidden Markov models. Machine Learning 29, 245-273.
D. Heckerman 1989. A tractable inference algorithm for diagnosing multiple diseases. Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence.
T. S. Jaakkola and M. I. Jordan 1999. Variational probabilistic inference and the QMR-DT network. Journal of Artificial Intelligence Research 10, 291-322.
M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul 1999. An introduction to variational methods for graphical models. In M. I. Jordan (ed), Learning in Graphical Models, MIT Press, Cambridge, MA.
D. J. C. MacKay 1999a. Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory 45:2, 399-431.
D. J. C. MacKay 1999b. Introduction to Monte Carlo methods. In M. I. Jordan (ed), Learning in Graphical Models, MIT Press, Cambridge, MA.
R. J. McEliece, E. R. Rodemich and J.-F. Cheng 1996. The turbo decision algorithm. Proceedings of the 33rd Allerton Conference on Communication, Control and Computing, Champaign-Urbana, IL.
K. P. Murphy, Y. Weiss and M. I. Jordan 1999. Loopy belief propagation for approximate inference: An empirical study. Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, CA.
R. M. Neal 1993. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Computer Science, University of Toronto.
Sampling from multimodal distributions using tempered transitions. Statistics and Computing 6, 353-366.
L. K. Saul, T. Jaakkola and M. I. Jordan 1996. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research 4, 61-76.
L. K. Saul and M. I. Jordan 1996. Exploiting tractable substructures in intractable networks. In D. Touretzky, M. Mozer, and M. Hasselmo (eds), Advances in Neural Information Processing Systems 8. MIT Press, Cambridge, MA.
M. Shwe, B. Middleton, D. Heckerman, M. Henrion, E. Horvitz, H. Lehmann and G. Cooper 1991. Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base I. The probabilistic model and inference algorithms. Methods of Information in Medicine 30, 241-255.
Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

Barbara Zenger
Division of Biology, Caltech 139-74, Pasadena, CA 91125
barbara@klab.caltech.edu

Christof Koch
Computation and Neural Systems, Caltech 139-74, Pasadena, CA 91125
koch@klab.caltech.edu

Abstract

We describe an analogy between psychophysically measured effects in contrast masking and the behavior of a simple integrate-and-fire neuron that receives time-modulated inhibition. In the psychophysical experiments, we tested observers' ability to discriminate contrasts of peripheral Gabor patches in the presence of collinear Gabor flankers. The data reveal a complex interaction pattern that we account for by assuming that flankers provide divisive inhibition to the target unit for low target contrasts, but provide subtractive inhibition to the target unit for higher target contrasts. A similar switch from divisive to subtractive inhibition is observed in an integrate-and-fire unit that receives inhibition modulated in time such that the cell spends part of the time in a high-inhibition state and part of the time in a low-inhibition state. The similarity between the effects suggests that one may cause the other. The biophysical model makes testable predictions for physiological single-cell recordings.

1 Psychophysics

Visual images of Gabor patches are thought to excite a small and specific subset of neurons in the primary visual cortex and beyond. By measuring psychophysically in humans the contrast detection and discrimination thresholds of peripheral Gabor patches, one can estimate the sensitivity of this subset of neurons. Furthermore, spatial interactions between different neuronal populations can be probed by testing the effects of additional Gabor patches (masks) on performance. Such experiments have revealed a highly configuration-specific pattern of excitatory and inhibitory spatial interactions [1, 2].

1.1 Methods

Two vertical Gabor patches with a spatial frequency of 4 cyc/deg were presented at 4 deg eccentricity left and right of fixation, and observers had to report which patch had the higher contrast (spatial 2AFC). In the "flanker condition" (see Fig. 1A), the two targets were each flanked by two collinear Gabor patches of 40% contrast, presented above and below the targets (at a distance of 0.75 deg, i.e., 3 times the spatial period of the Gabor). Observers fixated a central cross, which was visible before and during each trial, and then initiated the trial by pressing the space bar on the computer keyboard. Two circular cues appeared for 180 ms to indicate the locations of the two targets (to minimize spatial uncertainty). A blank stimulus of randomized length (500 ms ± 100 ms) was followed by an 83 ms stimulus presentation. No mask was presented. Observers indicated which target had the higher contrast ("left" or "right") by specified keys. Auditory feedback was provided. Thresholds were determined using a staircase procedure [3]. Whenever the staircase procedure showed a ceiling effect (asking to display contrasts above 100%), the data for this pedestal contrast in this condition were not considered for this observer, even if valid threshold estimates were obtained on other days, because considering only the 'good days' would have introduced a bias. Seven observers with normal or corrected-to-normal vision participated in the experiment. Each condition was repeated at least six times. The experimental procedure is in accordance with Caltech's Committee for the Protection of Human Subjects.
Experiments were controlled by an O2 Silicon Graphics workstation, and stimuli were displayed on a raster monitor. Mean luminance Lm was set to 40 cd/m². We used color-bit stealing to increase the number of grey levels that can be displayed [4]. A gamma correction ensured linearity of the gray levels. To remove some of the effects of inter-observer variability from our data analysis, the entire data set of each observer was first normalized by his or her average performance across all conditions, and only then were averages and standard errors computed. The mean standard errors across all conditions and contrast levels are presented as bars in Figs. 1B and 1D.

1.2 Results

In the absence of flankers (circles, Fig. 1B), discrimination thresholds first decrease from the absolute detection threshold at 8.7% with increasing pedestal contrast and then increase again. As is common in sensory psychophysics, we assume that the contrast discrimination thresholds can be derived from an underlying sigmoidal contrast-response function r(c) (see Fig. 1C, solid curve), together with the assumption that some fixed response difference Δr = 1 is required for correct discrimination [2]. In other words, for any fixed pedestal contrast c, the discrimination threshold Δc satisfies r(c + Δc) = r(c) + 1. Our underlying assumption is that at the decision stage, the level of noise in the signal is independent of the response magnitude. Neuronal noise, on the other hand, is usually well characterized by a Poisson process, that is, the noise level increases with increasing response. Little evidence exists, however, that this "early" response-dependent noise actually limits the performance. It is conceivable that this early noise is relatively small, that the performance-limiting noise is added at a later processing stage, and that this noise is independent of the response magnitude. To describe the response r of the system to a single, well-isolated target as a function of its contrast c, we adopt the function suggested by Foley (1994) [2]:

  r_isolated(c) = a c^p / ( c^(p-q) + c_th^(p-q) )     (1)

For plausible parameters (c, c_th > 0) this function is proportional to c^p for c ≪ c_th and is proportional to c^q for c ≫ c_th, consistent with a modified Weber law [5].

[Figure 1, panels B and D: discrimination threshold versus pedestal contrast c (0% to 60%) for the no-flanks and flanked conditions, with SEM bars; panel C: the contrast-response functions over 0-100% contrast.]

Figure 1: (A) Sample stimuli without flanks and with flanks. (B) Discrimination thresholds averaged across seven observers for flanked (diamonds) and unflanked (circles) targets. (C) Contrast-response functions used for the model prediction in (B). (D) Discrimination performance averaged across four observers for different flank contrasts. Lines in (B) and (D) represent the best model fit.

What happens to the dipper function when the two targets are flanked by Gabor patches of 40% contrast? In the presence of flankers, contrast discrimination thresholds (diamonds, Fig. 1B) first decrease, then increase, then decrease again, and finally increase again, following a W-shaped function.
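As a quick illustration of how the dipper follows from Eq. (1), the sketch below solves r(c + Δc) = r(c) + 1 numerically. The parameter values are the Fig. 1B/C fit from Table 1 below, with contrasts expressed as fractions; the function names are ours, not the authors'.

```python
from scipy.optimize import brentq

def r_isolated(c, a=4.47, p=0.704, q=0.363, c_th=0.0714):
    """Contrast-response function of Eq. (1); contrast c is a fraction."""
    return a * c**p / (c**(p - q) + c_th**(p - q))

def discrimination_threshold(c_pedestal):
    """Smallest dc satisfying r(c + dc) = r(c) + 1 (the Delta-r = 1 rule)."""
    f = lambda dc: r_isolated(c_pedestal + dc) - r_isolated(c_pedestal) - 1.0
    return brentq(f, 1e-6, 1.0)  # root bracketed between ~0% and 100% contrast

# With these parameters the zero-pedestal threshold comes out near the
# measured 8.7% absolute detection threshold.
```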
Depending on target contrast, one can distinguish two distinctive flanker effects: for targets of 40% contrast or less, flankers impair discrimination. In the masking literature such suppressive effects are often attributed to a divisive input from the mask to the target; in other words, the flanks seem to reduce the target's gain [2]. For targets of 50% or more (four rightmost data points in Fig. 1B), contrast performance is about the same irrespective of whether flankers are present or not; at these high target contrasts, flankers apparently cease to contribute to the target's gain control. Following this concept, we define two model parameters to describe the effects of the flankers: the first parameter, c_0, determines the maximal target contrast at which gain control is still effective; the second parameter, b, determines the strength of the gain control. Formally written, we obtain:

  r_flanked(c) = r_isolated(c) / b,   for c ≤ c_0   (gain control)
  r_flanked(c) = r_isolated(c) - d,   for c ≥ c_0   (no gain control)     (2)

In the low-contrast range, the contrast-response functions with and without flankers are multiples of each other (factor b); in the high-contrast regime, the two curves are shifted vertically (offset d) with respect to each other (see Fig. 1C). The subtractive constant d is not a free parameter, but is determined by imposing that r be continuous at c = c_0, i.e., r_isolated(c_0)/b = r_isolated(c_0) - d. The parameters that best account for the average data in Figs. 1B and 1D in the least-mean-square sense were estimated using a multidimensional simplex algorithm [6].

Table 1: Best-fitting model parameters in the least-square sense.

                 no flanks                   20% flanks      40% flanks      70% flanks
              q      c_th    a     p        b      c_0       b      c_0       b      c_0
  Fig. 1B,C   0.363  7.14%   4.47  0.704    -      -         1.86   46.8%     -      -
  Fig. 1D     0.395  6.07%   3.78  0.704    1.69   26.4%     1.78   44.9%     2.01   64.3%

Increasing the flanker contrast leads both to an increase in the strength of gain control b and to an increase in the range c_0 in which gain control is effective. The predicted discrimination performance is shown superimposed on the data in Figs. 1B and 1D. As one can see, the model captures the behavior of the data reasonably well, considering that for each combined fit there are only four parameters to fit the unflanked data and two additional parameters for each W curve. Or, put differently, we use but two degrees of freedom to go from the unflanked to the flanked conditions.

2 Biophysics

While the above model explains the data, it remains a puzzle how the switch from divisive to subtractive inhibition is implemented neuronally. Here, we show that time-modulated inhibition can naturally account for the observed switch, without assuming input-dependent changes in the network.

2.1 Circuit Model

[Figure 2: circuit schematic of the model cell with conductances g_e, g_i and g_pass, battery E_e, and capacitance C.]

Figure 2: Circuit model used for the simulations.

To simulate the behavior of individual neurons we use a variant of the leaky integrate-and-fire unit (battery E_e = 70 mV, capacitance C = 200 pF, leak conductance g_pass = 10 nS, and firing threshold v_th = 20 mV; see Fig. 2). Excitatory and inhibitory synaptic input are modeled as changes in the conductances g_e and g_i, respectively. Whenever the membrane potential v_m exceeds threshold (v_th), a spike is initiated and the membrane potential v_m is reset to v_rest = 0. No refractory period was assumed. The model was implemented on a PC using the programming language C.

2.2 Simulations

Firing rates for increasing excitation (g_e) at various levels of inhibition (g_i) are shown in Fig. 3A.
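Before turning to the results, here is a minimal sketch of the simulation itself. The original was written in C; this Python version, with forward-Euler integration, an assumed 0.1 ms time step, and inhibition treated as a shunting conductance toward the 0 mV resting level, is ours.

```python
def lif_rate(g_e, g_i_of_t, E_e=0.070, C=200e-12, g_pass=10e-9,
             v_th=0.020, v_reset=0.0, T=1.0, dt=1e-4):
    """Firing rate (spikes/s) of the integrate-and-fire unit of Fig. 2.

    g_i_of_t(t) returns the inhibitory conductance at time t, so constant
    (Fig. 3A) and time-modulated (Fig. 3B) inhibition are both covered.
    """
    v, spikes = 0.0, 0
    for step in range(int(T / dt)):
        g_i = g_i_of_t(step * dt)
        # Excitation pulls v toward E_e; leak and inhibition pull it to 0 mV.
        v += dt * (g_e * (E_e - v) - (g_pass + g_i) * v) / C
        if v >= v_th:  # threshold crossing: spike and reset
            v = v_reset
            spikes += 1
    return spikes / T

# Fig. 3B-style inhibition: alternate between 0 and 20 nS every 100 ms.
g_mod = lambda t: 20e-9 if int(t / 0.1) % 2 else 0.0
rates = [lif_rate(g_e * 1e-9, g_mod) for g_e in range(5, 21)]
```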
For low excitatory input the cell never fires, because the input current is counter-balanced by the leakage current, thus preventing the cell from reaching its firing threshold. Once the cell does start firing, firing rates first increase very fast, but then rapidly converge against a linear function whose slope is independent of g_i. When the inhibitory input is modulated in time and switches between a low-inhibition state (g_i = g_low) and a high-inhibition state (g_i = g_high), the results look different (Fig. 3B, dashed line).

[Figure 3: firing rate (spikes/s, 0-400) versus g_e (5-20 nS) for (A) constant inhibition at g_i = 0, 10 and 20 nS, and (B) g_i = 0 versus inhibition alternating between 0 and 20 nS every 100 ms.]

Figure 3: Simulations of the circuit model with constant inhibition (A) or time-modulated inhibition (dashed line in (B)). This simple single-cell model matches the psychophysics remarkably well.

The cell fires part of the time like a lowly inhibited cell, and part of the time like a highly inhibited cell, explaining why the overall firing rates resemble weighted averages of the curves for constant g_i. A comparison of the no-inhibition curve (g_i = 0) and the curve for time-modulated inhibition demonstrates that inhibition switches from a divisive mode to a subtractive mode for increasing g_e. The g_e level at which the switch occurs depends on the level of inhibition in the high-inhibition state (here g_high = 20 nS). The strength of divisive inhibition depends on the percentage of time R that the cell spends in the high-inhibition state; in the example shown as a dashed line in Fig. 3B, the cell spends on average half of the time in the high-inhibition state (thus R = 50%), and remains the rest of the time in the low-inhibition state.

3 Discussion

Both the psychophysical data and the biophysical model show a switch from divisive to subtractive inhibition. Making the connection between psychophysics and biophysics explicit requires that a number of assumptions be made: (1) the excitatory input g_e to the target unit increases with increasing target contrast; (2) increasing the flank contrast leads to an increase of g_high (to account for the fact that the transition from divisive to subtractive inhibition occurs at higher contrasts c_0); (3) the relative time spent in the g_high state (R) increases with flanker contrast (leading to a stronger divisive inhibition b that is reflected in the overall performance decrease with increasing flanker contrast). While these assumptions all seem quite plausible, there remains the question of why one would assume time-modulated inhibition in the first place. Here we suggest three different mechanisms. First, the time-modulation might reflect inhibitory input by synchronized interneurons [7], i.e., sometimes a large number of them fire at the same time (high-inhibition state) while at other times almost none of the inhibitory cells fire (low-inhibition state). A second plausible implementation (which gives very similar results) assumes that there is only one transition and that the low- and high-inhibition states follow each other sequentially (rather than flipping back and forth as suggested in Fig. 3B). Indeed, cells in primary visual cortex often show a transient response at stimulus onset (which may reflect the low-inhibition state), followed by a smaller level of sustained response (which may reflect the high-inhibition state).
In this context, R would simply reflect the time delay between the onset of excitation and inhibition (with a large R representing brief delays before inhibition sets in). Finally, low- and high-inhibition states may reflect different subtypes of neurons which receive different amounts of surround inhibition. In other words, some neurons are strongly inhibited (high-inhibition state) while others are not (low-inhibition state). The ratio of strongly inhibited units (among all units) is given by R. The mean response of all the neurons will show a divisive inhibition in the range where the inhibited neurons are shut off completely, but will show a subtractive inhibition as soon as the inhibited units start firing. To summarize on a more abstract level: any mechanism that averages firing rates of different g_i states, rather than averaging different inhibitory inputs g_i, will lead to a mechanism that shows this switch from divisive to subtractive inhibition. The remaining differences between the psychophysically estimated contrast-response functions (Fig. 1C) and the firing rates of the circuit model (Fig. 3B) seem to reflect mainly oversimplifications in the biophysical model. Saturation at high g_e values, for instance, could be achieved by assuming refractory periods or other firing-rate adaptation mechanisms. The very steep slope directly after the switch from divisive to subtractive inhibition would disappear if the simple integrate-and-fire unit were replaced by a more realistic unit in which - due to stochastic linearization - the firing rate rises more gradually once the threshold is crossed. In any case, one does not expect a precise match between the two functions, as psychophysical performance presumably relies on a variety of different neurons with different dynamic ranges. Once the model includes many neurons, one would need to define decision strategies. We believe that such a link between a biophysical model and psychophysical data is in principle possible, but we have favored here simplicity at the expense of achieving a more quantitative match. Our analysis of the circuit model shows that the psychophysical data can be explained without assuming complex interaction patterns between different neuronal units. While we have no reason to believe that the switching mechanism from divisive to subtractive inhibition will become ineffective when considering large numbers of neurons, it does not require a large network. Our model suggests that the critical events happen at the level of individual neurons, and not in the network. Our model makes two clear predictions. First, the contrast-response function of single neurons should show - in the presence of flankers - a switch from divisive to subtractive inhibition (Fig. 1C and Fig. 3B). Physiological studies have measured how stimuli outside the classical receptive field affect the absolute response level of the target unit [8, 9]. Distinguishing subtractive and divisive inhibition, however, requires that, in addition, surround effects on the slope of the contrast-response functions are estimated. Such experiments have been carried out by Sengpiel et al. [10] in cat primary visual cortex. Their extracellular recordings show that when a target grating is surrounded by a high-contrast annulus, inhibition is indeed well described by a divisive effect on the response. It remains to be seen, however, whether surround annuli whose contrast is lower than the target contrast will act subtractively.
The second prediction is that inhibition is bistable, i.e., that there are distinct low- and high-inhibition states. These states may alternate in time within the same neuron, or they may be represented by different subsets of neurons.

Acknowledgments

We would like to thank Jochen Braun, Gary Holt, and Laurent Itti for helpful comments. The research was supported by NSF, NIMH and a Caltech Divisional Scholarship to BZ.

References

[1] U. Polat and D. Sagi. Lateral interactions between spatial channels: Suppression and facilitation revealed by lateral masking experiments. Vision Research, 33:993-999, 1993.
[2] John M. Foley. Human luminance pattern-vision mechanisms: masking experiments require a new model. Journal of the Optical Society of America A, 11:1710-1719, 1994.
[3] H. Levitt. Transformed up-down methods in psychoacoustics. The Journal of the Acoustical Society of America, 49:467-477, 1971.
[4] C.W. Tyler. Colour bit-stealing to enhance the luminance resolution of digital displays on a single pixel basis. Spatial Vision, 10(4):369-377, 1997.
[5] Gordon E. Legge. A power law for contrast discrimination. Vision Research, 21:457-467, 1981.
[6] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C. Cambridge University Press, 1992.
[7] W. Singer and C.M. Gray. Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18:555-586, 1995.
[8] J.B. Levitt and J.S. Lund. Contrast dependence of contextual effects in primate visual cortex. Nature, 387:73-76, 1997.
[9] U. Polat, K. Mizobe, M.W. Pettet, T. Kasamatsu, and A.M. Norcia. Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature, 391:580-584, 1998.
[10] F. Sengpiel, R.J. Baddeley, T.C.B. Freeman, R. Harrad, and C. Blakemore. Different mechanisms underlie three inhibitory phenomena in cat area 17. Vision Research, 38(14):2067-2080, 1998.
The Use of Classifiers in Sequential Inference

Vasin Punyakanok    Dan Roth
Department of Computer Science
University of Illinois at Urbana-Champaign, Urbana, IL 61801
punyakan@cs.uiuc.edu    danr@cs.uiuc.edu

Abstract

We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem - identifying phrase structure. The first is a Markovian approach that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies. The second is an extension of constraint satisfaction formalisms. We develop efficient combination algorithms under both models and study them experimentally in the context of shallow parsing.

1 Introduction

In many situations it is necessary to make decisions that depend on the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints - the sequential nature of the data or other domain-specific constraints. Consider, for example, the problem of chunking natural language sentences, where the goal is to identify several kinds of phrases (e.g., noun phrases, verb phrases) in sentences. A task of this sort involves multiple predictions that interact in some way. For example, one way to address the problem is to utilize two classifiers for each phrase type, one of which recognizes the beginning of the phrase, and the other its end. Clearly, there are constraints over the predictions; for instance, phrases cannot overlap and there are probabilistic constraints over the order of phrases and their lengths. The above-mentioned problem is an instance of a general class of problems - identifying the phrase structure in sequential data. This paper develops two general approaches for this class of problems by utilizing general classifiers and performing inferences with their outcomes. Our formalisms apply directly to natural language problems such as shallow parsing [7, 23, 5, 3, 21], computational biology problems such as identifying splice sites [8, 4, 15], and problems in information extraction [9]. Our first approach is within a Markovian framework. In this case, classifiers are functions of the observation sequence and their outcomes represent states; we study two Markov models that are used as inference procedures and differ in the type of classifiers and the details of the probabilistic modeling. The critical shortcoming of this framework is that it attempts to maximize the likelihood of the state sequence - not the true performance measure of interest but only a derivative of it. The second approach extends a constraint satisfaction formalism to deal with variables that are associated with costs and shows how to use this to model the classifier combination problem. In this approach general constraints can be incorporated flexibly and algorithms can be developed that closely address the true global optimization criterion of interest. For both approaches we develop efficient combination algorithms that use general classifiers to yield the inference. The approaches are studied experimentally in the context of shallow parsing - the task of identifying syntactic sequences in sentences [14, 1, 11] - which has been found useful in many large-scale language processing applications including information extraction and text summarization [12, 2].
Working within a concrete task allows us to compare the approaches experimentally for phrase types such as base Noun Phrases (NPs) and Subject-Verb phrases (SVs) that differ significantly in their statistical properties, including length and internal dependencies. Thus, the robustness of the approaches to deviations from their assumptions can be evaluated. Our two main methods, projection-based Markov Models (PMM) and constraint satisfaction with classifiers (CSCL), are shown to perform very well on the task of predicting NP and SV phrases, with CSCL at least as good as any other method tried on these tasks. CSCL performs better than PMM on both tasks, more significantly so on the harder, SV, task. We attribute it to CSCL's ability to cope better with the length of the phrase and the long-term dependencies. Our experiments make use of the SNoW classifier [6, 24] and we provide a way to combine its scores in a probabilistic framework; we also exhibit the improvements of the standard hidden Markov model (HMM) when allowing states to depend on a richer structure of the observation via the use of classifiers.

2 Identifying Phrase Structure

The inference problem considered can be formalized as that of identifying the phrase structure of an input string. Given an input string O = <o_1, o_2, ..., o_n>, a phrase is a substring of consecutive input symbols o_i, o_{i+1}, ..., o_j. Some external mechanism is assumed to consistently (or stochastically) annotate substrings as phrases¹. Our goal is to come up with a mechanism that, given an input string, identifies the phrases in this string. The identification mechanism works by using classifiers that attempt to recognize in the input string local signals which are indicative of the existence of a phrase. We assume that the outcome of the classifier at input symbol o can be represented as a function of the local context of o in the input string, perhaps with the aid of some external information inferred from it². Classifiers can indicate that an input symbol o is inside or outside a phrase (IO modeling) or they can indicate that an input symbol o opens or closes a phrase (the OC modeling) or some combination of the two. Our work here focuses on OC modeling, which has been shown to be more robust than the IO modeling, especially with fairly long phrases [21]. In any case, the classifiers' outcomes can be combined to determine the phrases in the input string. This process, however, needs to satisfy some constraints for the resulting set of phrases to be legitimate. Several types of constraints, such as length, order and others, can be formalized and incorporated into the approaches studied here. The goal is thus twofold: to learn classifiers that recognize the local signals and to combine them in a way that respects the constraints. We call the inference algorithm that combines the classifiers and outputs a coherent phrase structure a combinator. The performance of this process is measured by how accurately it retrieves the phrase structure of the input string. This is quantified in terms of recall - the percentage of phrases that are correctly identified - and precision - the percentage of identified phrases that are indeed correct phrases.

¹ We assume here a single type of phrase, and thus each input symbol is either in a phrase or outside it. All the methods can be extended to deal with several kinds of phrases in a string.
² In the case of natural language processing, if the o_i's are words in a sentence, additional information might include morphological information, part-of-speech tags, semantic class information from WordNet, etc. This information can be assumed to be encoded into the observed sequence.

3 Markov Modeling

An HMM is a probabilistic finite state automaton that models the probabilistic generation of sequential processes. It consists of a finite set S of states, a set O of observations, an initial state distribution P_1(s), a state-transition distribution P(s|s') (s, s' ∈ S) and an observation distribution P(o|s) (o ∈ O, s ∈ S). A sequence of observations is generated by first picking an initial state according to P_1(s); this state produces an observation according to P(o|s) and transits to a new state according to P(s|s'). This state produces the next observation, and the process goes on until it reaches a designated final state [22]. In a supervised learning task, an observation sequence O = <o_1, o_2, ..., o_n> is supervised by a corresponding state sequence S = <s_1, s_2, ..., s_n>. This allows one to estimate the HMM parameters and then, given a new observation sequence, to identify the most likely corresponding state sequence. The supervision can also be supplied (see Sec. 2) using local signals from which the state sequence can be recovered. Constraints can be incorporated into the HMM by constraining the state-transition probability distribution P(s|s'). For example, set P(s|s') = 0 for all s, s' such that the transition from s' to s is not allowed.

3.1 A Hidden Markov Model Combinator

To recover the most likely state sequence in an HMM, we wish to estimate all the required probability distributions. As in Sec. 2 we assume to have local signals that indicate the state. That is, we are given classifiers with states as their outcomes. Formally, we assume that P_t(s|o) is given, where t is the time step in the sequence. In order to use this information in the HMM framework, we compute P_t(o|s) = P_t(s|o)P_t(o)/P_t(s). That is, instead of observing the conditional probability P_t(o|s) directly from training data, we compute it from the classifiers' output. Notice that in an HMM, the assumption is that the probability distributions are stationary. We can assume this for P_t(s|o), which we obtain from the classifier, but need not assume it for the other distributions, P_t(o) and P_t(s). P_t(s) can be calculated by P_t(s) = Σ_{s'∈S} P(s|s') P_{t-1}(s'), where P_1(s) and P(s|s') are the two required distributions for the HMM. We still need P_t(o), which is harder to approximate, but, for each t, it can be treated as a constant η_t because the goal is to find the most likely sequence of states for the given observations, which are the same for all compared sequences. With this scheme, we can still combine the classifiers' predictions by finding the most likely sequence for an observation sequence using dynamic programming. To do so, we incorporate the classifiers' opinions in its recursive step by computing P(o_t|s) as above:

  δ_t(s) = max_{s'∈S} δ_{t-1}(s') P(s|s') P(o_t|s) = max_{s'∈S} δ_{t-1}(s') P(s|s') P(s|o_t) η_t / P_t(s).

This is derived using the HMM assumptions but utilizes the classifier outputs P(s|o), allowing us to extend the notion of an observation. In Sec. 6 we estimate P(s|o) based on a whole observation sequence rather than o_t, to significantly improve the performance.
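A compact sketch of this combinator, in our own rendering (the array layout and function name are illustrative assumptions). It implements the recursion above with the constant η_t dropped, since it does not change the argmax.

```python
import numpy as np

def hmm_combinator(p_state_given_obs, trans, p1):
    """Most likely state sequence from classifier outputs.

    p_state_given_obs[t, s] is the classifier estimate P_t(s | o_t),
    trans[s_prev, s] is P(s | s_prev), and p1[s] is P_1(s).
    """
    T, S = p_state_given_obs.shape
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = p_state_given_obs[0]   # delta_1(s) is P(s | o_1) up to a constant
    p_t = p1
    for t in range(1, T):
        p_t = trans.T @ p_t           # P_t(s) = sum_s' P(s|s') P_{t-1}(s')
        score = (delta[t - 1][:, None] * trans
                 * (p_state_given_obs[t] / p_t)[None, :])
        back[t] = score.argmax(axis=0)
        delta[t] = score.max(axis=0)
    path = [int(delta[-1].argmax())]  # backtrack the best state sequence
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```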
Equivalently, the constraints structure is restricted by having a stationary probability distribution of a state given the previous one. We attempt to relax this by allowing the distribution of a state to depend, in addition to the previous state, on the observation. Formally, we now make the following independence assumption: P(StISt-l,St-2, ... ,Sl,Ot,Ot-l, ... ,od = P(stlst-l,ot). Thus, given an observation sequence 0 we can find the most likely state sequence S given 0 by maximizing n n t=2 t=2 Hence, this model generalizes the standard HMM by combining the state-transition probability and the observation probability into one function. The most likely state sequence can still be recovered using the dynamic programming (Viterbi) algorithm if we modify the recursive step: 8t ( s) = maxs'ES 8t - l (s')P( sis', Ot). In this model, the classifiers' decisions are incorporated in the terms P(sls', 0) and Pl(slo) . To learn these classifiers we follow the projection approach [26] and separate P( sis', 0) to many functions Ps ' (slo) according to the previous states s'. Hence as many as IS I classifiers, projected on the previous states, are separately trained. (Therefore the name "Projection based Markov model (PMM)".) Since these are simpler classifiers we hope that the performance will improve. As before, the question of what constitutes an observation is an issue. Sec. 6 exhibits the contribution of estimating Ps ' (s 10) using a wider window in the observation sequence. 3.3 Related Work Several attempts to combine classifiers, mostly neural networks, into HMMs have been made in speech recognition works in the last decade [20]. A recent work [19] is similar to our PMM but is using maximum entropy classifiers. In both cases, the attempt to combine classifiers with Markov models is motivated by an attempt to improve the existing Markov models; the belief is that this would yield better generalization than the pure observation probability estimation from the training data. Our motivation is different. The starting point is the existence of general classifiers that provide some local information on the input sequence along with constraints on their outcomes; our goal is to use the classifiers to infer the phrase structure of the sequence in a way that satisfies the constraints. Using Markov models is only one possibility and, as mentioned earlier, not one the optimizes the real performance measure of interest. Technically, another novelty worth mentioning is that we use a wider range of observations instead of a single observation to predict a state. This certainly violates the assumption underlying HMMs but improves the performance. 4 Constraints Satisfaction with Classifiers This section describes a different model that is based on an extension of the Boolean constraint satisfaction (CSP) formalism [17] to handle variables that are the outcome of classifiers. As before, we assume an observed string 0 =< 01,02, . .. On > and local classifiers that, without loss of generality, take two distinct values, one indicating openning a phrase and a second indicating closing it (OC modeling). The classifiers provide their output in terms of the probability P(o) and P(c), given the observation. We extend the CSP formalism to deal with probabilistic variables (or, more generally, variables with cost) as follows. Let V be the set of Boolean variables associated with the problem, IVI = n. 
The constraints are encoded as clauses and, as in standard CSP modeling, the Boolean CSP becomes a CNF (conjunctive normal form) formula f. Our problem, however, is not simply to find an assignment τ : V → {0, 1} that satisfies f but rather the following optimization problem. We associate a cost function c : V → R with each variable, and try to find a solution τ of f of minimum cost, c(τ) = Σ_{i=1}^{n} τ(v_i) c(v_i). One efficient way to use this general scheme is by encoding phrases as variables. Let E be the set of all possible phrases. Then, all the non-overlapping constraints can be encoded in ∧_{e_i overlaps e_j} (¬e_i ∨ ¬e_j). This yields a quadratic number of variables, and the constraints are binary, encoding the restriction that phrases do not overlap. A satisfying assignment for the resulting 2-CNF formula can therefore be computed in polynomial time, but the corresponding optimization problem is still NP-hard [13]. For the specific case of phrase structure, however, we can find the optimal solution in linear time. The solution to the optimization problem corresponds to a shortest path in a directed acyclic graph constructed on the observation symbols, with legitimate phrases (the variables of the CSP) as its edges and their costs as the edges' weights. The construction of the graph takes quadratic time and corresponds to constructing the 2-CNF formula above. It is not hard to see (details omitted) that each path in this graph corresponds to a satisfying assignment and the shortest path corresponds to the optimal solution. The time complexity of this algorithm is linear in the size of the graph. The main difficulty here is to determine the cost c as a function of the confidence given by the classifiers. Our experiments revealed, though, that the algorithm is robust to reasonable modifications in the cost function. A natural cost function is to use the classifiers' probabilities P(o) and P(c) and define, for a phrase e = (o, c), c(e) = 1 - P(o)P(c). The interpretation is that the error in selecting e is the error in selecting either o or c, and allowing those to overlap³. The constant in 1 - P(o)P(c) biases the minimization toward selecting few phrases, so instead we minimize -P(o)P(c). A minimal sketch of the resulting shortest-path search is given below.
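The sketch uses our own conventions: nodes 0..n sit between the input symbols, an edge (i, j) stands for a candidate phrase covering symbols i..j-1 with weight -P(o_i)P(c_{j-1}), and zero-weight edges (i, i+1) let the path skip a symbol. The optional length cap is an illustrative assumption, not part of the paper.

```python
def best_phrases(p_open, p_close, max_len=None):
    """Shortest path over the phrase DAG; returns non-overlapping
    (start, end) pairs minimizing the total cost sum -p_open[i]*p_close[j]."""
    n = len(p_open)
    best = [0.0] * (n + 1)     # best[j]: cheapest path from node 0 to node j
    choice = [None] * (n + 1)  # phrase start chosen at node j, or None (skip)
    for j in range(1, n + 1):
        best[j], choice[j] = best[j - 1], None       # skip symbol j-1
        lo = 0 if max_len is None else max(0, j - max_len)
        for i in range(lo, j):                       # phrase over symbols i..j-1
            w = best[i] - p_open[i] * p_close[j - 1]
            if w < best[j]:
                best[j], choice[j] = w, i
    phrases, j = [], n                               # recover the chosen edges
    while j > 0:
        if choice[j] is None:
            j -= 1
        else:
            phrases.append((choice[j], j - 1))
            j = choice[j]
    return phrases[::-1]
```

One left-to-right pass suffices because the graph is acyclic and its nodes are already in topological order.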
5 Shallow Parsing

We use shallow parsing tasks in order to evaluate our approaches. Shallow parsing involves the identification of phrases or of words that participate in a syntactic relationship. The observation that shallow syntactic information can be extracted using local information - by examining the pattern itself, its nearby context and the local part-of-speech information - has motivated the use of learning methods to recognize these patterns [7, 23, 3, 5]. In this work we study the identification of two types of phrases, base Noun Phrases (NP) and Subject-Verb (SV) patterns. We chose these since they differ significantly in their structural and statistical properties, and this allows us to study the robustness of our methods to several assumptions. As in previous work on this problem, this evaluation is concerned with identifying one-layer NP and SV phrases, with no embedded phrases. We use the OC modeling and learn two classifiers: one predicting whether there should be an open in location t or not, and the other whether there should be a close in location t or not. For technical reasons the cases ¬o and ¬c are separated according to whether we are inside or outside a phrase. Consequently, each classifier may output three possible outcomes O, nOi, nOo (open, not open inside, not open outside) and C, nCi, nCo, respectively. The state-transition diagram in Figure 1 captures the order constraints. Our modeling of the problem is a modification of our earlier work on this topic that has been found to be quite successful compared to other learning methods attempted on this problem [21].

Figure 1: State-transition diagram for the phrase recognition problem.

5.1 Classification

The classifier we use to learn the states as a function of the observation is SNoW [24, 6], a multi-class classifier that is specifically tailored for large-scale learning tasks. The SNoW learning architecture learns a sparse network of linear functions, in which the targets (states, in this case) are represented as linear functions over a common feature space. SNoW has already been used successfully for a variety of tasks in natural language and visual processing [10, 25]. Typically, SNoW is used as a classifier, and predicts using a winner-take-all mechanism over the activation values of the target classes. The activation value is computed using a sigmoid function over the linear sum. In the current study we normalize the activation levels of all targets to sum to 1 and output the outcomes for all targets (states). We verified experimentally on the training data that the output for each state is indeed a distribution function and can be used in further processing as P(s|o) (details omitted).

³ It is also possible to account for the classifiers' suggestions inside each phrase; details omitted.

6 Experiments

We experimented both with NPs and SVs, and we show results for two different representations of the observations (that is, different feature sets for the classifiers): part-of-speech (POS) information only, and POS with additional lexical information (words). The result of interest is F_β = (β² + 1)·Recall·Precision / (β²·Precision + Recall) (here β = 1). The data sets used are the standard data sets for this problem [23, 3, 21], taken from the Wall Street Journal corpus in the Penn Treebank [18]. For NP, the training and test corpora were prepared from sections 15-18 and section 20, respectively; the SV phrase corpus was prepared from sections 1-9 for training and section 0 for testing. For each model we study three different classifiers. The Simple classifier corresponds to the standard HMM, in which P(o|s) is estimated directly from the data. When the observations are in terms of lexical items, the data is too sparse to yield robust estimates and these entries were left empty. The NB (naive Bayes) and SNoW classifiers use the same feature set, conjunctions of size 3 of POS tags (POS and words, resp.) in a window of size 6.

Table 1: Results (F_{β=1}) of different methods on NP and SV recognition

                          NP                              SV
  Model  Classifier   POS tags only  POS tags+words   POS tags only  POS tags+words
  HMM    SNoW         90.64          92.89            64.15          77.54
         NB           90.50          92.26            75.40          78.43
         Simple       87.83          -                64.85          -
  PMM    SNoW         90.61          92.98            74.98          86.07
         NB           90.22          91.98            74.80          84.80
         Simple       61.44          -                40.18          -
  CSCL   SNoW         90.87          92.88            85.36          90.09
         NB           90.49          91.95            80.63          88.28
         Simple       54.42          -                59.27          -
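For reference, the score reported in Table 1 is just this one-line helper (ours):

```python
def f_beta(precision, recall, beta=1.0):
    """F-measure of Sec. 6; Table 1 reports the beta = 1 case."""
    return (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)
```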
What is interesting here is the very significant sensitivity to the feature base of the classifiers used, despite the violation of the probabilistic assumptions. For the easier NP task, the HMM model is competitive with the others when the classifiers used are NB or SNoW. In particular, the fact that the significant improvements both probabilistic methods achieve when their input is given by SNoW confirms the claim that the output of SNoW can be used reliably as a probabilistic classifier. PMM and CSCL perform very well on predicting NP and SV phrases with CSCL at least as good as any other methods tried on these tasks. Both for NPs and SVs, CSCL performs better than the others, more significantly on the harder, SV, task. We attribute it to CSCL's ability to cope better with the length of the phrase and the long term dependencies. 7 Conclusion We have addressed the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. This can be viewed as a concrete instantiation of the Learning to Reason framework [16]. The focus here is on an important subproblem, the identification of phrase structure. We presented two approachs: a probabilistic framework that extends HMMs in two ways and an approach that is based on an extension of the CSP formalism. In both cases we developed efficient combination algorithms and studied them empirically. It seems that the CSP formalisms can support the desired performance measure as well as complex constraints and dependencies more flexibly than the Markovian approach. This is supported by the experimental results that show that CSCL yields better results, in particular, for the more complex case of SV phrases. As a side effect, this work exhibits the use of general classifiers within a probabilistic framework. Future work includes extensions to deal with more general constraints by exploiting more general probabilistic structures and generalizing the CSP approach. Acknowledgments This research is supported by NSF grants IIS-9801638 and IIS-9984168. References [1] S. P. Abney. Parsing by chunks. In S. P. A. R. C. Berwick and C. Tenny, editors, Principle-based parsing: Computation and Psycho linguistics, IJages 257-278. Kluwer, Dordrecht, 1991. [2] D. Appelt, J. Hobbs, J. Bear, D. Israel , and Nt Tyson. FASTUS: A finite-state processor for information extraction from real-world text. In Proc. of IJCAl, 1993. [3] S. Argamon, 1. Dagan, and Y. Krymolowski. A memory-based approach to learning shallow natural language patterns. Journal of Experimental and Theoretical Artificial Intelligence, special issue on memory-based learning, 10:1- 22, 1999. [4] C. Burge and S. Karlin. Finding the genes in genomic DNA. Current Opinion in Structural Biology, 8:346- 354, 1998. [5] C. Cardie and D. Pierce. Error-driven pruning of treebanks grammars for base noun phrase identification. In Proceedings of ACL-98, pages 218- 224, 1998. [6] A. Carlson, C. Cumby, J. Rosen, and D. Roth. The SNoW learning architecture. Technical Report UillCDCS-R-99-2101, UillC Computer Science Department, May 1999. [7] K. W. Church. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. of ACL Conference on Applied Natural Language Processing, 1988. [8] 1: W. Fickett. The gene identification problem: An overview for developers. Computers and Chemistry, 20:103- 118,1996. [9] D. Freitag and A. McCallum. Information extraction using HMMs and shrinkage. 
In Papers from the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 31-36, 1999.
[10] A. R. Golding and D. Roth. A Winnow-based approach to context-sensitive spelling correction. Machine Learning, 34(1-3):107-130, 1999.
[11] G. Greffenstette. Evaluation techniques for automatic semantic extraction: comparing semantic and window based approaches. In ACL'93 Workshop on the Acquisition of Lexical Knowledge from Text, 1993.
[12] R. Grishman. The NYU system for MUC-6 or where's the syntax? In B. Sundheim, editor, Proceedings of the Sixth Message Understanding Conference. Morgan Kaufmann Publishers, 1995.
[13] D. Gusfield and L. Pitt. A bounded approximation for the minimum cost 2-SAT problem. Algorithmica, 8:103-117, 1992.
[14] Z. S. Harris. Co-occurrence and transformation in linguistic structure. Language, 33(3):283-340, 1957.
[15] D. Haussler. Computational genefinding. Trends in Biochemical Sciences, Supplementary Guide to Bioinformatics, pages 12-15, 1998.
[16] R. Khardon and D. Roth. Learning to reason. Journal of the ACM, 44(5):697-725, Sept. 1997.
[17] A. Mackworth. Constraint satisfaction. In S. C. Shapiro, editor, Encyclopedia of Artificial Intelligence, pages 285-293, 1992. Volume 1, second edition.
[18] M. P. Marcus, B. Santorini, and M. Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, June 1993.
[19] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of ICML-2000, 2000. To appear.
[20] N. Morgan and H. Bourlard. Continuous speech recognition. IEEE Signal Processing Magazine, 12(3):24-42, 1995.
[21] M. Munoz, V. Punyakanok, D. Roth, and D. Zimak. A learning approach to shallow parsing. In EMNLP-VLC'99, 1999.
[22] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, 1989.
[23] L. A. Ramshaw and M. P. Marcus. Text chunking using transformation-based learning. In Proceedings of the Third Annual Workshop on Very Large Corpora, 1995.
[24] D. Roth. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the National Conference on Artificial Intelligence, pages 806-813, 1998.
[25] D. Roth, M.-H. Yang, and N. Ahuja. Learning to recognize objects. In CVPR'00, The IEEE Conference on Computer Vision and Pattern Recognition, pages 724-731, 2000.
[26] L. G. Valiant. Projection learning. In Proceedings of the Conference on Computational Learning Theory, pages 287-293, 1998.
The Unscented Particle Filter

Rudolph van der Merwe
Oregon Graduate Institute
Electrical and Computer Engineering
P.O. Box 91000, Portland, OR 97006, USA
rvdmerwe@ece.ogi.edu

Nando de Freitas
UC Berkeley, Computer Science
387 Soda Hall, Berkeley, CA 94720-1776, USA
jfgf@cs.berkeley.edu

Arnaud Doucet
Cambridge University Engineering Department
Cambridge CB2 1PZ, England
ad2@eng.cam.ac.uk

Eric Wan
Oregon Graduate Institute
Electrical and Computer Engineering
P.O. Box 91000, Portland, OR 97006, USA
ericwan@ece.ogi.edu

Abstract

In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very "nice" properties. Firstly, it makes efficient use of the latest available information and, secondly, it can have heavy tails. As a result, we find that the algorithm outperforms standard particle filtering and other nonlinear filtering methods very substantially. This experimental finding is in agreement with the theoretical convergence proof for the algorithm. The algorithm also includes resampling and (possibly) Markov chain Monte Carlo (MCMC) steps.

1 Introduction

Filtering is the problem of estimating the states (parameters or hidden variables) of a system as a set of observations becomes available on-line. This problem is of paramount importance in many fields of science, engineering and finance. To solve it, one begins by modelling the evolution of the system and the noise in the measurements. The resulting models typically exhibit complex nonlinearities and non-Gaussian distributions, thus precluding analytical solution. The best known algorithm to solve the problem of non-Gaussian, nonlinear filtering (filtering for short) is the extended Kalman filter (EKF) (Anderson and Moore 1979). This filter is based upon the principle of linearising the measurement and evolution models using Taylor series expansions. The series approximations in the EKF algorithm can, however, lead to poor representations of the nonlinear functions and probability distributions of interest. As a result, this filter can diverge. Recently, Julier and Uhlmann (Julier and Uhlmann 1997) have introduced a filter founded on the intuition that it is easier to approximate a Gaussian distribution than it is to approximate arbitrary nonlinear functions. They named this filter the unscented Kalman filter (UKF). They have shown that the UKF leads to more accurate results than the EKF and that in particular it generates much better estimates of the covariance of the states (the EKF seems to underestimate this quantity). The UKF has, however, the limitation that it does not apply to general non-Gaussian distributions. Another popular solution strategy for the general filtering problem is to use sequential Monte Carlo methods, also known as particle filters (PFs): see for example (Doucet, Godsill and Andrieu 2000, Doucet, de Freitas and Gordon 2001, Gordon, Salmond and Smith 1993). These methods allow for a complete representation of the posterior distribution of the states, so that any statistical estimates, such as the mean, modes, kurtosis and variance, can be easily computed. They can, therefore, deal with any nonlinearities or distributions. PFs rely on importance sampling and, as a result, require the design of proposal distributions that can approximate the posterior distribution reasonably well. In general, it is hard to design such proposals.
The most common strategy is to sample from the probabilistic model of the states evolution (transition prior). This strategy can, however, fail if the new measurements appear in the tail of the prior or if the likelihood is too peaked in comparison to the prior. This situation does indeed arise in several areas of engineering and finance, where one can encounter sensors that are very accurate (peaked likelihoods) or data that undergoes sudden changes (nonstationarities): see for example (Pitt and Shephard 1999, Thrun 2000). To overcome this problem, several techniques based on linearisation have been proposed in the literature (de Freitas 1999, de Freitas, Niranjan, Gee and Doucet 2000, Doucet et al. 2000, Pitt and Shephard 1999). For example, in (de Freitas et al. 2000), the EKF Gaussian approximation is used as the proposal distribution for a PF. In this paper, we follow the same approach, but replace the EKF proposal by a UKF proposal. The resulting filter should perform better not only because the UKF is more accurate, but because it also allows one to control the rate at which the tails of the proposal distribution go to zero. It thus becomes possible to adopt heavier-tailed distributions as proposals and, consequently, obtain better importance samplers (Gelman, Carlin, Stern and Rubin 1995). Readers are encouraged to consult our technical report for further results and implementation details (van der Merwe, Doucet, de Freitas and Wan 2000; the TR and software are available at http://www.cs.berkeley.edu/~jfgf).

2 Dynamic State Space Model

We apply our algorithm to general state space models consisting of a transition equation p(x_t|x_{t-1}) and a measurement equation p(y_t|x_t). That is, the states follow a Markov process and the observations are assumed to be independent given the states. For example, if we are interested in nonlinear, non-Gaussian regression, the model can be expressed as follows:

    x_t = f(x_{t-1}, v_{t-1})
    y_t = h(u_t, x_t, n_t)

where u_t in R^{n_u} denotes the input data at time t, x_t in R^{n_x} denotes the states (or parameters) of the model, y_t in R^{n_y} the observations, v_t in R^{n_v} the process noise and n_t in R^{n_n} the measurement noise. The mappings f : R^{n_x} x R^{n_v} -> R^{n_x} and h : (R^{n_x} x R^{n_u}) x R^{n_n} -> R^{n_y} represent the deterministic process and measurement models. To complete the specification of the model, the prior distribution (at t = 0) is denoted by p(x_0). Our goal will be to approximate the posterior distribution p(x_{0:t}|y_{1:t}) and one of its marginals, the filtering density p(x_t|y_{1:t}), where y_{1:t} = {y_1, y_2, ..., y_t}. By computing the filtering density recursively, we do not need to keep track of the complete history of the states.

3 Particle Filtering

Particle filters allow us to approximate the posterior distribution p(x_{0:t}|y_{1:t}) using a set of N weighted samples (particles) {x_{0:t}^(i), i = 1, ..., N}, which are drawn from an importance proposal distribution q(x_{0:t}|y_{1:t}). These samples are propagated in time as shown in Figure 1. In doing so, it becomes possible to map intractable integration problems (such as computing expectations and marginal distributions) to easy summations. This is done in a rigorous setting that ensures convergence according to the strong law of large numbers,

    (1/N) sum_{i=1}^N f_t(x_{0:t}^(i))  ->_{a.s.}  E_{p(x_{0:t}|y_{1:t})}[f_t(x_{0:t})]  as N -> infinity,

where ->_{a.s.} denotes almost sure convergence and f_t : R^{n_x(t+1)} -> R^{n_{f_t}} is some function of interest. For example, it could be the conditional mean, in which case f_t(x_{0:t}) = x_{0:t}, or the conditional covariance of x_t, with f_t(x_{0:t}) = x_t x_t' - E_{p(x_t|y_{1:t})}[x_t] E'_{p(x_t|y_{1:t})}[x_t].
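The mapping from intractable integrals to easy summations is concrete: given normalized importance weights, any posterior expectation is approximated by a weighted sum over particles. The following Python sketch illustrates the estimator for the conditional mean and covariance above; the helper name and the toy data are ours, not the authors'.

```python
import numpy as np

def weighted_expectation(particles, weights, f):
    """Approximate E_p[f(x)] with a weighted particle set.

    particles: array of shape (N, d), samples from the proposal q.
    weights:   array of shape (N,), normalized importance weights.
    f:         function mapping one particle to a scalar/array.
    """
    values = np.array([f(x) for x in particles])
    return np.tensordot(weights, values, axes=1)   # sum_i w_i f(x_i)

# Toy example: posterior mean and covariance from an arbitrary particle set.
rng = np.random.default_rng(0)
parts = rng.normal(size=(200, 1))
w = rng.random(200)
w /= w.sum()
mean = weighted_expectation(parts, w, lambda x: x)
cov = weighted_expectation(parts, w, lambda x: np.outer(x, x)) - np.outer(mean, mean)
```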
Figure 1: In this example, a particle filter starts at time t-1 with an unweighted measure {x~_{t-1}^(i), N^{-1}}, which provides an approximation of p(x_{t-1}|y_{1:t-2}). For each particle we compute the importance weights using the information at time t-1. This results in the weighted measure {x_{t-1}^(i), w_{t-1}^(i)}, which yields an approximation of p(x_{t-1}|y_{1:t-1}). Subsequently, a resampling step selects only the "fittest" particles to obtain the unweighted measure {x~_{t-1}^(i), N^{-1}}, which is still an approximation of p(x_{t-1}|y_{1:t-1}). Finally, the sampling (prediction) step introduces variety, resulting in the measure {x_t^(i), N^{-1}}.

A generic PF algorithm involves the following steps.

Generic PF

1. Sequential importance sampling step
   - For i = 1, ..., N, sample x^_t^(i) ~ q(x_t | x_{0:t-1}^(i), y_{1:t}) and update the trajectories x^_{0:t}^(i) = (x^_t^(i), x_{0:t-1}^(i)).
   - For i = 1, ..., N, evaluate the importance weights up to a normalizing constant:

         w_t^(i) = p(x^_{0:t}^(i) | y_{1:t}) / [ q(x^_t^(i) | x_{0:t-1}^(i), y_{1:t}) p(x_{0:t-1}^(i) | y_{1:t-1}) ]

   - For i = 1, ..., N, normalize the weights: w~_t^(i) = w_t^(i) / sum_{j=1}^N w_t^(j).

2. Selection step
   - Multiply/suppress samples x^_{0:t}^(i) with high/low importance weights w~_t^(i), respectively, to obtain N random samples x_{0:t}^(i) approximately distributed according to p(x_{0:t}|y_{1:t}).

3. MCMC step
   - Apply a Markov transition kernel with invariant distribution given by p(x_{0:t}|y_{1:t}) to obtain x_{0:t}^(i).

In the above algorithm, we can restrict ourselves to importance functions of the form

    q(x_{0:t}|y_{1:t}) = q(x_0) prod_{k=1}^t q(x_k | y_{1:k}, x_{1:k-1})

to obtain a recursive formula to evaluate the importance weights:

    w_t proportional to p(y_t | y_{1:t-1}, x_{0:t}) p(x_t | x_{t-1}) / q(x_t | y_{1:t}, x_{1:t-1}).

There are infinitely many possible choices for q(x_{0:t}|y_{1:t}), the only condition being that its support must include that of p(x_{0:t}|y_{1:t}). The simplest choice is to sample from the prior, p(x_t|x_{t-1}), in which case the importance weight is equal to the likelihood, p(y_t|y_{1:t-1}, x_{0:t}). This is the most widely used distribution, since it is simple to compute, but it can be inefficient, since it ignores the most recent evidence, y_t. The selection (resampling) step is used to eliminate the particles having low importance weights and to multiply particles having high importance weights (Gordon et al. 1993). This is done by mapping the weighted measure {x_t^(i), w~_t^(i)} to an unweighted measure {x~_t^(i), N^{-1}} that provides an approximation of p(x_t|y_{1:t}). After the selection scheme at time t, we obtain N particles distributed marginally approximately according to p(x_{0:t}|y_{1:t}). One can, therefore, apply a Markov kernel (for example, a Metropolis or Gibbs kernel) to each particle, and the resulting distribution will still be p(x_{0:t}|y_{1:t}). This step usually allows us to obtain better results and to treat more complex models (de Freitas 1999).

4 The Unscented Particle Filter

As mentioned earlier, using the transition prior as proposal distribution can be inefficient. As illustrated in Figure 2, if we fail to use the latest available information to propose new values for the states, only a few particles might survive. It is therefore of paramount importance to move the particles towards the regions of high likelihood. To achieve this, we propose to use the unscented filter as proposal distribution. This simply requires that we propagate the sufficient statistics of the UKF for each particle. For exact details, please refer to our technical report (van der Merwe et al. 2000).
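To make the three steps concrete, here is a minimal Python sketch of the generic PF with the transition-prior proposal (so the weight reduces to the likelihood) and multinomial selection. The function names and signatures are ours, not the authors' code, and the MCMC move step is omitted.

```python
import numpy as np

def particle_filter(y, f_sample, likelihood, x0_sample, n_particles=200, rng=None):
    """Minimal sequential-importance-sampling filter with resampling.

    y:          observations y_1..y_T (iterable).
    f_sample:   vectorized draw x_t ~ p(x_t | x_{t-1}) (transition-prior proposal).
    likelihood: vectorized evaluation of p(y_t | x_t).
    x0_sample:  draws n_particles states from the prior p(x_0).
    Returns the filtered posterior-mean estimate at each time step.
    """
    rng = rng or np.random.default_rng()
    x = x0_sample(n_particles)                  # (N,) particle states
    means = []
    for t, yt in enumerate(y):
        x = f_sample(x, t)                      # importance sampling from the prior
        w = likelihood(yt, x, t)                # weight = likelihood when q = prior
        w = w / w.sum()                         # normalize the weights
        means.append(np.dot(w, x))              # E[x_t | y_{1:t}] estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                              # multinomial selection (resampling)
    return np.array(means)
```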
Figure 2: The UKF proposal distribution allows us to move the samples in the prior to regions of high likelihood. This is of paramount importance if the likelihood happens to lie in one of the tails of the prior distribution, or if it is too narrow (low measurement error).

5 Theoretical Convergence

Let B(R^n) be the space of bounded, Borel measurable functions on R^n. We denote ||f|| = sup_{x in R^n} |f(x)|. The following theorem is a straightforward extension of previous results in (Crisan and Doucet 2000).

Theorem 1 If the importance weight

    w_t proportional to p(y_t|x_t) p(x_t|x_{t-1}) / q(x_t|x_{0:t-1}, y_{1:t})    (1)

is upper bounded for any (x_{t-1}, y_t), then, for all t >= 0, there exists C_t independent of N, such that for any f_t in B(R^{n_x(t+1)})

    E[ ( (1/N) sum_{i=1}^N f_t(x_{0:t}^(i)) - integral f_t(x_{0:t}) p(x_{0:t}|y_{1:t}) dx_{0:t} )^2 ] <= C_t ||f_t||^2 / N.    (2)

The expectation in equation 2 is with respect to the randomness introduced by the particle filtering algorithm. This convergence result shows that, under very loose assumptions, convergence of the (unscented) particle filter is ensured and that the convergence rate of the method is independent of the dimension of the state space. The only crucial assumption is to ensure that w_t is upper bounded, that is, that the proposal distribution q(x_t|x_{0:t-1}, y_{1:t}) has heavier tails than p(y_t|x_t) p(x_t|x_{t-1}). Considering this theoretical result, it is not surprising that the UKF (which has heavier tails than the EKF) can yield better estimates.

6 Demonstration

For this experiment, a time series is generated by the following process model:

    x_{t+1} = 1 + sin(omega * pi * t) + phi * x_t + v_t,

where v_t is a Gamma(3,2) random variable modeling the process noise, and omega = 4e-2 and phi = 0.5 are scalar parameters. A non-stationary observation model, with one functional form for t <= 30 and a different one for t > 30, is used. The observation noise, n_t, is drawn from a zero-mean Gaussian distribution. Given only the noisy observations, y_t, a few different filters were used to estimate the underlying clean state sequence x_t for t = 1, ..., 60. The experiment was repeated 100 times with random re-initialization for each run. All of the particle filters used 200 particles. Table 1 summarizes the performance of the different filters; it shows the means and variances of the mean-square error (MSE) of the state estimates. Note that MCMC could improve results in other situations.

    Algorithm                                                    MSE mean   MSE var
    Extended Kalman Filter (EKF)                                 0.374      0.015
    Unscented Kalman Filter (UKF)                                0.280      0.012
    Particle Filter: generic                                     0.424      0.053
    Particle Filter: MCMC move step                              0.417      0.055
    Particle Filter: EKF proposal                                0.310      0.016
    Particle Filter: EKF proposal and MCMC move step             0.307      0.015
    Particle Filter: UKF proposal ("Unscented Particle Filter")  0.070      0.006
    Particle Filter: UKF proposal and MCMC move step             0.074      0.008

Table 1: Mean and variance of the MSE calculated over 100 independent runs.

Figure 3 compares the estimates generated from a single run of the different particle filters. The superior performance of the unscented particle filter is clearly evident.

Figure 3: Plot of the state estimates generated by different filters.

Figure 4 shows the estimates of the state covariance generated by a stand-alone EKF and UKF for this problem. Notice how the EKF's estimates are consistently smaller than those generated by the UKF. This property makes the UKF better suited than the EKF for proposal distribution generation within the particle filter framework.
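A sketch of the data-generating process for this demonstration follows. The process equation is fully specified in the text; the two-regime observation equation is garbled in this copy, so the quadratic/linear pair below, the Gamma shape/scale convention, and the noise variance are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
T, omega, phi = 60, 4e-2, 0.5

x = np.zeros(T + 1)
y = np.zeros(T + 1)
for t in range(T):
    v = rng.gamma(shape=3.0, scale=2.0)       # Gamma(3,2) process noise (convention assumed)
    x[t + 1] = 1 + np.sin(omega * np.pi * t) + phi * x[t] + v
    n = rng.normal(0.0, 0.1)                  # zero-mean Gaussian observation noise (variance assumed)
    # Non-stationary observation model: the paper switches regimes at t = 30.
    # The exact equations are lost in this copy; this quadratic/linear pair
    # is an assumed stand-in.
    if t + 1 <= 30:
        y[t + 1] = 0.2 * x[t + 1] ** 2 + n
    else:
        y[t + 1] = 0.5 * x[t + 1] - 2 + n
```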
Figure 4: EKF and UKF estimates of state covariance.

7 Conclusions

We proposed a new particle filter that uses unscented filters as proposal distributions. The convergence proof and empirical evidence clearly demonstrate that this algorithm can lead to substantial improvements over other nonlinear filtering algorithms. The algorithm is well suited for engineering applications, where the sensors are very accurate but nonlinear, and for financial time series, where outliers and heavy-tailed distributions play a significant role in the analysis of the data. For further details and experiments, please refer to our report (van der Merwe et al. 2000).

References

Anderson, B. D. and Moore, J. B. (1979). Optimal Filtering. Prentice-Hall, New Jersey.
Crisan, D. and Doucet, A. (2000). Convergence of generalized particle filters. Technical Report CUED/F-INFENG/TR 381, Cambridge University Engineering Department.
de Freitas, J. F. G. (1999). Bayesian Methods for Neural Networks. PhD thesis, Department of Engineering, Cambridge University, Cambridge, UK.
de Freitas, J. F. G., Niranjan, M., Gee, A. H. and Doucet, A. (2000). Sequential Monte Carlo methods to train neural network models. Neural Computation 12(4): 955-993.
Doucet, A., de Freitas, J. F. G. and Gordon, N. J. (eds) (2001). Sequential Monte Carlo Methods in Practice. Springer-Verlag.
Doucet, A., Godsill, S. and Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing 10(3): 197-208.
Gelman, A., Carlin, J. B., Stern, H. S. and Rubin, D. B. (1995). Bayesian Data Analysis. Chapman and Hall.
Gordon, N. J., Salmond, D. J. and Smith, A. F. M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings-F 140(2): 107-113.
Julier, S. J. and Uhlmann, J. K. (1997). A new extension of the Kalman filter to nonlinear systems. In Proc. of AeroSense: The 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls, Orlando, Florida. Vol. Multi Sensor Fusion, Tracking and Resource Management II.
Pitt, M. K. and Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association 94(446): 590-599.
Thrun, S. (2000). Monte Carlo POMDPs. In S. Solla, T. Leen and K.-R. Müller (eds), Advances in Neural Information Processing Systems 12, MIT Press, pp. 1064-1070.
van der Merwe, R., Doucet, A., de Freitas, J. F. G. and Wan, E. (2000). The unscented particle filter. Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department.
The Interplay of Symbolic and Subsymbolic Processes in Anagram Problem Solving

David B. Grimes and Michael C. Mozer
Department of Computer Science and Institute of Cognitive Science
University of Colorado, Boulder, CO 80309-0430 USA
{grimes,mozer}@cs.colorado.edu

Abstract

Although connectionist models have provided insights into the nature of perception and motor control, connectionist accounts of higher cognition seldom go beyond an implementation of traditional symbol-processing theories. We describe a connectionist constraint satisfaction model of how people solve anagram problems. The model exploits statistics of English orthography, but also addresses the interplay of subsymbolic and symbolic computation by a mechanism that extracts approximate symbolic representations (partial orderings of letters) from subsymbolic structures and injects the extracted representation back into the model to assist in the solution of the anagram. We show the computational benefit of this extraction-injection process and discuss its relationship to conscious mental processes and working memory. We also account for experimental data concerning the difficulty of anagram solution based on the orthographic structure of the anagram string and the target word.

Historically, the mind has been viewed from two opposing computational perspectives. The symbolic perspective views the mind as a symbolic information processing engine. According to this perspective, cognition operates on representations that encode logical relationships among discrete symbolic elements, such as stacks and structured trees, and cognition involves basic operations such as means-ends analysis and best-first search. In contrast, the subsymbolic perspective views the mind as performing statistical inference, and involves basic operations such as constraint-satisfaction search. The data structures on which these operations take place are numerical vectors. In some domains of cognition, significant progress has been made through analysis from one computational perspective or the other. The thesis of our work is that many of these domains might be understood more completely by focusing on the interplay of subsymbolic and symbolic information processing. Consider the higher-cognitive domain of problem solving. At an abstract level of description, problem solving tasks can readily be formalized in terms of symbolic representations and operations. However, the neurobiological hardware that underlies human cognition appears to be subsymbolic: representations are noisy and graded, and the brain operates and adapts in a continuous fashion that is difficult to characterize in discrete symbolic terms. At some level, between the computational level of the task description and the implementation level of human neurobiology, the symbolic and subsymbolic accounts must come into contact with one another. We focus on this point of contact by proposing mechanisms by which symbolic representations can modulate subsymbolic processing, and mechanisms by which subsymbolic representations are made symbolic. We conjecture that these mechanisms can not only provide an account for the interplay of symbolic and subsymbolic processes in cognition, but that they form a sensible computational strategy that outperforms purely subsymbolic computation, and hence, symbolic reasoning makes sense from an evolutionary perspective. In this paper, we apply our approach to a high-level cognitive task, anagram problem solving.
An anagram is a nonsense string of letters whose letters can be rearranged to form a word. For example, the solution to the anagram puzzle RYTEHO is THEORY. Anagram solving is an interesting task because it taps higher cognitive abilities and issues of awareness, it has a tractable state space, and interesting psychological data is available to model.

1 A Subsymbolic Computational Model

We start by presenting a purely subsymbolic model of anagram processing. By subsymbolic, we mean that the model utilizes only English orthographic statistics and does not have access to an English lexicon. We will argue that this model proves insufficient to explain human performance on anagram problem solving. However, it is a key component of a hybrid symbolic-subsymbolic model we propose, and is thus described in detail.

1.1 Problem Representation

A computational model of anagram processing must represent letter orderings. For example, the model must be capable of representing a solution such as <THEORY>, or any permutation of the letters such as <RYTEHO>. (The symbols "<" and ">" will be used to delimit the beginning and end of a string, respectively.) We adopted a representation of letter strings in which a string is encoded by the set of letter pairs (hereafter, bigrams) contained in the string; for example, the bigrams in <THEORY> are: <T, TH, HE, EO, OR, RY, and Y>. The delimiters < and > are treated as ordinary symbols of the alphabet. We capture letter pairings in a symbolic letter-ordering matrix, or symbolic ordering for short. Figure 1(a) shows the matrix, in which the rows indicate the first letter of the bigram, and the columns indicate the second. A cell of the matrix contains a value of 1 if the corresponding bigram is present in the string. (This matrix formalism and all procedures in the paper can be extended to handle strings with repeated letters, which we do not have space to discuss.) The matrix columns and rows can be thought of as consisting of all letters from A to Z, along with the delimiters < and >. However, in the figure we have omitted rows and columns corresponding to letters not present in the anagram. Similarly, we have omitted the < from the column space and the > from the row space, as they could not by definition be part of any bigram. The seven bigrams indicated by the seven ones in the figure uniquely specify the string THEORY. As we've described the matrix, cells contain the truth value of the proposition that a particular bigram appears in the string being represented. However, the cell values have an interesting alternative interpretation: as the probability that a particular bigram is present. Figure 1(b) illustrates a matrix of this sort, which we call a subsymbolic letter-ordering matrix, or subsymbolic ordering for short. In the figure, the bigram TH occurs with probability 0.8. Although the symbolic orderings are obviously a subset of the subsymbolic orderings, the two representations play critically disparate roles in our model, and thus are treated as separate entities. To formally characterize symbolic and subsymbolic ordering matrices, we define a mask vector, mu, having N = 28 elements, corresponding to the 26 letters of the alphabet plus the two delimiters. Element i of the mask, mu_i, is set to one if the corresponding letter appears in the anagram string and zero if it does not.
In both the symbolic and subsymbolic orderings, the matrices are constrained such that elements in row i and column i must sum to mu_i.

Figure 1: (a) A symbolic letter-ordering matrix for the string THEORY. (b) A subsymbolic letter-ordering matrix whose cells indicate the probabilities that particular bigrams are present in a letter string. (c) A symbolic partial letter-ordering matrix, formed from the symbolic ordering matrix by setting to zero a subset of the elements, which are highlighted in grey. The resulting matrix represents the partial ordering { <TH, RY }.

If one extracts all rows and columns for which mu_i = 1 from a symbolic ordering, as we have done in Figure 1(a), a permutation matrix is obtained. If one extracts all rows and columns for which mu_i = 1 from a subsymbolic ordering, as we have done in Figure 1(b), the resulting matrix is known as doubly stochastic, because each row and column vector can each be interpreted as a probability distribution.

1.2 Constraint Satisfaction Network

A simple computational model can be conceptualized by considering each cell in the subsymbolic ordering matrix to correspond to a standard connectionist unit, and by considering each cell value as the activity level of the unit. In this conceptualization, the goal of the connectionist network is to obtain a pattern of activity corresponding to the solution word, given the anagram. We wish for the model to rely solely on orthographic statistics of English, avoiding lexical knowledge at this stage. Our premise is that an interactive model, one that allows top-down lexical knowledge to come in contact with the bottom-up information about the anagram, would be too powerful; i.e., the model would be superhuman in its ability to identify lexical entries containing a target set of letters. Instead, we conjecture that a suitable model of human performance should be primarily bottom-up, attempting to order letters without the benefit of the lexicon. Of course, the task cannot be performed without a lexicon, but we defer discussion of the role of the lexicon until we first present the core connectionist component of the model. The connectionist model is driven by three constraints: (1) solutions should contain bigrams with high frequency in English, (2) solutions should contain trigrams with high frequency in English, and (3) solutions should contain bigrams that are consistent with the bigrams in the original anagram. The first two constraints attempt to obtain English-like strings. The third constraint is motivated by the observation that anagram solution time depends on the arrangement of letters in the original anagram (e.g., Mayzner & Tresselt, 1959).
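The bigram encoding of Section 1.1 is easy to make concrete. The sketch below builds the 28 x 28 symbolic ordering matrix and the mask vector mu for a word; the helper names are ours, and it assumes the simplified no-repeated-letters case discussed in the paper.

```python
import numpy as np

ALPHABET = "<>" + "abcdefghijklmnopqrstuvwxyz"   # two delimiters + 26 letters
INDEX = {ch: i for i, ch in enumerate(ALPHABET)}

def symbolic_ordering(word):
    """Binary letter-ordering matrix: cell (i, j) is 1 iff bigram ij occurs
    in the delimited string '<word>'. Assumes no repeated letters."""
    m = np.zeros((len(ALPHABET), len(ALPHABET)), dtype=int)
    s = "<" + word.lower() + ">"
    for a, b in zip(s, s[1:]):
        m[INDEX[a], INDEX[b]] = 1
    return m

def mask(word):
    """Mask vector mu: 1 for each symbol present in the delimited string."""
    mu = np.zeros(len(ALPHABET), dtype=int)
    for ch in "<" + word.lower() + ">":
        mu[INDEX[ch]] = 1
    return mu

m = symbolic_ordering("theory")
assert m.sum() == 7   # the seven bigrams <T, TH, HE, EO, OR, RY, Y>
```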
The three constraints are embodied by a constraint-satisfaction network with the following harmony function:

    H = sum_{ij} beta_{ij} p_{ij} + omega sum_{ijk} tau_{ijk} p_{ij} p_{jk} + xi sum_{ij} p_{ij} s_{ij}    (1)

where p_{ij} denotes the value of the cell corresponding to bigram ij, beta_{ij} is monotonically related to the frequency of bigram ij in English, tau_{ijk} is monotonically related to the frequency of trigram ijk in English, s_{ij} is 1 if the original anagram contained bigram ij or 0 otherwise, and omega and xi are model parameters that specify the relative weighting of the trigram and unchanged-ordering constraints, respectively.

Figure 2: The Iterative Extraction-Injection Model

The harmony function specifies a measure of goodness of a given matrix in terms of the degree to which the three sets of constraints are satisfied. Running the connectionist network corresponds to searching for a local optimum in the harmony function. The local optimum can be found by gradient ascent, i.e., defining a unit-update rule that moves uphill in harmony. Such a rule can be obtained via the derivative of the harmony function:

    delta p_{ij} = epsilon * dH/dp_{ij}.

Although the update rule ensures that harmony will increase over time, the network state may violate the conditions of the doubly stochastic matrix by allowing the p_{ij} to take on values outside of [0, 1], or by failing to satisfy the row and column constraints. The procedure applied to enforce the row and column constraints involves renormalizing the activities after each harmony update to bring the activity pattern arbitrarily close to a doubly stochastic matrix. The procedure, suggested by Sinkhorn (1964), involves alternating row and column normalizations (in our case to the values of the mask vector). Sinkhorn proved that this procedure will asymptotically converge on a doubly stochastic matrix. Note that the Sinkhorn normalization procedure must operate at a much finer time grain than the harmony updates, in order to ensure that the updates do not cause the state to wander from the space of doubly stochastic matrices.
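A minimal sketch of one harmony-ascent step and the Sinkhorn renormalization follows. The step size, iteration counts, and the exact interleaving of the two procedures are assumptions; the paper states only that Sinkhorn normalization runs at a much finer grain than the harmony updates.

```python
import numpy as np

def sinkhorn(p, mu, iters=50):
    """Alternately rescale rows and columns of p toward the mask targets mu.

    Converges toward a matrix whose row i and column i each sum to mu[i]
    (Sinkhorn, 1964). Rows/columns with mu[i] = 0 are zeroed out."""
    p = p * np.outer(mu, mu)                    # keep only letters in the anagram
    for _ in range(iters):
        rs = p.sum(axis=1, keepdims=True)
        p = np.where(rs > 0, p * (mu[:, None] / np.maximum(rs, 1e-12)), 0.0)
        cs = p.sum(axis=0, keepdims=True)
        p = np.where(cs > 0, p * (mu[None, :] / np.maximum(cs, 1e-12)), 0.0)
    return p

def harmony_step(p, beta, tau, s, omega, xi, eps=0.01):
    """One gradient-ascent step on the harmony function of equation (1)."""
    # dH/dp_ab = beta_ab + omega * (sum_k tau_abk p_bk + sum_i tau_iab p_ia) + xi * s_ab
    grad = beta + xi * s
    grad += omega * (np.einsum('ijk,jk->ij', tau, p) +
                     np.einsum('kij,ki->ij', tau, p))
    return np.clip(p + eps * grad, 0.0, 1.0)
```

In use, a few Sinkhorn passes would follow each harmony step to keep the activity pattern near the space of (masked) doubly stochastic matrices.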
If no lexical item is retrieved that can serve as a solution, the injection component feeds the candidate solution back into the constraint-satisfaction network in the form of a bias on subsequent processing, in exactly the same way that the original anagram did on the first iteration of constraint satisfaction. Figure 2 shows a high-level sketch of the complete model. The intuition behind this architecture is as follows. The symbolic ordering extracted on one iteration will serve to constrain the model's interpretation of the anagram on the next iteration. Consequently, the feedback forces the model down one path in a solution tree. When viewed from a high level, the model steps through a sequence of symbolic states. The transitions among symbolic states, however, are driven by the subsymbolic constraint-satisfaction network. To reflect the importance of the interplay between symbolic and subsymbolic processing, we call the architecture the iterative extraction-injection model. Before describing the extraction, verification, and injection components in detail, we emphasize one point about the role of the lexicon. The model makes a strong claim about the sort of knowledge used to guide the solution of anagrams. Lexical knowledge is used only for verification, not for generation of candidate solutions. The limited use of the lexicon restricts the computational capabilities of the model, but in a way that we conjecture corresponds to human limitations.

2.1 Symbolic Extraction

The extraction component transforms the subsymbolic ordering matrix into an approximately equivalent symbolic ordering matrix. In essence, the extraction component treats the network activities as probabilities that pairs of letters will be joined, and samples a symbolic matrix from this probability distribution, subject to the restriction that each letter can precede or follow at most one other letter. If subsymbolic matrix element p_{ij} has a value close to 1, then it is clear that bigram ij should be included in the symbolic ordering. However, if a row or column of a subsymbolic ordering matrix is close to uniform, the selection of a bigram in that row or column will be somewhat arbitrary. Consequently, we endow the model with the ability to select only some bigrams and leave other letter pairings unspecified. Thus, we allow the extraction component to consider symbolic partial orderings, i.e., a subset of the letter pairings in a complete ordering. For example, { <TH, RY } is a partial ordering that specifies that the T and H belong together in sequence at the beginning of the word, and the R should precede the Y, but does not specify the relation of these letter clusters to one another or to other letters of the anagram. Formally, a symbolic partial ordering matrix is a binary matrix in which the rows and columns sum to values less than or equal to the corresponding mask value. A symbolic partial ordering can be formed by setting to zero some elements of a symbolic ordering (Figure 1(c)). In the context of this task, a subsymbolic ordering is best viewed as a set of parameters specifying a distribution over a space P of all possible symbolic partial ordering matrices. Rather than explicitly generating and assigning probabilities to each element in P, our approach samples from the distribution specified by the subsymbolic ordering using Markov chain Monte Carlo (Neal, 1993). Our MCMC method obtains samples consistent with the bigram probabilities p_{ij} and the row and column constraints, mu.
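The authors sample partial orderings with MCMC (Neal, 1993); the exact sampler is not given in this excerpt. As a stand-in for illustration only, the following direct sampler respects the same two requirements: bigram inclusion tracks p_{ij}, and each letter takes at most one predecessor and one successor. The threshold and the random visiting order are our own choices, not the paper's.

```python
import numpy as np

def sample_partial_ordering(p, mu, threshold=0.5, rng=None):
    """Draw a symbolic partial ordering from a subsymbolic matrix p.

    Visits candidate bigrams in random order, keeps bigram ij with
    probability p[i, j] when it is reasonably strong, and enforces that
    every letter has at most one successor and one predecessor. Bigrams
    in near-uniform rows/columns tend to stay unspecified."""
    rng = rng or np.random.default_rng()
    n = len(mu)
    out = np.zeros((n, n), dtype=int)
    used_row = np.zeros(n, dtype=bool)
    used_col = np.zeros(n, dtype=bool)
    cells = [(i, j) for i in range(n) for j in range(n) if mu[i] and mu[j]]
    for k in rng.permutation(len(cells)):
        i, j = cells[k]
        if used_row[i] or used_col[j]:
            continue
        if p[i, j] >= threshold and rng.random() < p[i, j]:
            out[i, j] = 1
            used_row[i] = used_col[j] = True
    return out
```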
2.2 Lexical Verification

Lexical verification involves consulting the lexicon to identify and validate candidate solutions. The extracted symbolic partial ordering is fed into the lexical verification component to identify a set of words, each of which is consistent with the partial ordering. By consistent, we mean the word contains all of the bigrams in the partial ordering. This set of words is then checked to see if any word contains the same letters as the anagram. If so, the lexical verifier reports that the problem is solved; otherwise, the lexical verifier indicates failure. Because the list of consistent words can be extremely large, and recalling and processing a large number of candidates seems implausible, we limit the size of the consistent set by introducing a recall parameter eta that controls the maximum size of the consistent set. If the actual number of consistent words is larger, a random sample of size eta is retrieved.

Figure 3: (a) Probability of finding a solution for different word lengths as a function of the number of iterations. (b) Convergence of the extraction-injection model and variants of the feedback mechanism.

2.3 Injection

When the lexical verification component fails, the symbolic partial ordering is injected into the constraint-satisfaction network, replacing the letter ordering of the original anagram, and a new processing iteration begins. Were it not for the new bias injected into the constraint-satisfaction network, the network would produce the same output as on the previous iteration, and the model would likely become stuck without finding a solution. In our experiments, we show that injecting the symbolic partial ordering allows the model to arrive at a solution more rapidly than other sorts of feedback.

3 Results and Discussion

Through simulation of our architecture we modeled several basic findings concerning human anagram problem solving. In our simulations, we define the model solution time to be the number of extraction-injection iterations before the solution word is identified. Figure 3(a) shows the probability of the model finding a solution as a function of the number of iterations the model is allowed to run and the number of letters in the word. The data set consists of 40 examples for each of five different word lengths. The most striking result is that the probability of finding a solution increases monotonically over time. It is also interesting to note that the model's asymptotic accuracy is 100%, indicating that the model is computationally sufficient to perform the task. Of more significance is the fact that the model exhibits the word length effect reported by Sargent (1940): longer words take more time to solve. Our model can explain other experimental results on anagram problem solving. Mayzner and Tresselt (1958) found that subjects were faster to find solutions composed of high-frequency bigrams than solutions composed of low-frequency bigrams. For example, SHIN contains higher frequency bigrams than HYMN. The iterative extraction-injection model reproduced this effect in the solution time to two classes of five five-letter words.
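The consistency test and the recall limit eta can be stated in a few lines. This sketch assumes bigrams are represented as two-character strings over the delimited word; the helper names are hypothetical, not the authors' code.

```python
import random

def consistent_words(partial_bigrams, lexicon, anagram, eta=50, rng=None):
    """Retrieve up to eta lexicon words consistent with a partial ordering.

    partial_bigrams: set of 2-character strings, e.g. {'th', 'ry'}.
    A word is consistent if '<word>' contains every bigram in the set;
    it solves the puzzle if it also uses exactly the anagram's letters."""
    rng = rng or random.Random(0)
    hits = [w for w in lexicon
            if all(b in "<" + w + ">" for b in partial_bigrams)]
    if len(hits) > eta:                    # recall limit from the paper
        hits = rng.sample(hits, eta)
    solved = [w for w in hits if sorted(w) == sorted(anagram)]
    return solved, hits

# e.g. consistent_words({'th', 'ry'}, ['theory', 'other'], 'ryteho')
# -> (['theory'], ['theory'])
```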
Each word was presented 30 times to obtain a distribution of solution times. A mean of 5.3 iterations was required for solutions composed of high-frequency bigrams, compared to a mean of 21.2 iterations for solutions composed of low-frequency bigrams. The difference is statistically reliable (F(1,8) = 30.3, p < .001). It is not surprising that the model produces this result, as the constraint-satisfaction network attempts to generate high-frequency pairings of letters. Mayzner and Tresselt (1959) found that subjects are also faster to solve an anagram if the anagram is composed of low-frequency bigrams. For example, RCDA might be recognized as CARD more readily than would DACR. Our model reproduces this result as well. We tested the model with 25 four-letter target words whose letters could be rearranged to form anagrams with either low or high bigram frequency; each target word was presented 30 times. The mean solution time for low bigram-frequency anagrams was 21.4, versus 27.6 for high bigram-frequency anagrams. This difference is statistically reliable (F(1,24) = 41.4, p < .001). The difference is explained by the model's initial bias to search for solutions containing bigrams in the anagram, plus the fact that the model has a harder time pulling apart bigrams with high frequency. Simulation results to date have focused on the computational properties of the model, with the goal of showing that the iterative extraction-injection process leads to efficient solution times. The experiments involve testing the performance of models with some aspect of the iterative extraction-injection model modified. Three such variants were tested: 1) the feedback connection was removed, 2) random symbolic partial orderings were fed back, and 3) subsymbolic partial orderings were fed back. The experiment used 125 words taken from the Kucera and Francis (1967) corpus, which was also used for bigram and trigram frequencies. The median of 25 solution times for each word/model was used to compute the mean solution time for the original, no-feedback, random-feedback, and continuous-feedback models: 13.43, 41.88, 74.91, and 43.17, respectively. The key result is that the iterative extraction-injection model was reliably 3-5 times faster than the variants, with respective F(1,124), p < 0.001, scores of 87.8, 154.3, and 99.1. Figure 3(b) shows the probability that each of these four models found the solution at a given time. Although our investigation of this architecture is just beginning, we have shown that the model can explain some fundamental behavioral data, and that surprising computational power arises from the interplay of symbolic and subsymbolic information processing.

Acknowledgments

This work benefited from the initial explorations and ideas of Tor Mohling. This research was supported by Grant 97-18 from the McDonnell-Pew Program in Cognitive Neuroscience, and by NSF award IBN-9873492.

References

Kucera, H. & Francis, W. N. (1967). Computational Analysis of Present-Day American English. Providence, RI: Brown University Press.
Mayzner, M. S. & Tresselt, M. E. (1958). Anagram solution times: A function of letter and word frequency. Journal of Experimental Psychology, 56, 376-379.
Mayzner, M. S. & Tresselt, M. E. (1959). Anagram solution times: A function of transitional probabilities. Journal of Experimental Psychology, 63, 510-513.
Neal, R. M. (1993). Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto.
Sargent, S. Stansfeld (1940).
Thinking processes at various levels of difficulty. Archives of Psychology, 249. New York.
Sinkhorn, R. (1964). A relationship between arbitrary positive matrices and doubly stochastic matrices. Annals of Mathematical Statistics, 35(2), 876-879.
GENESIS: A SYSTEM FOR SIMULATING NEURAL NETWORKS

Matthew A. Wilson, Upinder S. Bhalla, John D. Uhley, James M. Bower
Division of Biology, California Institute of Technology, Pasadena, CA 91125

ABSTRACT

We have developed a graphically oriented, general purpose simulation system to facilitate the modeling of neural networks. The simulator is implemented under UNIX and X-windows and is designed to support simulations at many levels of detail. Specifically, it is intended for use in both applied network modeling and in the simulation of detailed, realistic, biologically-based models. Examples of current models developed under this system include mammalian olfactory bulb and cortex, invertebrate central pattern generators, as well as more abstract connectionist simulations.

INTRODUCTION

Recently, there has been a dramatic increase in interest in exploring the computational properties of networks of parallel distributed processing elements (Rumelhart and McClelland, 1986), often referred to as "neural networks" (Anderson, 1988). Much of the current research involves numerical simulations of these types of networks (Anderson, 1988; Touretzky, 1989). Over the last several years, there has also been a significant increase in interest in using similar computer simulation techniques to study the structure and function of biological neural networks. This effort can be seen as an attempt to reverse-engineer the brain with the objective of understanding the functional organization of its very complicated networks (Bower, 1989). Simulations of these systems range from detailed reconstructions of single neurons, or even components of single neurons, to simulations of large networks of complex neurons (Koch and Segev, 1989). Modelers associated with each area of research are likely to benefit from exposure to a large range of neural network simulations. A simulation package capable of implementing these varied types of network models would facilitate this interaction.

DESIGN FEATURES OF THE SIMULATOR

We have built GENESIS (GEneral NEtwork SImulation System) and its graphical interface XODUS (X-based Output and Display Utility for Simulators) to provide a standardized and flexible means of constructing neural network simulations while making minimal assumptions about the actual structure of the neural components. The system is capable of growing according to the needs of users by incorporating user-defined code. We will now describe the specific features of this system.

Device independence. The entire system has been designed to run under UNIX and X-windows (version 11) for maximum portability. The code was developed on Sun workstations and has been ported to Sun3's, Sun4's, Sun 386i's, and Masscomp computers. It should be portable to all installations supporting UNIX and X-11. In addition, we will be developing a parallel implementation of the simulation system (Nelson et al., 1989).

Modular design. The design of the simulator and interface is based on a "building-block" approach. Simulations are constructed of modules which receive inputs, perform calculations on them, and generate outputs (figs. 2, 3). This approach is central to the generality and flexibility of the system, as it allows the user to easily add new features without modification to the base code. A minimal sketch of this idea appears after the next paragraph.

Interactive specification and control. Network specification and control is done at a high level using graphical tools and a network specification language (fig. 1).
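The following Python sketch illustrates the building-block idea described under "Modular design" above. The names (Element, link, connect) are hypothetical stand-ins for GENESIS's C data structures, not its actual API; links share data with no delay, while connections deliver data through simulated channels with delays and transformations, as detailed under "Module classes" below.

    # Illustrative sketch only (hypothetical names, not the GENESIS C API).
    class Element:
        """A computational module: receives inputs, computes, produces outputs."""
        def __init__(self, name):
            self.name = name
            self.fields = {}    # state shared through links (e.g. voltage)
            self.inbox = []     # (arrival_step, value) pairs from connections

    def link(dst, field, src):
        """Let dst read a field of src directly, with no time delay."""
        dst.fields[field] = src

    def connect(src, dst, delay=1, transform=lambda v: v):
        """Return a sender that delivers src's output to dst after a delay,
        optionally transformed (e.g. an axon with conduction delay)."""
        def send(step, value):
            dst.inbox.append((step + delay, transform(value)))
        return send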
The graphics interface provides the highest and most user-friendly level of interaction. It consists of a number of tools which the user can configure to suit a particular simulation. Through the graphical interface the user can display, control, and adjust the parameters of simulations. The network specification language we have developed for network modeling represents a more basic level of interaction. This language consists of a set of simulator and interface functions that can be executed interactively from the keyboard or from text files storing command sequences (scripts). The language also provides for arithmetic operations and program control functions such as looping, conditional statements, and subprograms or macros. Figures 3 and 4 demonstrate how some of these script functions are used.

Simulator and interface toolkits. Extendable toolkits, which consist of module libraries, graphical tools, and the simulator base code itself (fig. 2), provide the routines and modules used to construct specific simulations. The base code provides the common control and support routines for the entire system.

[Figure 1. Levels of interaction with the simulator: the graphics interface, script files, and the GENESIS command window and keyboard all feed the script language interpreter, which drives GENESIS.]

CONSTRUCTING SIMULATIONS

The first step in using GENESIS involves selecting and linking together those modules from the toolkits that will be necessary for a particular simulation (figs. 2, 3). Additional commands in the scripting language establish the network and the graphical interface (fig. 4).

Module classes. Modules in GENESIS are divided into computational modules, communications modules, and graphical modules. All instances of computational modules are called elements. These are the central components of simulations, performing all of the numerical calculations. Elements can communicate in two ways: via links and via connections. Links allow the passing of data between two elements with no time delay and with no computation being performed on the data. Thus, links serve to unify a large number of elements into a single computational unit (e.g., they are used to link elements together to form the neuron in fig. 3C). Connections, on the other hand, interconnect computational units via simulated communication channels which can incorporate time delays and perform transformations on data being transmitted (e.g., axons in fig. 3C). Graphical modules called widgets are used to construct the interface. These modules can issue script commands as well as respond to them, thus allowing interactive access to simulator structures and functions.

Hierarchical organization. In order to keep track of the structure of a simulation, elements are organized into a tree hierarchy similar to the directory structure in UNIX (fig. 3B). The tree structure does not explicitly represent the pattern of links and connections between elements; it is simply a tool for organizing complex groups of elements in the simulation.

Simulation example. As an example of the types of modules available and the process of structuring them into a network simulation and graphical interface, we will describe the construction of a simple biological neural simulation (fig. 3). The model consists of two neurons. Each neuron contains a passive dendritic compartment, an active cell body, an axonal output, and a synaptic input onto the dendrite.
The axon of one neuron connects to a synaptic input of the other. Figure 3 shows the basic structure of the model as implemented under GENESIS. In the model, the synapse, channels, dendritic compartments, cell body, and axon are each treated as separate computational elements (fig. 3C). Links allow elements to share information (e.g., the Na channel needs to have access to the cell-body membrane voltage). Figure 4 shows a portion of the script used to construct this simulation.

[Figure 2. Stages in constructing a simulation: graphics modules, communications modules, and computational modules from the simulator and interface toolkit are selected and linked into a simulation.]

Figure 3. Implementation of a two neuron model in GENESIS. (A) Schematic diagram of compartmentally modeled neurons. Each cell in this simple model has a passive dendritic compartment, an active cell-body, and an output axon. There is a synaptic input to the dendrite of one cell and two ionic channels on the cell body. (B) Hierarchical representation of the components of the simulation as maintained in GENESIS. The cell-body of neuron 1 is referred to as /network/neuronl/cell-body. (C) A representation of the functional links between the basic components of one neuron. (D) Sample interface control and display widgets created using the XODUS toolkit.

Figure 4. Sample script commands for constructing a simulation (see fig. 3):

    // Create different types of elements and assign them names.
    create neuronl
    create active_compartment cell-body
    create passive_compartment dendrite
    create synapse dendrite/synapse
    // Establish functional "links" between the elements.
    link dendrite to cell-body
    link dendrite/synapse to dendrite
    // Set parameters associated with the elements.
    set dendrite capacitance 1.0e-6
    // Make copies of entire element subtrees.
    copy neuronl to neuron2
    // Establish "connections" between two elements.
    connect neuronl/axon to neuron2/dendrite/synapse
    // Set up a graph to monitor an element variable.
    graph neuronl/cell-body potential
    // Make a control panel with several control "widgets".
    xform control
    xdialog nstep set-nstep -default 200
    xdialog dt set-dt -default 0.5
    xtoggle Euler set-euler

SIMULATOR SPECIFICATIONS

Memory requirements of GENESIS. Currently, GENESIS consists of about 20,000 lines of simulator code and a similar amount of graphics code, all written in C. The executable binaries take up about 1.5 Megabytes. A rough estimate of the amount of additional memory necessary for a particular simulation can be calculated from the sizes and number of modules used in a simulation. Typically, elements use around 100 bytes, connections 16, and messages 20. Widgets use 5-20 Kbytes each.

Performance. The overall efficiency of the GENESIS system is highly simulation specific. To consider briefly a specific case, the most sophisticated biologically based simulation currently implemented under GENESIS is a model of piriform (olfactory) cortex (Wilson et al., 1986; Wilson and Bower, 1988; Wilson and Bower, 1989). This simulation consists of neurons of four different types. Each neuron contains from one to five compartments.
Each compartment can contain several channels. On a SUN 386i with 8 Mbytes of RAM, this simulation with 500 cells runs at 1 second per time step.

Other models that have been implemented under GENESIS. The list of projects currently completed under GENESIS includes approximately ten different simulations. These include models of the olfactory bulb (Bhalla et al., 1988), the inferior olive (Lee and Bower, 1988), and a motor circuit in the invertebrate sea slug Tritonia (Ryckebusch et al., 1989). We have also built several tutorials to allow students to explore compartmental biological models (Hodgkin and Huxley, 1952) and Hopfield networks (Hopfield, 1982).

Access/use of GENESIS. GENESIS and XODUS will be made available at the cost of distribution to all interested users. As described above, new user-defined modules can be linked into the simulator to extend the system. Users are encouraged to support the continuing development of this system by sending modules they develop to Caltech. These will be reviewed and compiled into the overall system by GENESIS support staff. We would also hope that users would send completed published simulations to the GENESIS data base. This will provide others with an opportunity to observe the behavior of a simulation first hand. A current listing of modules and full simulations will be maintained and available through an electronic mail newsgroup, Babel. Enquiries about the system should be sent to GENESIS@caltech.edu or GENESIS@caltech.bitnet.

Acknowledgments

We would like to thank Mark Nelson for his invaluable assistance in the development of this system and specifically for his suggestions on the content of this manuscript. We would also like to recognize Dave Bilitch, Wojtek Furmanski, Christof Koch, innumerable Caltech students, and the students of the 1988 MBL summer course on Methods in Computational Neuroscience for their contributions to the creation and evolution of GENESIS (not mutually exclusive). This research was also supported by the NSF (EET-8700064), the NIH (BNS 22205), the ONR (Contract N00014-88-K-0513), the Lockheed Corporation, the Caltech Presidents Fund, the JPL Directors Development Fund, and the Joseph Drown Foundation.

References

D. Anderson (ed.) Neural information processing systems. American Institute of Physics, New York (1988).

U.S. Bhalla, M.A. Wilson, & J.M. Bower. Integration of computer simulations and multi-unit recording in the rat olfactory system. Soc. Neurosci. Abstr. 14:1188 (1988).

J.M. Bower. Reverse engineering the nervous system: An anatomical, physiological, and computer based approach. In: An Introduction to Neural and Electronic Networks. Zornetzer, Davis, and Lau, editors. Academic Press (1989) (in press).

A.L. Hodgkin and A.F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. (Lond.) 117:500-544 (1952).

J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79:2554-2558 (1982).

C. Koch and I. Segev (eds.) Methods in Neuronal Modeling: From Synapses to Networks. MIT Press, Cambridge, MA (in press).

M. Lee and J.M. Bower. A structural simulation of the inferior olivary nucleus. Soc. Neurosci. Abstr. 14:184 (1988).

M. Nelson, W. Furmanski and J.M. Bower. Simulating neurons and neuronal networks on parallel computers. In: Methods in Neuronal Modeling: From Synapses to Networks. C. Koch and I. Segev, editors.
MIT Press, Cambridge, MA (1989) (in press).

S. Ryckebusch, C. Mead and J.M. Bower. Modeling a central pattern generator in software and hardware: Tritonia in sea moss (CMOS). (1989) (this volume).

D.E. Rumelhart, J.L. McClelland and the PDP Research Group. Parallel Distributed Processing. MIT Press, Cambridge, MA (1986).

D. Touretzky (ed.) Advances in Neural Network Information Processing Systems. Morgan Kaufmann Publishers, San Mateo, California (1989).

M.A. Wilson and J.M. Bower. The simulation of large-scale neuronal networks. In: Methods in Neuronal Modeling: From Synapses to Networks. C. Koch and I. Segev, editors. MIT Press, Cambridge, MA (1989) (in press).

M.A. Wilson and J.M. Bower. A computer simulation of olfactory cortex with functional implications for storage and retrieval of olfactory information. In: Neural information processing systems, pp. 114-126. D. Anderson, editor. AIP Press, New York, NY (1988).

M.A. Wilson, J.M. Bower and L.B. Haberly. A computer simulation of piriform cortex. Soc. Neurosci. Abstr. 12:1358 (1986).
Combining ICA and top-down attention for robust speech recognition

Un-Min Bae and Soo-Young Lee
Department of Electrical Engineering and Computer Science and Brain Science Research Center
Korea Advanced Institute of Science and Technology
373-1 Kusong-dong, Yusong-gu, Taejon, 305-701, Korea
bum@neuron.kaist.ac.kr, sylee@ee.kaist.ac.kr

Abstract

We present an algorithm which compensates for the mismatches between the characteristics of real-world problems and the assumptions of the independent component analysis algorithm. To provide additional information to the ICA network, we incorporate top-down selective attention. An MLP classifier is added to the separated signal channel, and the error of the classifier is backpropagated to the ICA network. This backpropagation process results in an estimate of the expected ICA output signal for the top-down attention. Then, the unmixing matrix is retrained according to a new cost function representing the backpropagated error as well as independence. It modifies the density of the recovered signals to the density appropriate for classification. For noisy speech signals recorded in real environments, the algorithm improved the recognition performance and showed robustness against parametric changes.

1 Introduction

Independent Component Analysis (ICA) is a method for blind signal separation. ICA linearly transforms data to be statistically as independent from each other as possible [1,2,5]. ICA depends on several assumptions, such as linear mixing and source independence, which may not be satisfied in many real-world applications. In order to apply ICA to most real-world problems, it is necessary either to relax all of the assumptions or to compensate for the mismatches with another method. In this paper, we present a complementary approach to compensate for the mismatches. The top-down selective attention from a classifier to the ICA network provides additional information about the signal-mixing environment. A new cost function is defined to retrain the unmixing matrix of the ICA network considering the propagated information. Under a stationary mixing environment, the averaged adaptation by iterative feedback operations can adjust the feature space to be more helpful to classification performance. This process can be regarded as a selective attention model in which input patterns are adapted according to top-down information. The proposed algorithm was applied to noisy speech recognition in real environments and showed the effectiveness of the feedback operations.

2 The proposed algorithm

2.1 Feedback operations based on selective attention

As previously mentioned, ICA supposes several assumptions. For example, one assumption is a linear mixing condition, but in general, there is inevitable nonlinearity in the microphones used to record input signals. Such mismatches between the assumptions of ICA and real mixing conditions cause unsuccessful separation of sources. To overcome this problem, a method to supply valuable information to the ICA network was proposed. In the learning phase of ICA, the unmixing matrix is subject to the signal-mixing matrix, not the input patterns. Under a stationary mixing environment where the mixing matrix is fixed, iteratively providing additional information about the mixing matrix can contribute to improving blind signal separation performance.
The algorithm performs feedback operations from a classifier to the ICA network in the test phase, which adapts the unmixing matrices of ICA according to a newly defined measure considering both independence and classification error. This can result in adaptation of the input space of the classifier and so improve recognition performance. This process is inspired by the selective attention model [9,10], which calculates expected input signals according to top-down information.

In the test phase, as shown in Figure 1, ICA separates signal and noise, and Mel-frequency cepstral coefficients (MFCCs) extracted as a feature vector are delivered to a classifier, a multi-layer perceptron (MLP). After classification, the error function of the classifier is defined as

    E_{mlp} = \frac{1}{2} \sum_i (t_{mlp,i} - y_{mlp,i})^2,    (1)

where t_{mlp,i} is the target value of the output neuron y_{mlp,i}. In general, the target values are not known and should be determined from the outputs y_{mlp}. Only the target value of the highest output is set to 1, and the others are set to -1 when the nonlinear function of the classifier is the bipolar sigmoid function. The algorithm performs a gradient-descent calculation by error backpropagation. To reduce the error, it computes the required changes of the input values of the classifier and finally those of the unmixed signals of the ICA network. Then, the learning rule of the ICA algorithm should be changed considering these variations. The newly defined cost function of the ICA network includes the backpropagated error term as well as the joint entropy H(y_{ica}) of the outputs y_{ica}:

    E_{ica} = -H(y_{ica}) + \gamma \cdot \frac{1}{2}(u_{target} - u)(u_{target} - u)^H
            = -H(y_{ica}) + \gamma \cdot \frac{1}{2} \Delta u \, \Delta u^H,    (2)

where u are the estimated recovered sources and \gamma is a coefficient which represents the relative importance of the two terms. The learning rule derived using gradient descent on the cost function in Eq. (2) is

    \Delta W \propto [I - \varphi(u) u^H] W + \gamma \cdot x \, \Delta u,    (3)

where x are the input signals of the ICA network. The first term in Eq. (3) is the learning rule of ICA, which is applicable to complex-valued data in the frequency
If classification performance is good, the averaged changes of the unmixing matrix over the total input patterns can contribute to improving recognition performance. 2.2 Considering the assumptions of ICA The assumptions of ICA [3,4,5] are summarized as follows. Figure 2: a nonlinear mixing model due to the distortions of microphones 1. The sources are linearly mixed. 2. The sources are mutually independent. 3. At most, one source is normally distributed. 4. The number of sensors is equal to or greater than the number of sources. 5. No sensor noise or only low additive noise signals are permitted. The assumptions 4 and 5 can be released if there are enough sensors. The assumption 3 is also negligible because the source distribution is usually approximated as super-Gaussian or Laplacian distributions in the speech recognition problem. As to speech recognition in real mixing environments, the nonlinearity of microphones is an inevitable problem. Figure 2 shows a nonlinear mixing model, the nonlinear functions g(.) and h(?) denote the distortions of microphones. s are original sources, x are observed signals, and u are the estimates of the recovered sources. If the sources 81 and 82 are mutually independent, the random variables 8r and 8 2 are still independent each other, and so are Voo and VlO. The density of Zl = VOO+VlO equals the convolution of the densities of Voo and VlO [7]. = = f Pvoo(Zl - VlO)PVIO(VlO)dvlO, P(Zl) h~ (5) . After all, the observed signal Xl is not a linear mixture of two independent components due to the nonlinear distortion h(?). The assumption of source independence is violated. In this situation, it is hard to expect what would be the leA solution and to assert the solution is reliable. Even if Xl has two independent components, which is the case of linear distortion of microphones, there is a conflict between independence and source density approximation because the densities of independent components of observed signals are different from those of original sources by g(.) and h(?), and may be far from the density approximated by f(?). The proposed algorithm can be a solution to this problem. In the training phase, a classifier learns noiseless data and the density of Xl used for the learning is p(xd = p(81) aoo h~g~ . (6) The second backpropagated term in the cost function Eq.(2) changes the unmixing matrix W to adapt the density of unmixed signals to the density that the classifier Table 1: The recognition rates of noisy speech recorded with F-16 fighter noise (%) SNR MLP leA The proposed algorithm lJlean 99.9 99.7 99.9 Training data 15dB lOdB 93.3 73.5 97.0 91.9 99.3 94.5 5dB 42.8 78.7 lJlean 96.1 93.9 Test data 15dB 10dB 84.8 63.0 90.6 85.6 5dB 36.7 68.9 80.6 96.1 93.5 71.1 86.3 learned. This can be a clue to what should be the leA solution. Iterative operations over the total data induce that the averaged change of the unmixing matrix becomes roughly a function of the nonlinearity g(.) and h(?), not a certain density P(Sl) subject to every pattern. 3 Noisy Speech Recognition in Real Environments The proposed algorithm was applied to isolated-word speech recognition. The input data are convolved mixtures of speech and noise recorded in real environments. The speech data set consists of 75 Korean words of 48 speakers, and F-16 fighter noise and speech babbling noise were used as noise sources. Each leA network has two inputs and two outputs for the signal and noise sources. 
Tables 1 and 2 show the recognition results for the three methods: MLP only, MLP with standard leA, and the proposed algorithm. 'Training data' mean the data used for learning of the classifier, and 'Test data' are the rest. leA improves classification performance compared to MLP only in the heavy-noise cases, but in the cases of clean data, leA does not contribute to recognition and the recognition rates are lower than those of MLP only. The proposed algorithm shows better recognition performance than standard leA for both training and test data. Especially, for the clean data, the proposed algorithm improves the recognition rates to be the same as those of MLP only in most cases. The algorithm reduces the false recognition rates by about 30% to 80% in comparison with standard leA when signal to noise ratios (SNRs) are 15dB or higher. With such low noise, the classification performance of MLP is relatively reliable, and MLP can provide the leA network for helpful information. However, with heavy noise, the recognition rates of MLP sharply decrease, and the error backpropagation can hardly provide valuable information to the leA network. The overall improvement for the training data is higher than that for the test data. This is because the the recognition performance of MLP is better for the training data. As shown in Figure 3, iterative feedback operations decrease the false recognition rates, and the variation of the unknown parameter '"Y in Eq.(2) doesn't affect the final recognition performance. The variation of the learning rate for updating the unmixing matrix also doesn't affect the final performance, and it only influences on the converging time to reach the final recognition rates. The learning rate was fixed regardless of SNR in all of the experiments. 4 Discussion The proposed algorithm is an approach to complement leA by providing additional information based on top-down selective attention with a pre-trained MLP classifier. The error backpropagation operations adapt the density of recovered signals Table 2: The recognition rates of noisy speech recorded with speech babbling noise (%) SNR MLP ICA The proposed algorithm lJlean 99.7 98.5 99.7 Training data 15dtl lOdtl 88.6 61.5 95.2 91.9 97.7 92.5 5dtl 32.6 76.5 lJlean 96.8 91.7 Test data 15dtl 10dtl 82.9 64.5 88.6 85.1 5dtl 38.5 73.2 76.7 97.2 93.1 73.4 87.4 according to the new cost function of ICA. This can help ICA find the solution proper for classification under the nonlinear and independence violations, but this needs the stationary condition. For nonstationary environments, a mixture model like the ICA mixture model [6] can be considered. The ICA mixture model can assign class membership to each environment category and separate independent sources in each class. To completely settle the nonlinearity problem in real environment, it is necessary to introduce a scheme which models the nonlinearity such as the distortions of microphones. Multi-layered ICA can be an approach to model nonlinearity. In the noisy recognition problem, the proposed algorithm improved recognition performance compared to ICA alone. Especially in moderate noise cases, the algorithm remarkably reduced the false recognition rates. This is due to the high classification performance of the pre-trained MLP. In the case of heavy noise the expected ICA output estimated from the top-down attention may not be accurate, and the selective attention does not help much. It is natural that we only put attention to familiar subjects. 
Therefore more robust classifiers may be needed for signals with heavy noise. Acknowledgments This work was supported as a Brain Science & Engineering Research Program sponsored by Korean Ministry of Science and Technology. References [1] Amari, S., Cichocki, A., and Yang, H. (1996) A new learning algorithm for blind signal separation, In Advances in Neural Information Processing Systems 8, pp. 757-763. [2] Bell, A. J. and Sejnowski, T. J. (1995) An information-maximization approach to blind separation and blind deconvolution, Neural Computation, 7:1129-1159. [3] Cardoso, J.-F. and Laheld, B. (1996) Equivariant adaptive source separation, IEEE Trans. on S.P., 45(2):434-444. [4] Comon, P. (1994) Independent component analysis - a new concept?, Signal Processing, 36(3):287-314. [5] Lee, T.-W. (1998) Independent component analysis - theory and applications, Kluwer Academic Publishers, Boston. [6] Lee, T.-W., Lewicki, M. S., and Sejnowski, T. J. (1999) ICA mixture models for unsupervised classification of non-Gaussian sources and automatic context 2.5 ~~~~~~~~~~~~~~~~~~~~~~~~~~~j ~ ~:~ ~- ~ ~ 2 1ll r--------~---_____, ~~~~~~~:::::::::~~:::~~::~~j ~ ~:~ ~- ~ ~ 6 1ll a: a: c:: 1.5 c:: 4.5 o o :;:; :;:; 'E 'E ell 8 7.5 r-~------~---_____' ell 8 1 Q) 3 Q) a: a: Q) Q) !!) 0.5 '" U. !!) K----- --------------- ----- ----- ---------O '" 1.5 U. O '------..~~~~~~~~~~~--"-----' o 5 10 15 '------~-~-~-~--------' o Iteration of total data set 5 (a) ~12 ~~~~~~~~~~~~~~~~~~~~~~~~~~~j ~ ~~:~ ~- 1ll o c:: 21 '';::; 'E ell 6 -------~---~ - ~~~~ Q) - - - - - - - - - - - - - - - ---~ . ~~~ ell 8 19 Q) a: a: Q) '" 1:- o :;:; !!) :~~~~~:::~~~~::~~~~~::::~~:::i : ;~:~ ~ ~23 a: 9 'E 8 ,----~---~---__, 1ll a: c:: 15 (b) 25 ~ 10 Iteration of total data set Q) !!) 17 3 '" U. U. 5 10 Iteration of total data set (c) 15 15 '------~-~-~-~---~ o 5 10 15 Iteration of total data set (d) Figure 3: The false recognition rates by iteration of total data and the value of the 'Y parameter. (a) Clean speech; (b) SNR=15 dB; (c) SNR=lO dB; (d) SNR=5 dB [7] [8] [9] [10] [11] switching in blind signal separation, IEEE Trans . on Pattern Analysis and Machine Intelligence, in press. Papoulis, A. (1991) Probability, random variables, and stochastic processes, McGraw-Hill, Inc. Park, H.-M., Jung, H.-Y., Lee, T.-W., and Lee, S.-Y. (1999) Subbandbased blind signal separation for noisy speech recognition, Electronics Letters, 35(23) :2011-2012. Park, K.-Y. and Lee, S.-Y. (1999) Selective attention for robust speech recognition in noisy environments, In Proc. of IJCNN, paper no. 829. Park, K.-Y. and Lee, S.-Y. (2000) Out-of-vocabulary rejection based on selective attention model, Neural Processing Letters, 12:41-48. Smaragdis, P. (1997) Information theoretic approaches to source separation, Masters Thesis, MIT Media Arts and Science Dept.
Automated State Abstraction for Options using the U-Tree Algorithm

Anders Jonsson, Andrew G. Barto
Department of Computer Science, University of Massachusetts, Amherst, MA 01003
{ajonsson,barto}@cs.umass.edu

Abstract

Learning a complex task can be significantly facilitated by defining a hierarchy of subtasks. An agent can learn to choose between various temporally abstract actions, each solving an assigned subtask, to accomplish the overall task. In this paper, we study hierarchical learning using the framework of options. We argue that to take full advantage of hierarchical structure, one should perform option-specific state abstraction, and that if this is to scale to larger tasks, state abstraction should be automated. We adapt McCallum's U-Tree algorithm to automatically build option-specific representations of the state feature space, and we illustrate the resulting algorithm using a simple hierarchical task. Results suggest that automated option-specific state abstraction is an attractive approach to making hierarchical learning systems more effective.

1 Introduction

Researchers in the field of reinforcement learning have recently focused considerable attention on temporally abstract actions (e.g., [1,3,5,6,7,9]). The term temporally abstract describes actions that can take variable amounts of time. One motivation for using temporally abstract actions is that they can be used to exploit the hierarchical structure of a problem. Among other things, a hierarchical structure is a natural way to incorporate prior knowledge into a learning system by allowing reuse of temporally abstract actions whose policies were learned in other tasks. Learning in a hierarchy can also significantly reduce the number of situations between which a learning agent needs to discriminate.

We use the framework of options [6,9], which extends the theory of reinforcement learning to include temporally abstract actions. In many cases, accurately executing an option's policy does not depend on all state features available to the learning agent. Further, the features that are relevant often differ from option to option. Within a hierarchical learning system, it is possible to perform option-specific state abstraction by which irrelevant features specific to each option are ignored. Using option-specific state abstraction in a hierarchical learning system can save memory through the development of compact state representations, and it can accelerate learning because of the generalization induced by the abstraction.

Dietterich [2] introduced action-specific state abstraction in a hierarchy of temporally abstract actions. However, his approach requires the system developer to define a set of relevant state features for each action prior to learning. As the complexity of a problem grows, it becomes increasingly difficult to hand-code such state representations. One way to remedy this problem is to use an automated process for constructing state representations. We apply McCallum's U-Tree algorithm [4] to individual options to achieve automated, option-specific state abstraction. The U-Tree algorithm automatically builds a state-feature representation starting from one that makes no distinctions between different observation vectors. Thus, no specification of state-feature dependencies is necessary prior to learning. In Section 2, we give a brief description of the U-Tree algorithm. Section 3 introduces modifications necessary to make the U-Tree algorithm suitable for learning in a hierarchical system.
We describe the setup of our experiments in Section 4 and present the results in Section 5. Section 6 concludes with a discussion of future work.

2 The U-Tree algorithm

The U-Tree algorithm [4] retains a history of transition instances T_t = <T_{t-1}, a_{t-1}, r_t, s_t> composed of the observation vector s_t at time step t, the previous action a_{t-1}, the reward r_t received during the transition into s_t, and the previous instance T_{t-1}. A decision tree - the U-Tree - sorts a new instance T_t based on its components and assigns it to a unique leaf of the tree. The distinctions associated with a leaf are determined by the root-to-leaf path. For each leaf-action pair (L_j, a), the algorithm keeps an action value Q(L_j, a) estimating the future discounted reward associated with being in L_j and executing a. The utility of a leaf is denoted U(L_j) = \max_a Q(L_j, a). The algorithm also keeps a model consisting of estimated transition probabilities \Pr(L_k | L_j, a) and expected immediate rewards R(L_j, a) computed from the transition instances. The model is used in performing one sweep of value iteration after the execution of each action, modifying the values of all leaf-action pairs (L_j, a):

    Q(L_j, a) \leftarrow R(L_j, a) + \gamma \sum_{L_k} \Pr(L_k | L_j, a) \, U(L_k).

One can use other reinforcement learning algorithms to update the action values, such as Q-learning or prioritized sweeping.

The U-Tree algorithm periodically adds new distinctions to the tree in the form of temporary nodes, called fringe nodes, and performs statistical tests to see whether the added distinctions increase the predictive power of the U-Tree. Each distinction is based on (1) a perceptual dimension, which is either an observation or a previous action, and (2) a history index, indicating how far back in the current history the dimension will be examined. Each leaf of the tree is extended with a subtree of a fixed depth z, constructed from permutations of all distinctions not already on the path to the leaf. The instances associated with the leaf are distributed to the leaves of the added subtree - the fringe nodes - according to the corresponding distinctions. A statistical test, the Kolmogorov-Smirnov (KS) test, compares the distribution of future discounted reward of the leaf node's policy action with that of a fringe node's policy action. The distribution of future discounted reward associated with a node L_j and its policy action a = \arg\max_a Q(L_j, a) is composed of the estimated future discounted rewards of individual instances T_t \in T(L_j, a), given by:

    V(T_t) = r_{t+1} + \gamma \sum_{L_k} \Pr(L_k | L_j, a) \, U(L_k).

The KS test outputs a statistical difference d_{L_j, L_k} \in [0,1] between the distributions of two nodes L_j and L_k. The U-Tree algorithm retains the subtree of distinctions i at a leaf L_j if the sum of the KS statistical differences over the fringe nodes F(L_j, i) of the subtree is (1) larger than the sum of the KS differences of all other subtrees, and (2) exceeds some threshold \delta. That is, the tree is extended from leaf L_j with a subtree i of new distinctions if, for all subtrees m \neq i:

    \sum_{F(L_j,i)} d_{L_j, F(L_j,i)} > \sum_{F(L_j,m)} d_{L_j, F(L_j,m)}    and    \sum_{F(L_j,i)} d_{L_j, F(L_j,i)} > \delta.

Whenever the tree is extended, the action values of the previous leaf node are passed on to the new leaf nodes. One can restrict the number of distinctions an agent can make at any one time by imposing a limit on the depth of the U-Tree. The length of the history the algorithm needs to retain depends only on the tree size and not on the size of the overall state set.
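Before moving on, the leaf update and expansion test just described can be sketched as follows. The data structures and the use of scipy's two-sample KS test in place of the paper's d statistic are our assumptions, not the authors' implementation:

    # Sketch of the per-step value-iteration sweep and KS-based scoring.
    from scipy.stats import ks_2samp

    def value_iteration_sweep(Q, P, R, gamma):
        """Q: {leaf: {action: value}}; P: {leaf: {action: {next_leaf: prob}}};
        R: {leaf: {action: expected reward}}. One sweep over all pairs."""
        U = {L: max(q.values()) for L, q in Q.items()}
        for L, q in Q.items():
            for a in q:
                q[a] = R[L][a] + gamma * sum(p * U[Lk]
                                             for Lk, p in P[L][a].items())

    def subtree_score(leaf_returns, fringe_returns):
        """Sum of KS distances between the leaf's return distribution and each
        fringe node's; the subtree is kept if the best such score beats all
        other candidate subtrees and exceeds the threshold delta."""
        return sum(ks_2samp(leaf_returns, fr).statistic
                   for fr in fringe_returns)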
Consequently, the algorithm has the potential to scale well to large tasks. In previous experiments, the U-Tree algorithm was able to learn a compact state representation together with a satisfactory policy in a complex driving task [4]. A version of the U-Tree algorithm suitable for continuous state spaces has also been developed and successfully used in robot soccer [10].

3 Adapting the U-Tree algorithm for options

We now turn to the issue of adapting the U-Tree algorithm for use with options and hierarchical learning architectures. Given a finite Markov decision process with state set S, an option o = <I, \pi, \beta> consists of a set I \subseteq S of states from which the option can be initiated, a closed-loop policy \pi for the choice of actions, and a termination condition \beta which, for each state, gives the probability that the option will terminate when that state is reached. Primitive actions generalize to options that always terminate after one time step. It is easy to define hierarchies of options in which the policy of an option can select other options. A local reward function can be associated with an option to facilitate learning the option's policy.

What makes the U-Tree algorithm so suitable for performing option-specific state abstraction is that a U-Tree simultaneously defines a state representation and a policy over this representation. With a separate U-Tree assigned to each option, the algorithm is able to perform state abstraction separately for each option while modifying its policy. Because options at different levels of a hierarchy operate on different time scales, their transition instances must take different forms. To make our scheme work, we need to add a notion of temporal abstraction to the definition of a transition instance:

Definition: A transition instance of an option o has the form T^o_t = <T^o_{t-k}, o_{t-k}, R_t, s_t>, where s_t is the observation vector at time step t, o_{t-k} is the option previously executed by option o, terminating at time t and with a duration k, R_t = \sum_{i=1}^{k} \gamma^{i-1} r_{t-k+i} is the discounted sum of rewards received during the execution of o_{t-k}, and T^o_{t-k} is the previous instance.

Since options at one level in a hierarchy are executed one at a time, they will each experience a different sequence of transition instances. For the U-Tree algorithm to work under these conditions, the U-Tree of each option has to keep its own history of instances and base distinctions on these instances alone. The U-Tree algorithm was developed for infinite-horizon tasks. Because an option terminates and may not be executed again for some time, its associated history will be made up of finite segments corresponding to separate executions of the option. The first transition instance recorded during an execution is independent of the last instance recorded during a previous execution. Consequently, we do not allow updates across segments. With these modifications, the U-Tree algorithm can be applied to hierarchical learning with options.

[Figure 1: The Taxi task.]

3.1 Intra-option learning

When several options operate in the same parts of the state space and choose from among the same actions, it is possible to learn something about one option from the behavior generated by the execution of other options. In a process called intra-option learning [8], the action values of one option are updated based on actions executed in another, associated option. The update only occurs if the action executed in the latter has a non-zero probability of being executed in the former.
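The temporally abstract instances defined above, which also underlie the shared histories discussed next, can be sketched as a small record type; the field names and helper below are illustrative assumptions, not the authors' code:

    # Sketch of the temporally abstract transition instance T^o_t.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class OptionInstance:
        prev: Optional["OptionInstance"]  # T^o_{t-k}, the previous instance
        option: int                       # o_{t-k}, the option just executed
        reward: float                     # R_t, discounted sum over k steps
        obs: Tuple[int, ...]              # s_t, observation at termination

    def discounted_return(rewards, gamma):
        """R_t = sum_{i=1}^{k} gamma^(i-1) r_{t-k+i} for an option that ran
        for k steps, given its per-step rewards in order."""
        return sum(gamma**i * r for i, r in enumerate(rewards))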
Similarly, we can base distinctions in the U-Tree associated with one option on transition instances recorded during the execution of another option. We do this by adding instances recorded during the execution of one option to the history of each associated option. By associating each instance with a vector of leaves, one for the U-Tree of each option, this approach does not require additional memory for keeping multiple copies of an instance. For the scheme to work, we introduce a vector of rewards R_t = {R^{o'}_t} in an instance T^o_t, where R^{o'}_t is the discounted sum of local rewards for each option o' associated with o_{t-k}.

4 Experiments

We tested our version of the U-Tree algorithm on the Taxi task [1], in which an agent - the taxi - moves around on a grid (Figure 1). The taxi is assigned the task of delivering passengers from their locations to their destinations, both chosen at random from the set of pick-up/drop-off sites P = {1,2,3,4}. The taxi agent's observation vector s = (x, y, i, d) is composed of the (x, y)-position of the taxi, the location i \in P \cup {taxi} of the current passenger, and this passenger's destination d \in P. The actions available to the taxi are Pick-up, Drop-off, and Move(m), m \in {N, E, S, W}, the four cardinal directions. When a passenger is delivered, a new passenger appears at a random pickup site. The rewards provided to the taxi are:

    +19 for delivering the passenger
    -11 for illegal Pick-up or Drop-off
     -1 for any other action (including moving into walls)

To aid the taxi agent, we introduced four options: Navigate(p) = <I^p, \pi^p, \beta^p>, p \in P, where, letting S denote the set of all observation vectors and G^p = {(x, y, i, d) \in S | (x, y) is the location of p}:

    I^p:     S - G^p
    \pi^p:   the policy for getting to G^p that the agent is trying to learn
    \beta^p: 1 if s \in G^p; 0 otherwise.

We further introduced a local reward R^p_t for Navigate(p), identical to the global reward provided to the agent, with the exception that R^p_t = 9 for reaching G^p.

In our application of the U-Tree algorithm to the Taxi problem, the history of each option had a maximum length of 6,000 instances. If this length was exceeded, the oldest instance in the history was discarded. Expanding the tree was only considered if there were more than 3,000 instances in the history. We set the expansion depth z to 1 and the expansion threshold \delta to 1.0, except when no distinctions were present in the tree, in which case \delta = 0.3. The algorithm used this lower threshold when the agent was not able to make any distinctions, because in this case it is difficult to accumulate enough evidence of statistical difference to accept a distinction. Since the U-Tree algorithm does not go back and reconsider distinctions in the tree, it is important to reduce the number of incorrect distinctions due to sparse statistical evidence. Therefore, our implementation only compared two distributions of future discounted reward between leaves if each contained more than 15 instances. Because the Taxi task is fully observable, we set the history index of the U-Tree algorithm to zero.

For exploration, the system used an \epsilon-softmax strategy, which picks a random action with probability \epsilon and performs softmax otherwise. Normally, tuning the softmax temperature \tau provides a good balance between exploration and exploitation, but as the U-Tree evolves, a new value of \tau may improve performance. To avoid re-tuning \tau, the \epsilon-random part ensured that all actions were executed regularly.
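A minimal sketch of that exploration rule follows; the parameter values are illustrative, not the tuned values from the experiments:

    # epsilon-softmax action selection as described above.
    import numpy as np

    def eps_softmax(q_values, eps=0.1, tau=0.5, rng=None):
        rng = rng or np.random.default_rng()
        if rng.random() < eps:
            return int(rng.integers(len(q_values)))  # uniform random action
        z = np.asarray(q_values, dtype=float) / tau
        p = np.exp(z - z.max())
        p /= p.sum()
        return int(rng.choice(len(q_values), p=p))   # softmax over Q-values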
We designed one set of experiments to examine the efficiency of intra-option learning. We randomly selected one of the options Navigate(p) to execute, and randomly selected a new position for the taxi whenever it reached p, ignoring the issue of delivering a passenger. At the beginning of each learning run, we assigned a U-Tree containing a single node to each option. In one set of runs, the algorithm used intra-option learning, and in another set, it used regular learning, in which the U-Trees of different options did not share any instances.

In a second set of experiments, the policies of the options and the overall Taxi task were learned in parallel. We allowed the policy of the overall task to choose between the options Navigate(p) and the actions Pick-up and Drop-off. The reward provided for the overall task was the sum of the global reward and the local reward of the option currently being executed (cf. Digney [3]). When a passenger was delivered, a new taxi position was selected randomly and a new passenger appeared at a randomly selected pickup site.

5 Results

The results from the intra-option learning experiments are shown in Figure 2. The graphs for intra-option learning (solid) and regular learning (broken) are averaged over 5 independent runs. We tuned \tau and \epsilon for each set of learning runs to give maximum performance. At intervals of 500 time steps, the U-Trees of the options were saved and evaluated separately. The evaluation consisted of fixing a target, repeatedly navigating to that target for 25,000 time steps, randomly repositioning the taxi every time the target was reached, repeating for all targets, and adding the rewards. From these results, we conclude that (1) intra-option learning converges faster than regular learning, and (2) intra-option learning achieves a higher level of performance. Faster convergence is due to the fact that the histories associated with the options fill up more quickly during intra-option learning. Higher performance is achieved because the amount of evidence is larger. The target of an option is only reached once during each execution of the option, whereas it might be reached several times during the execution of another option.

[Figure 2: Comparison between intra-option and regular learning (performance versus time steps).]

In the second set of experiments, we performed 10 learning runs, each with a duration of
Compared to the 500 distinct states in a flat representation of the task, or the 2,500 distinct states that the five policies would require without abstraction, our result is a significant improvement. Certainly, the memory required to store histories should also be taken into account. However, we believe that the memory savings due to option-specific state abstraction in larger tasks will significantly outweigh the memory requirement for U-Trees.

6 Conclusion

We have shown that the U-Tree algorithm can be used with options in a hierarchical learning system. Our results suggest that automated option-specific state abstraction performed by the algorithm is an attractive approach to making hierarchical learning systems more effective. Although our testbed was small, we believe this is an important first step toward automated state abstraction in hierarchies. We also incorporated intra-option learning into the U-Tree algorithm, a method that allows a learning agent to extract more information from the training data. Results show that intra-option learning can significantly improve the performance of a learning agent performing option-specific state abstraction. Although our main motivation for developing a hierarchical version of the U-Tree algorithm was automating state abstraction, the new definition of a transition instance enables history to be structured hierarchically, something that is useful when learning to solve problems in partially observable domains.

Figure 3: U-Trees for the different policies.

Future work will examine the performance of option-specific state abstraction using the U-Tree algorithm in larger, more realistic tasks. We also plan to develop a version of the U-Tree algorithm that goes back in the tree and reconsiders distinctions. This has the potential to improve the performance of the algorithm by correcting nodes for which incorrect distinctions were made.

Acknowledgments

The authors would like to thank Tom Dietterich for providing code for the Taxi task, Andrew McCallum for valuable correspondence regarding the U-Tree algorithm, and Ted Perkins for reading and providing helpful comments on the paper. This work was funded by the National Science Foundation under Grant No. ECS-9980062. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

[1] Dietterich, T. (2000) Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research 13:227-303.
[2] Dietterich, T. (2000) State abstraction in MAXQ hierarchical reinforcement learning. In S. A. Solla, T. K. Leen, and K.-R. Muller (eds.), Advances in Neural Information Processing Systems 12, pp. 994-1000. Cambridge, MA: MIT Press.
[3] Digney, B. (1996) Emergent hierarchical control structures: Learning reactive/hierarchical relationships in reinforcement environments. In P. Maes and M. Mataric (eds.), From Animals to Animats 4. Cambridge, MA: MIT Press.
[4] McCallum, A. (1995) Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, Computer Science Department, University of Rochester.
[5] Parr, R., and Russell, S. (1998) Reinforcement learning with hierarchies of machines. In M. I. Jordan, M. J. Kearns, and S. A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 1043-1049. Cambridge, MA: MIT Press.
[6] Precup, D., and Sutton, R.
(1998) Multi-time models for temporally abstract planning. In M. I. Jordan, M. J. Kearns, and S. A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 1050-1056. Cambridge, MA: MIT Press.
[7] Singh, S. (1992) Reinforcement learning with a hierarchy of abstract models. In Proc. of the 10th National Conf. on Artificial Intelligence, pp. 202-207. Menlo Park, CA: AAAI Press/MIT Press.
[8] Sutton, R., Precup, D., and Singh, S. (1998) Intra-option learning about temporally abstract actions. In Proc. of the 15th Intl. Conf. on Machine Learning, ICML'98, pp. 556-564. Morgan Kaufmann.
[9] Sutton, R., Precup, D., and Singh, S. (1999) Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112:181-211.
[10] Uther, W., and Veloso, M. (1997) Generalizing adversarial reinforcement learning. AAAI Fall Symposium on Model Directed Autonomous Systems.
A Linear Programming Approach to Novelty Detection

Colin Campbell
Dept. of Engineering Mathematics, Bristol University, Bristol BS8 1TR, United Kingdom
C.Campbell@bris.ac.uk

Kristin P. Bennett
Dept. of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York 12180-3590, United States
bennek@rpi.edu

Abstract

Novelty detection involves modelling the normal behaviour of a system, hence enabling detection of any divergence from normality. It has potential applications in many areas such as detection of machine damage or highlighting abnormal features in medical data. One approach is to build a hypothesis estimating the support of the normal data, i.e. constructing a function which is positive in the region where the data is located and negative elsewhere. Recently kernel methods have been proposed for estimating the support of a distribution and they have performed well in practice; training involves solution of a quadratic programming problem. In this paper we propose a simpler kernel method for estimating the support based on linear programming. The method is easy to implement and can learn large datasets rapidly. We demonstrate the method on medical and fault detection datasets.

1 Introduction

An important classification task is the ability to distinguish between new instances similar to members of the training set and all other instances that can occur. For example, we may want to learn the normal running behaviour of a machine and highlight any significant divergence from normality which may indicate onset of damage or faults. This issue is a generic problem in many fields. For example, an abnormal event or feature in medical diagnostic data typically leads to further investigation. Novel events can be highlighted by constructing a real-valued density estimation function. However, here we will consider the simpler task of modelling the support of a data distribution, i.e. creating a binary-valued function which is positive in those regions of input space where the data predominantly lies and negative elsewhere.

Recently kernel methods have been applied to this problem [4]. In this approach data is implicitly mapped to a high-dimensional space called feature space [13]. Suppose the data points in input space are x_i (with i = 1, ..., m) and the mapping is x_i -> phi(x_i); then, in the span of {phi(x_i)}, we can expand a vector w = sum_j alpha_j phi(x_j). Hence we can define separating hyperplanes in feature space by w . phi(x) + b = 0. We will refer to w . phi(x_i) + b as the margin, which will be positive on one side of the separating hyperplane and negative on the other. Thus we can also define a decision function:

  f(z) = sign(w . phi(z) + b)   (1)

where z is a new data point. The data appears in the form of an inner product in feature space, so we can implicitly define feature space by our choice of kernel function:

  K(x_i, x_j) = phi(x_i) . phi(x_j)   (2)

A number of choices for the kernel are possible, for example, RBF kernels:

  K(x_i, x_j) = e^{-||x_i - x_j||^2 / 2 sigma^2}   (3)

With the given kernel the decision function is therefore given by:

  f(z) = sign(sum_i alpha_i K(x_i, z) + b)   (4)

One approach to novelty detection is to find a hypersphere in feature space with a minimal radius R and centre a which contains most of the data: novel test points lie outside the boundary of this hypersphere [3, 12]. This approach to novelty detection was proposed by Tax and Duin [10] and successfully used on real-life applications [11].
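As a concrete illustration of equations (3) and (4), the following minimal sketch evaluates the decision function for a new point, assuming the coefficients alpha and the bias b have already been obtained by training; it is not tied to any particular solver.

    import numpy as np

    def rbf_kernel(x, y, sigma):
        # RBF kernel of equation (3)
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

    def decision(z, train_points, alpha, b, sigma):
        # Decision function of equation (4): +1 for normal, -1 for novel
        margin = sum(a * rbf_kernel(x, z, sigma)
                     for a, x in zip(alpha, train_points)) + b
        return 1 if margin >= 0 else -1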
The effect of outliers is reduced by using slack variables xi_i to allow for datapoints outside the sphere, and the task is to minimise the volume of the sphere and the number of datapoints outside it, i.e.

  min [R^2 + lambda sum_i xi_i]   s.t.   (x_i - a) . (x_i - a) <= R^2 + xi_i,   xi_i >= 0   (5)

Since the data appears in the form of inner products, kernel substitution can be applied and the learning task can be reduced to a quadratic programming problem. An alternative approach has been developed by Scholkopf et al. [7]. Suppose we restricted our attention to RBF kernels (3): then the data lies on the surface of a hypersphere in feature space, since phi(x) . phi(x) = K(x, x) = 1. The objective is therefore to separate off the surface region containing data from the region containing no data. This is achieved by constructing a hyperplane which is maximally distant from the origin, with all datapoints lying on the opposite side from the origin and such that the margin is positive. The learning task in dual form involves minimisation of:

  min W(alpha) = (1/2) sum_{j,k=1}^{m} alpha_j alpha_k K(x_j, x_k)   s.t.   0 <= alpha_i <= C,   sum_{i=1}^{m} alpha_i = 1.   (6)

However, the origin plays a special role in this model. As the authors point out [9], this is a disadvantage since the origin effectively acts as a prior for where the class of abnormal instances is assumed to lie. In this paper we avoid this problem: rather than repelling the hyperplane away from an arbitrary point outside the data distribution, we instead try to attract the hyperplane towards the centre of the data distribution.

In this paper we will outline a new algorithm for novelty detection which can be easily implemented using linear programming (LP) techniques. As we illustrate in section 3, it performs well in practice on datasets involving the detection of abnormalities in medical data and fault detection in condition monitoring.

2 The Algorithm

For the hard margin case (see Figure 1) the objective is to find a surface in input space which wraps around the data clusters: anything outside this surface is viewed as abnormal. This surface is defined as the level set, f(z) = 0, of some nonlinear function. In feature space, f(z) = sum_i alpha_i K(z, x_i) + b, this corresponds to a hyperplane which is pulled onto the mapped datapoints, with the restriction that the margin always remains positive or zero. We make the fit of this nonlinear function or hyperplane as tight as possible by minimizing the mean value of the output of the function, i.e., sum_i f(x_i). This is achieved by minimising:

  W(alpha, b) = sum_{i=1}^{m} ( sum_{j=1}^{m} alpha_j K(x_i, x_j) + b )   (7)

subject to:

  sum_{j=1}^{m} alpha_j K(x_i, x_j) + b >= 0   (8)

  sum_{i=1}^{m} alpha_i = 1,   alpha_i >= 0   (9)

The bias b is just treated as an additional parameter in the minimisation process, though unrestricted in sign. The added constraints (9) on alpha bound the class of models to be considered: we don't want to consider simple linear rescalings of the model. These constraints amount to a choice of scale for the weight vector normal to the hyperplane in feature space and hence do not impose a restriction on the model. Also, these constraints ensure that the problem is well-posed and that an optimal solution with alpha != 0 exists. Other constraints on the class of functions are possible, e.g. ||alpha||_1 = 1 with no restriction on the sign of alpha_i.

Many real-life datasets contain noise and outliers. To handle these we can introduce a soft margin, in analogy to the usual approach used with support vector machines. In this case we minimise:

  W(alpha, b) = sum_{i=1}^{m} ( sum_{j=1}^{m} alpha_j K(x_i, x_j) + b ) + lambda sum_{i=1}^{m} xi_i   (10)

subject to:

  sum_{j=1}^{m} alpha_j K(x_i, x_j) + b >= -xi_i,   xi_i >= 0   (11)

and constraints (9). The parameter lambda controls the extent of margin errors (larger lambda means fewer outliers are ignored: lambda -> infinity corresponds to the hard margin limit).
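The hard and soft margin formulations above are linear in (alpha, b, xi), so any off-the-shelf LP solver applies. The sketch below encodes the soft-margin problem (10)-(11) with constraint (9) for a precomputed kernel matrix K; it is a minimal illustration using scipy, not the simplex/column-generation implementation described next.

    import numpy as np
    from scipy.optimize import linprog

    def train_lp_novelty(K, lam):
        # Variables stacked as x = (alpha_1..alpha_m, b, xi_1..xi_m)
        m = K.shape[0]
        # Objective (10): sum_i (sum_j alpha_j K_ij + b) + lam * sum_i xi_i
        c = np.concatenate([K.sum(axis=0), [m], lam * np.ones(m)])
        # Constraints (11): sum_j alpha_j K_ij + b + xi_i >= 0 for all i
        A_ub = -np.hstack([K, np.ones((m, 1)), np.eye(m)])
        b_ub = np.zeros(m)
        # Normalisation constraint (9): sum_i alpha_i = 1
        A_eq = np.concatenate([np.ones(m), [0.0], np.zeros(m)])[None, :]
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)] + [(0, None)] * m  # b is free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        alpha, b = res.x[:m], res.x[m]
        return alpha, b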
The above problem can be easily solved for problems with thousands of points using standard simplex or interior point algorithms for linear programming. With the addition of column generation techniques, these same approaches can be adopted for very large problems in which the kernel matrix exceeds the capacity of main memory. Column generation algorithms incrementally add and drop columns, each corresponding to a single kernel function, until optimality is reached. Such approaches have been successfully applied to other support vector problems [6, 2]. Basic simplex algorithms were sufficient for the problems considered in this paper, so we defer a listing of the code for column generation to a later paper, together with experiments on large datasets [1].

3 Experiments

Artificial datasets. Before considering experiments on real-life data we will first illustrate the performance of the algorithm on some artificial datasets. In Figure 1 the algorithm places a boundary around two data clusters in input space: a hard margin was used with RBF kernels and sigma = 0.2. In Figure 2 four outliers lying outside a single cluster are ignored when the system is trained using a soft margin. In Figure 3 we show the effect of using a modified RBF kernel K(x_i, x_j) = e^{-||x_i - x_j|| / 2 sigma^2}. This kernel and the one in (3) use a measure of x - y; thus K(x, x) is constant and the points lie on the surface of a hypersphere in feature space. As a consequence, a hyperplane slicing through this hypersphere gives a closed boundary separating normal and abnormal regions in input space; however, we found other choices of kernels may not produce closed boundaries in input space.

Figure 1: The solution in input space for the hyperplane minimising W(alpha, b), equation (7). A hard margin was used with RBF kernels trained using sigma = 0.2.

Medical Diagnosis. For detection of abnormalities in medical data we investigated performance on the Biomed dataset [5] from the Statlib data archive [14].

Figure 2: In this example 4 outliers are ignored by using a soft margin (with lambda = 10.0). RBF kernels were used with sigma = 0.2.

Figure 3: The solution in input space for a modified RBF kernel K(x_i, x_j) = e^{-||x_i - x_j|| / 2 sigma^2} with sigma = 0.5.

This dataset consisted of 194 observations, each with 4 attributes corresponding to measurements made on blood samples (15 observations with missing values were removed). We trained the system on 100 randomly chosen normal observations from healthy patients. The system was then tested on 27 normal observations and 67 observations which exhibited abnormalities due to the presence of a rare genetic disease. In Figure 4 we plot the results for training the novelty detector using a hard margin and with RBF kernels. This plot gives the error rate (as a percentage) on the y-axis versus sigma on the x-axis, with the solid curve giving the performance on normal observations in the test data and the dashed curve giving performance on abnormal observations. Clearly, when sigma is very small the system puts a Gaussian of narrow width around each data point and hence all test data is labelled as abnormal. As sigma increases the model improves, and at sigma = 1.1 all but 2 of the normal test observations are correctly labelled and 57 of the 67 abnormal observations are correctly labelled.
As sigma increases to sigma = 10.0 the solution has 1 normal test observation incorrectly labelled and 29 abnormal observations correctly labelled. The kernel parameter sigma is therefore crucial in determining the balance between normality and abnormality.

Figure 4: The error rate (as a percentage) on the y-axis, versus sigma on the x-axis. The solid curve gives the performance on normal observations in the test data and the dashed curve gives performance on abnormal observations.

Future research on model selection may indicate a good choice for the kernel parameter. However, if the dataset is large enough and some abnormal events are known, then a validation study can be used to determine the kernel parameter, as we illustrate with the application below. Interestingly, if we use an ensemble of models instead, with sigma chosen across a range, then the relative proportion indicating abnormality gives an approximate measure of the confidence in the novelty of an observation: 29 observations are abnormal for all sigma in Figure 4 and hence must be abnormal with high confidence.

Condition Monitoring. Fault detection is an important generic problem in the condition monitoring of machinery: failure to detect faults can lead to machine damage, while an oversensitive fault detection system can lead to expensive and unnecessary downtime. As an example we will consider detection of 4 classes of fault in ball-bearing cages, which are often safety-critical components in machines, vehicles and other systems such as aircraft wing flaps. In this study we used a dataset from the Structural Integrity and Damage Assessment Network [15]. Each instance consisted of 2048 samples of acceleration taken with a Bruel and Kjaer vibration analyser. After pre-processing with a discrete Fast Fourier Transform, each such instance had 32 attributes characterising the measured signals (an illustrative reduction is sketched at the end of this subsection). The dataset consisted of 5 categories: normal data corresponding to measurements made from new ball-bearings, and 4 types of abnormality which we will call type 1 (outer race completely broken), type 2 (broken cage with one loose element), type 3 (damaged cage with four loose elements) and type 4 (a badly worn ball-bearing with no evident damage). To train the system we used 913 normal instances on new ball-bearings. Using RBF kernels, the best value of sigma (sigma = 320.0) was found using a validation study consisting of 913 new normal instances, 747 instances of type 1 faults and 996 instances of type 2 faults. On new test data 98.7% of normal instances were correctly labelled (913 instances), 100% of type 1 instances were correctly labelled (747 instances) and 53.3% of type 2 instances were correctly labelled (996 instances). Of course, with ample normal and abnormal data this problem could also be approached using a binary classifier instead. Thus, to evaluate performance on totally unseen abnormalities, we tested the novelty detector on type 3 and type 4 faults (with 996 instances of each). The novelty detector labelled 28.3% of type 3 and 25.5% of type 4 instances as abnormal, which was statistically significant against a background of 1.3% errors on normal data.
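The spectral pre-processing step admits a simple illustration. The paper does not specify how the 2048-sample traces were reduced to 32 attributes, so the band-averaging below is an assumption; only the input and output sizes are taken from the text.

    import numpy as np

    def fft_features(signal, n_features=32):
        # Magnitude spectrum of a 2048-sample acceleration trace; drop the DC bin
        spectrum = np.abs(np.fft.rfft(signal))[1:]
        # Assumed reduction: average the spectrum into n_features equal bands
        bands = np.array_split(spectrum, n_features)
        return np.array([band.mean() for band in bands])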
4 Conclusion

In this paper we have presented a new kernelised novelty detection algorithm which uses linear programming techniques rather than quadratic programming. The algorithm is simple, easy to implement with standard LP software packages, and it performs well in practice. The algorithm is also very fast in execution: for the 913 training examples used in the experiments on condition monitoring, the model was constructed in about 4 seconds using a Silicon Graphics Origin 200.

References

[1] K. Bennett and C. Campbell. A Column Generation Algorithm for Novelty Detection. Preprint in preparation.
[2] K. Bennett, A. Demiriz and J. Shawe-Taylor. A Column Generation Algorithm for Boosting. In Proc. of Intl. Conf. on Machine Learning. Stanford, CA, 2000.
[3] C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2, p. 121-167, 1998.
[4] C. Campbell. An Introduction to Kernel Methods. In: Radial Basis Function Networks: Design and Applications, R.J. Howlett and L.C. Jain (eds). Physica Verlag, Berlin, to appear.
[5] L. Cox, M. Johnson and K. Kafadar. Exposition of Statistical Graphics Technology. ASA Proceedings of the Statistical Computation Section, p. 55-56, 1982.
[6] O.L. Mangasarian and D. Musicant. Massive Support Vector Regression. Data Mining Institute Technical Report 99-02, University of Wisconsin-Madison, 1999.
[7] B. Scholkopf, J.C. Platt, J. Shawe-Taylor, A.J. Smola and R.C. Williamson. Estimating the support of a high-dimensional distribution. Microsoft Research Technical Report MSR-TR-99-87, 1999.
[8] B. Scholkopf, R. Williamson, A. Smola and J. Shawe-Taylor. SV estimation of a distribution's support. In Neural Information Processing Systems, 2000, to appear.
[9] B. Scholkopf, J. Platt and A. Smola. Kernel Method for Percentile Feature Extraction. Microsoft Technical Report MSR-TR-2000-22.
[10] D. Tax and R. Duin. Data domain description by Support Vectors. In Proceedings of ESANN99, ed. M. Verleysen, D-Facto Press, Brussels, p. 251-256, 1999.
[11] D. Tax, A. Ypma and R. Duin. Support vector data description applied to machine vibration analysis. In: M. Boasson, J. Kaandorp, J. Tonino and M. Vosselman (eds.), Proc. 5th Annual Conference of the Advanced School for Computing and Imaging (Heijen, NL, June 15-17), p. 398-405, 1999.
[12] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995.
[13] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[14] http://lib.stat.cmu.edu/datasets
[15] http://www.sidanet.org
A Support Vector Method for Clustering

Asa Ben-Hur
Faculty of IE and Management, Technion, Haifa 32000, Israel

Hava T. Siegelmann
Lab for Inf. & Decision Systems, MIT, Cambridge, MA 02139, USA

David Horn
School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel

Vladimir Vapnik
AT&T Labs Research, 100 Schultz Dr., Red Bank, NJ 07701, USA

Abstract

We present a novel method for clustering using the support vector machine approach. Data points are mapped to a high-dimensional feature space, where support vectors are used to define a sphere enclosing them. The boundary of the sphere forms in data space a set of closed contours containing the data. Data points enclosed by each contour are defined as a cluster. As the width parameter of the Gaussian kernel is decreased, these contours fit the data more tightly and splitting of contours occurs. The algorithm works by separating clusters according to valleys in the underlying probability distribution, and thus clusters can take on arbitrary geometrical shapes. As in other SV algorithms, outliers can be dealt with by introducing a soft margin constant, leading to smoother cluster boundaries. The structure of the data is explored by varying the two parameters. We investigate the dependence of our method on these parameters and apply it to several data sets.

1 Introduction

Clustering is an ill-defined problem for which there exist numerous methods [1, 2]. These can be based on parametric models or can be non-parametric. Parametric algorithms are usually limited in their expressive power, i.e. a certain cluster structure is assumed. In this paper we propose a non-parametric clustering algorithm based on the support vector approach [3], which is usually employed for supervised learning. In the papers [4, 5] an SV algorithm for characterizing the support of a high-dimensional distribution was proposed. As a by-product of the algorithm one can compute a set of contours which enclose the data points. These contours were interpreted by us as cluster boundaries [6]. In [6] the number of clusters was predefined, and the value of the kernel parameter was not determined as part of the algorithm. In this paper we address these issues.

The first stage of our Support Vector Clustering (SVC) algorithm consists of computing the sphere with minimal radius which encloses the data points when mapped to a high-dimensional feature space. This sphere corresponds to a set of contours which enclose the points in input space. As the width parameter of the Gaussian kernel function that represents the map to feature space is decreased, this contour breaks into an increasing number of disconnected pieces. The points enclosed by each separate piece are interpreted as belonging to the same cluster. Since the contours characterize the support of the data, our algorithm identifies valleys in its probability distribution. When we deal with overlapping clusters we have to employ a soft margin constant, allowing for "outliers". In this parameter range our algorithm is similar to the scale-space clustering method [7]. The latter is based on a Parzen window estimate of the probability density, using a Gaussian kernel and identifying cluster centers with peaks of the estimator.

2 Describing Cluster Boundaries with Support Vectors

In this section we describe an algorithm for representing the support of a probability distribution by a finite data set, using the formalism of support vectors [5, 4]. It forms the basis of our clustering algorithm.
Let {x_i} be a data set of N points in X, the input space, with X a subset of R^d. Using a nonlinear transformation Phi from X to some high-dimensional feature space, we look for the smallest enclosing sphere of radius R, described by the constraints ||Phi(x_i) - a||^2 <= R^2 for all i, where ||.|| is the Euclidean norm and a is the center of the sphere. Soft constraints are incorporated by adding slack variables xi_j:

  ||Phi(x_j) - a||^2 <= R^2 + xi_j   (1)

with xi_j >= 0. To solve this problem we introduce the Lagrangian

  L = R^2 - sum_j (R^2 + xi_j - ||Phi(x_j) - a||^2) beta_j - sum_j xi_j mu_j + C sum_j xi_j   (2)

where beta_j >= 0 and mu_j >= 0 are Lagrange multipliers, C is a constant, and C sum_j xi_j is a penalty term. Setting to zero the derivative of L with respect to R, a and xi_j, respectively, leads to

  sum_j beta_j = 1   (3)

  a = sum_j beta_j Phi(x_j)   (4)

  beta_j = C - mu_j   (5)

The KKT complementarity conditions [8] result in

  xi_j mu_j = 0   (6)

  (R^2 + xi_j - ||Phi(x_j) - a||^2) beta_j = 0   (7)

A point x_i with xi_i > 0 is outside the feature-space sphere (cf. equation 1). Equation (6) states that such points have mu_i = 0, so from equation (5) beta_i = C. A point with xi_i = 0 is inside or on the surface of the feature-space sphere. If its beta_i != 0 then equation (7) implies that the point x_i is on the surface of the feature-space sphere. In this paper any point with 0 < beta_i < C will be referred to as a support vector or SV; points with beta_i = C will be called bounded support vectors or bounded SVs. This is to emphasize the role of the support vectors as delineating the boundary. Note that when C >= 1 no bounded SVs exist because of the constraint (3).

Using these relations we may eliminate the variables R, a and mu_j, turning the Lagrangian into the Wolfe dual, which is a function of the variables beta_j:

  W = sum_j Phi(x_j)^2 beta_j - sum_{i,j} beta_i beta_j Phi(x_i) . Phi(x_j)   (8)

Since the variables mu_j don't appear in the Lagrangian they may be replaced with the constraints:

  0 <= beta_j <= C,   j = 1, ..., N   (9)

We follow the SV method and represent the dot products Phi(x_i) . Phi(x_j) by an appropriate Mercer kernel K(x_i, x_j). Throughout this paper we use the Gaussian kernel

  K(x_i, x_j) = e^{-q ||x_i - x_j||^2}   (10)

with width parameter q. As noted in [5], polynomial kernels do not yield tight contour representations of a cluster. The Lagrangian W is now written as:

  W = sum_j K(x_j, x_j) beta_j - sum_{i,j} beta_i beta_j K(x_i, x_j)   (11)

At each point x we define its distance, when mapped to feature space, from the center of the sphere:

  R^2(x) = ||Phi(x) - a||^2   (12)

In view of (4) and the definition of the kernel we have:

  R^2(x) = K(x, x) - 2 sum_j beta_j K(x_j, x) + sum_{i,j} beta_i beta_j K(x_i, x_j)   (13)

The radius of the sphere is:

  R = {R(x_i) | x_i is a support vector}   (14)

In practice, one takes the average over all support vectors. The contour that encloses the cluster in data space is the set

  {x | R(x) = R}   (15)

A data point x_i is a bounded SV if R(x_i) > R. Note that since we use a Gaussian kernel, for which K(x, x) = 1, our feature space is a unit sphere; thus its intersection with a sphere of radius R < 1 can also be defined as an intersection by a hyperplane, as in conventional SVM. The shape of the enclosing contours in input space is governed by two parameters, q and C. Figure 1 demonstrates that, as q is increased, the enclosing contours form tighter fits to the data. Figure 2 describes a situation that necessitated introduction of outliers, or bounded SVs, by allowing for C < 1. As C is decreased not only does the number of bounded SVs increase, but their influence on the shape of the cluster contour decreases (see also [6]). The number of support vectors depends on both q and C.
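Equations (13) and (14) translate directly into code. The sketch below is a minimal numpy illustration rather than our implementation: it computes R^2(x) and the sphere radius R given the dual coefficients beta obtained by optimising the Wolfe dual; the tolerance used to identify support vectors is our own choice.

    import numpy as np

    def gaussian_kernel(X, Y, q):
        # K(x, y) = exp(-q ||x - y||^2), equation (10)
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-q * d2)

    def sphere_dist_sq(x, X, beta, q):
        # R^2(x) of equation (13); K(x, x) = 1 for the Gaussian kernel
        k_x = gaussian_kernel(x[None, :], X, q)[0]
        K = gaussian_kernel(X, X, q)
        return 1.0 - 2.0 * beta @ k_x + beta @ K @ beta

    def sphere_radius(X, beta, q, C, tol=1e-7):
        # R of equation (14): average R(x_i) over SVs, i.e. 0 < beta_i < C
        sv = (beta > tol) & (beta < C - tol)
        return np.mean([np.sqrt(sphere_dist_sq(x, X, beta, q)) for x in X[sv]])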
For fixed q, as C is decreased, the number of SVs decreases, since some of them turn into bounded SVs and the resulting shapes of the contours become smoother. We denote by n_sv and n_bsv the number of support vectors and bounded support vectors, respectively, and note the following result:

Proposition 2.1 [4]

  n_bsv + n_sv >= 1/C,   n_bsv < 1/C   (16)

This is an immediate consequence of the constraints (3) and (9). In fact, we have found empirically that

  n_bsv(q, C) = max(0, 1/C - n_0)   (17)

where n_0 > 0 may be a function of q and N. This was observed for artificial and real data sets. Moreover, we have also observed that

  n_sv = a/C + b   (18)

where a and b are functions of q and N. The linear behavior of n_bsv continues until n_bsv + n_sv = N.

3 Support Vector Clustering (SVC)

In this section we go through a set of examples demonstrating the use of SVC. We begin with a data set in which the separation into clusters can be achieved without outliers, i.e. C = 1. As seen in Figure 1, as q is increased the shape of the boundary curves in data space varies. At several q values the enclosing contour splits, forming an increasing number of connected components. We regard each component as representing a single cluster. While in this example clustering looks hierarchical, this is not strictly true in general.

Figure 1: Data set contains 183 points. A Gaussian kernel was used with C = 1.0. SVs are surrounded by small circles. (a): q = 1 (b): q = 20 (c): q = 24 (d): q = 48.

In order to label data points into clusters we need to identify the connected components. We define an adjacency matrix A_ij between pairs of points x_i and x_j:

  A_ij = 1 if, for all y on the line segment connecting x_i and x_j, R(y) <= R; 0 otherwise.   (19)

Clusters are then defined as the connected components of the graph induced by A. This labeling procedure is justified by the observation that nearest neighbors in data space can be connected by a line segment that is contained in the high-dimensional sphere. Checking the line segment is implemented by sampling a number of points on the segment (a value of 10 was used in the numerical experiments). Note that bounded SVs are not classified by this procedure; they can be left unlabeled, or classified, e.g., according to the cluster to which they are closest. We adopt the latter approach.

The cluster description algorithm provides an estimate of the support of the underlying probability distribution [4]. Thus we distinguish between clusters according to gaps in the support of the underlying probability distribution. As q is increased the support is characterized by more detailed features, enabling the detection of smaller gaps. Too high a value of q may lead to overfitting (see Figure 2(a)), which can be handled by allowing for bounded SVs (Figure 2(b)): letting some of the data points be bounded SVs creates smoother contours, and facilitates contour splitting at low values of q.

3.1 Overlapping clusters

In many data sets clusters are strongly overlapping, and clear separating valleys as in Figures 1 and 2 are not present. Our algorithm is useful in such cases as well, but a slightly different interpretation is required. First we note that equation (15) for the enclosing contour can be expressed as {x | sum_i beta_i K(x_i, x) = rho}, where rho is determined by the value of this sum on the support vectors. The set of points enclosed by the contour is

  {x | sum_i beta_i K(x_i, x) > rho}.
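The labeling rule of equation (19) can be implemented by sampling points along each segment and extracting connected components, reusing the helpers sketched after equation (14). Again this is an illustrative O(N^2) version with 10 sample points per segment, as in our numerical experiments, not optimised code.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def label_clusters(X, beta, q, C, n_samples=10):
        R = sphere_radius(X, beta, q, C)          # equation (14)
        N = len(X)
        A = np.zeros((N, N), dtype=bool)
        ts = np.linspace(0.0, 1.0, n_samples)
        for i in range(N):
            for j in range(i + 1, N):
                # A_ij = 1 iff R(y) <= R for all sampled y on the segment x_i -> x_j
                seg = ((1 - t) * X[i] + t * X[j] for t in ts)
                A[i, j] = A[j, i] = all(
                    sphere_dist_sq(y, X, beta, q) <= R ** 2 for y in seg)
        n_clusters, labels = connected_components(csr_matrix(A), directed=False)
        return labels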
Figure 2: Clustering with and without outliers. The inner cluster is composed of 50 points generated by a Gaussian distribution. The two concentric rings contain 150/300 points, generated by a uniform angular distribution and a radial Gaussian distribution. (a) The rings cannot be distinguished when C = 1. Shown here is q = 3.5, the lowest q value that leads to separation of the inner cluster. (b) Outliers allow easy clustering. The parameters are 1/(NC) = 0.3 and q = 1.0. SVs are surrounded by small ellipses.

In the extreme case when almost all data points are bounded SVs, the sum sum_i beta_i K(x_i, x) is approximately

  p(x) = (1/N) sum_i K(x_i, x).   (20)

This is recognized as a Parzen window estimate of the density function (up to a normalization factor, if the kernel is not appropriately normalized). The contour will then enclose a small number of points which correspond to the maximum of the Parzen-estimated density. Thus in the high bounded-SVs regime we find a dense core of the probability distribution. In this regime our algorithm is closely related to an algorithm proposed by Roberts [7]. He defines cluster centers as maxima of the Parzen window estimate p(x). He shows that in his approach, which goes by the name of scale-space clustering, as q is increased the number of maxima increases. The Gaussian kernel plays an important role in his analysis: it is the only kernel for which the number of maxima (hence the number of clusters) is a monotonically non-decreasing function of q (see [7] and references therein). The advantage of SVC over Roberts' method is that we find a region, rather than just a peak, and that instead of solving a problem with many local maxima, we identify the core regions by an SV method with a globally optimal solution. We have found examples where a local maximum is hard to identify by Roberts' method.

3.2 The iris data

We ran SVC on the iris data set [9], which is a standard benchmark in the pattern recognition literature. It can be obtained from the UCI repository [10]. The data set contains 150 instances, each containing four measurements of an iris flower. There are three types of flowers, represented by 50 instances each. We clustered the data in a two-dimensional subspace formed by the first two principal components. One of the clusters is linearly separable from the other two at q = 0.5 with no bounded SVs. The remaining two clusters have significant overlap, and were separated at q = 4.2, 1/(NC) = 0.55, with 4 misclassifications. Clustering results for an increasing number of principal components are reported in Table 1.

Table 1: Performance of SVC on the iris data for a varying number of principal components.

  Principal components   q     1/(NC)   SVs   bounded SVs   misclassified
  1-2                    4.2   0.55     20    72            4
  1-3                    7.0   0.70     23    94            4
  1-4                    9.0   0.75     34    96            14
For comparison we quote results obtained by other non-parametric clustering algorithms: the information theoretic approach of [11] leads to 5 miscalssification and the SPC algorithm of [12] has 15 misclassifications. 4 Varying q and C SVC was described for fixed values of q and C, and a method for exploring parameter space is required. We can work with SVC in an agglomerative fashion, starting from a large value of q, where each point is in a different cluster, and decreasing q until there is a single cluster. Alternatively we may use the divisive approach, by starting from a small value of q and increasing it. The latter seems more efficient since meaningful clustering solutions (see below for a definition of this concept), usually have a small number of clusters. The following is a qualitative schedule for varying the parameters. One may start with a small value of q where only one cluster occurs: q = 1/ maxi,j Ilxi - Xj 112. q is then increased to look for values at which a cluster contour splits. When single point clusters start to break off or a large number of support vectors is obtained (overfitting, as in Figure 2(a? I/C is increased. An important issue in the divisive approach is the decision when to stop dividing the clusters. An algorithm for this is described in [13]. After clustering the data they partition the data into two sets with some sizable overlap, perform clustering on these smaller data sets and compute the average overlap between the two clustering solutions for a number of partitions. Such validation can be performed here as well. However, we believe that in our SV setting it is natural to use the number of support vectors as an indication of a meaningful solution, since their (small) number is an indication of good generalization. Therefore we should stop the algorithm when the fraction of SVs exceeds some threshold. If the clustering solution is stable with respect to changes in the parameters this is also an indication of meaningful clustering. The quadratic programming problem of equation (2) can be solved by the SMO algorithm [14] which was recently proposed as an efficient tool for solving such problems in SVM training. Some minor modifications are required to adapt it to the problem that we solve here [4]. Benchmarks reported in [14] show that this algorithm converges in most cases in 0(N2) kernel evaluations. The complexity of the labeling part of the algorithm is 0(N 2d), so that the overall complexity is 0(N 2 d). We also note that the memory requirements of the SMO algorithm are low - it can be implemented using 0(1) memory at the cost of a decrease in efficiency, which makes our algorithm useful even for very large data-sets. 5 Summary The SVC algorithm finds clustering solutions together with curves representing their boundaries via a description of the support or high density regions of the data. As such, it separates between clusters according to gaps or low density regions in the probability distribution of the data, and makes no assumptions on cluster shapes in input space. SVC has several other attractive features : the quadratic programming problem of the cluster description algorithm is convex and has a globally optimal solution, and, like other SV algorithms, SVC can deal with noise or outliers by a margin parameter, making it robust with respect to noise in the data. References [1] A.K. Jain and R.C. Dubes. Algorithmsfor clustering data . Prentice Hall, Englewood Cliffs, NJ, 1988. [2] K. Fukunaga. 
Introduction to Statistical Pattern Recognition. Academic Press, San Diego, CA, 1990. [3] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995. [4] B. Sch6lkopf, R.C. Williamson, AJ. Smola, and J. Shawe-Taylor. SV estimation of a distribution's support. In Neural Information Processing Systems, 2000. [5] D.MJ. Tax and R.P.W. Duin. Support vector domain description. Pattern Recognition Letters, 20:1991-1999, 1999. [6] A. Ben-Hur, D. Horn, H.T. Siegelmann, and V. Vapnik. A support vector clustering method. In International Conference on Pattern Recognition, 2000. [7] SJ. Roberts. Non-parametric unsupervised cluster analysis. Pattern Recognition, 30(2):261-272,1997. [8] R. Fletcher. Practical Methods of Optimization. Wiley-Interscience, Chichester, 1987. [9] R.A. Fisher. The use of multiple measurements in taxonomic problems. Annual Eugenics, 7:179-188, 1936. [10] C.L. Blake and CJ. Merz. DCI repository of machine learning databases, 1998. [11] N. Tishby and N. Slonim. Data clustering by Markovian relaxation and the information bottleneck method. In Neural Information Processing Systems, 2000. [12] M. Blatt, S. Wiseman, and E. Domany. Data clustering using a model granular magnet. Neural Computation, 9:1804-1842,1997. [13] S. Dubnov, R. EI-Yaniv, Y. Gdalyahu, E. Schneidman, N. Tishby, and G. Yona. A new nonparametric pairwise clustering algorithm. Submitted to Machine Learning. [14] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. SchOlkopf, C. J. C. Burges, and A. 1. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208, Cambridge, MA, 1999. MIT Press.