# Weight-Balancing Fixes And Flows For Deep Learning

Lawrence K. Saul lsaul@flatironinstitute.org Flatiron Institute, Center for Computational Mathematics 162 Fifth Avenue, New York, NY 10010 Reviewed on OpenReview: *https://openreview.net/forum?id=uaHyXxyp2r*

## Abstract

Feedforward neural networks with homogeneous activation functions possess an internal symmetry: the functions they compute do not change when the incoming and outgoing weights at any hidden unit are rescaled by reciprocal positive values. This paper makes two contributions to our understanding of these networks. The first is to describe a simple procedure, or fix, for balancing the weights in these networks: this procedure computes multiplicative rescaling factors—one at each hidden unit—that rebalance the weights of these networks without changing the end-to-end functions that they compute. Specifically, given an initial network with arbitrary weights, the procedure determines the functionally equivalent network whose weight matrix is of minimal ℓp,q-norm; the weights at each hidden unit are said to be balanced when this norm is stationary with respect to rescaling transformations. The optimal rescaling factors are computed in an iterative fashion via simple multiplicative updates, and the updates are notable in that (a) they do not require the tuning of learning rates, (b) they operate in parallel on the rescaling factors at all hidden units, and (c) they converge monotonically to a global minimizer of the ℓp,q-norm. The paper's second contribution is to analyze the optimization landscape for learning in these networks.

We suppose that the network's loss function consists of two terms—one that is invariant to rescaling transformations, measuring predictive accuracy, and another (a regularizer) that breaks this invariance, penalizing large weights. We show how to derive a weight-balancing flow such that the regularizer remains minimal with respect to rescaling transformations as the weights descend in the loss function. These dynamics reduce to an ordinary gradient flow for ℓ2-norm regularization, but not otherwise. In this way our analysis suggests a canonical pairing of alternative flows and regularizers.

## 1 Introduction

Many recent studies of deep learning have focused on the important role of symmetries (Bronstein et al.,
2021; Kunin et al., 2021; Tanaka & Kunin, 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021). In large part these studies were inspired by the role that symmetries play in our understanding of the physical world (Anderson, 1972; Zee, 2016). Of particular interest are symmetries that arise when a model is formulated or expressed in terms of more parameters than its essential degrees of freedom. In physics these symmetries arise from so-called *gauge* degrees of freedom (Gross, 1992), while in deep learning they are present in many popular models of overparameterized networks. One such model in machine learning is a feedforward network with rectified linear hidden units. Such a network is specified by the values of its weights, but the function it computes does not change when the incoming and outgoing weights at any hidden unit are inversely rescaled by some positive value (Glorot et al., 2011); see Fig. 1. This particular symmetry of deep learning has already led to many important findings. For example, it is known that this symmetry gives rise to a conservation law: at each hidden unit, there is a certain balance of its incoming and outgoing weights that does not change when these networks are trained by gradient flow (i.e., gradient descent in the limit of an infinitesimally small learning rate) (Du et al., 2018). From this conservation law follows another important observation: if the weights are initially

![1_image_0.png](1_image_0.png)

Figure 1: A rectified linear unit (ReLU) has the same effect if its incoming weights and bias b are rescaled by some factor a > 0 while its outgoing weights are rescaled by the inverse factor a⁻¹. When a > 1, the rescaling increases the magnitudes of incoming weights and decreases the magnitudes of outgoing weights; when a < 1, the effects are reversed.
balanced across layers, then they remain so during training, a key condition for proving certain convergence results (Arora et al., 2019). It is also possible, by analyzing the synaptic flows across adjacent layers, to devise more powerful pruning algorithms (Tanaka et al., 2020). Finally, a number of authors have proposed more sophisticated forms of learning that are invariant to these rescaling transformations (Neyshabur et al.,
2015a; Meng et al., 2019) or that break this invariance in purposeful ways (Badrinarayanan et al., 2015; Stock et al., 2019; Armenta et al., 2021; Zhao et al., 2022).

These latter works highlight an important distinction: though rescaling transformations do not change the functions computed by these networks, they may affect other aspects of learning. For example, unbalanced networks may take longer to converge and/or converge to altogether different solutions. It is therefore important to understand the different criteria that can be used to break this invariance.

## 1.1 Inspiration From Physics

To proceed, we consider how similar ideas have been developed to study the physical world. The simplest gauge symmetry arises in classical electrodynamics (Jackson, 2002). The gauge degrees of freedom in this setting may be *fixed* to highlight certain physics of interest. Two well-known choices are the Coulomb gauge, which minimizes the volume integral of the squared vector potential (Gubarev et al., 2001), and the Lorenz gauge, which simplifies certain wave equations. It is natural to ask if there are analogous fixes for deep neural networks—fixes, for instance, that minimize the norms of weights or simplify the dynamics of learning.

With these goals in mind, this paper investigates a family of norm-minimizing rescaling transformations for feedforward networks with homogeneous activation functions. These transformations are designed to minimize the entry-wise ℓp,q-norm (with p, q ≥ 1) of the network's weight matrix W, defined as

$$\|\mathbf{W}\|_{p,q}=\left(\sum_{i}\left(\sum_{j}|W_{ij}|^{p}\right)^{\frac{q}{p}}\right)^{\frac{1}{q}},\tag{1}$$
without changing the end-to-end function that the network computes (Neyshabur et al., 2015a). Here Wij is the incoming weight at unit i from unit j. A particular transformation is specified by a set of multiplicative rescaling factors, one at each hidden unit of the network. The first main contribution of this paper is to show how to obtain rescaling factors that minimize the norm in eq. (1). More concretely, we derive simple multiplicative updates to compute these factors (Algorithm 1), and we prove that these updates converge monotonically to fixed points that represent minimum-norm solutions (Theorem 1). Notably, these multiplicative updates are parameter-free in that they do not require the setting or tuning of learning rates.
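
For concreteness, the norm in eq. (1) is a few lines of code. The sketch below is illustrative rather than canonical; the helper name `lpq_norm` is ours, not the paper's.

```python
import numpy as np

def lpq_norm(W: np.ndarray, p: float, q: float) -> float:
    """Entry-wise l_{p,q}-norm of eq. (1): each row of W is reduced by
    an l_p sum, and the resulting vector is reduced by an l_q sum."""
    row_sums = np.sum(np.abs(W) ** p, axis=1)       # sum_j |W_ij|^p
    return float(np.sum(row_sums ** (q / p)) ** (1.0 / q))
```

Setting p = q = 2 recovers the Frobenius norm, which gives a quick sanity check: `lpq_norm(W, 2, 2)` should agree with `np.linalg.norm(W)`.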

## 1.2 From Fixes To Flows

Norm-minimizing rescaling transformations provide one way to balance the incoming and outgoing weights at each hidden unit in the network. We also analyze the optimization landscape in networks with rescaling symmetries and consider how the balancedness of weights can be preserved during learning—that is, in the course of adapting the network's weights and biases. This analysis yields an interesting counterpart to Noether's theorem for physical systems (Noether, 1918). When a physical system is described by a Lagrangian, its dynamics are specified by a least action principle, and Noether's theorem gives a recipe to deduce the conserved quantities that are generated by its symmetries.

The theorem has also been applied to deduce conserved quantities in deep learning (Kunin et al., 2021; Tanaka
& Kunin, 2021; Gluch & Urbanke, 2021). The networks we study in this paper have a symmetry group of rescaling transformations, but here we have sought to solve a sort of *inverse* problem. Specifically, in our case, the conserved quantities are specified by the choice of regularizer, and instead we have needed a recipe to deduce a compatible dynamics for learning—that is, one in which the desired quantities are conserved.

It is easy to see how this inverse problem arises. Let E(Θ) measure the training error of a network in terms of its weights and biases, denoted collectively by Θ = (W, b). Also, let R(W) denote a regularizer that penalizes large weights in some way. When this regularizer is minimized with respect to rescaling transformations, it induces a particular balance of the incoming and outgoing weights at each hidden unit of the network; see Fig. 1. This minimality is expressed by the zeros of derivatives with respect to rescaling factors at each hidden unit, and these zeros are the quantities that we seek to conserve for a weight-balancing flow. Thus we start from a regularizer R(W) and seek a dynamics that not only *conserves* these zeros, but also *descends* in a loss function of the form L(Θ) = E(Θ) + λR(W) for any λ≥0.

The second main contribution of this paper is to give a recipe for such dynamics—namely, to derive weight-balancing flows such that a regularizer R(W) *remains minimal with respect to rescaling transformations*

$$\frac{d}{dt}\left(W_{ij}\frac{\partial\mathcal{R}}{\partial W_{ij}}\right)=-W_{ij}\frac{\partial\mathcal{L}}{\partial W_{ij}}.\tag{2}$$
Additionally, in Theorem 5, we prove that these flows descend under fairly general conditions in the loss function L(Θ). The dynamics in eq. (2) reduces to an ordinary gradient flow, namely $\dot{W}_{ij} = -\frac{1}{2}\,\partial\mathcal{L}/\partial W_{ij}$, when the weights are regularized by their ℓ2-norm (i.e., taking $\mathcal{R}(\mathbf{W}) = \frac{1}{2}\sum_{ij} W_{ij}^2$), but it suggests a much richer set of flows for other regularizers, including but not limited to those based on the ℓp,q-norm in eq. (1).

The paper is organized as follows. Section 2 presents the multiplicative updates for minimizing the ℓp,q-norm in eq. (1) with respect to rescaling transformations. Section 3 derives the weight-balancing flows in eq. (2).

Finally, section 4 relates these ideas to previous work, and section 5 discusses directions for future research.

## 2 Weight Balancing

In this section we show how to balance the weights in feedforward networks with homogeneous activation functions without changing the end-to-end function that these networks compute. Section 2.1 describes the symmetry group of rescaling transformations in these networks, and section 2.2 presents multiplicative updates to optimize over the elements of this group. Finally, section 2.3 demonstrates empirically that these updates converge quickly in practice. We refer the reader to appendix A for a formal proof of convergence.

## 2.1 Preliminaries

Our interest lies in feedforward networks that parameterize vector-valued functions $f:\Re^{d}\to\Re^{r}$. The following notation will be useful. We denote the indices of a network's input, hidden, and output units, respectively, by I, H, and O, and we order the units so that $\mathcal{I}=\{1,\ldots,d\}$, $\mathcal{H}=\{d{+}1,\ldots,n{-}r\}$, and $\mathcal{O}=\{n{-}r{+}1,\ldots,n\}$, where n is the total number of units. Let $\mathbf{x}\in\Re^{d}$ denote an input to the network and $f(\mathbf{x})\in\Re^{r}$ its corresponding output. The mapping x → f(x) is determined by the network's weight matrix $\mathbf{W}\in\Re^{n\times n}$, biases $\mathbf{b}\in\Re^{n}$, and activation function $g:\Re\to\Re$ at each non-input unit. The mapping x → f(x) can be computed by the feedforward procedure that sequentially activates all of the units in the network—that is, setting $h_i = x_i$ for the input units, propagating $h_i = g(\sum_j W_{ij}h_j + b_i)$ to the remaining units, and setting $f_i(\mathbf{x}) = h_{n-r+i}$ from the $i^{\rm th}$ output unit. Since the network is feedforward, its weight matrix is strictly lower triangular, and many of its lower triangular entries are also equal to zero (e.g., between input units). We also assume that the output units are unconnected (with $W_{ij}=0$ if $j\in\mathcal{O}$ and $i>j$) and that the same activation function is used at every hidden unit.

A rescaling symmetry arises at each hidden unit when its activation function is positive homogeneous of degree one (Neyshabur et al., 2015a; Dinh et al., 2017)—that is, when it satisfies g(az) = ag(z) for all a > 0; see Fig. 1. In this paper we focus on networks of rectified linear hidden units (ReLUs), with the activation function g(z) = max(0, z) at all i ∈ H, but this property is also satisfied by linear units (Saxe et al., 2013), leaky ReLUs (He et al., 2015), and maxout units (Goodfellow et al., 2013). Our results in this paper apply generally to feedforward networks (i.e., those represented by directed graphs) with homogeneous activation functions. For obvious reasons, the case of layered architectures is of special interest, and in such architectures, the weight matrix will have many zeros indicating the absence of weights between non-adjacent layers. However, we will not assume this structure in what follows. Our only assumption is that in each network there are no *vacuous* hidden units—or equivalently, that every hidden unit of the network plays some role in the computation of the mapping x → f(x). For example, a minimal requirement is that for each i ∈ H, there exists some j < i such that $W_{ij}\neq 0$ and also some j > i such that $W_{ji}\neq 0$.
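
To fix ideas, here is a minimal numpy sketch of this feedforward procedure. The representation (one strictly lower-triangular matrix over all n units) follows the text; the function name is ours, and for simplicity we take the output units to be linear, which does not affect the rescaling arguments that follow.

```python
import numpy as np

def forward(W, b, x, n_in, n_out):
    """Feedforward pass of the setup above: h_i = x_i at the input
    units, h_i = g(sum_j W_ij h_j + b_i) at hidden units with
    g = ReLU, and linear activations at the n_out output units."""
    n = W.shape[0]
    h = np.zeros(n)
    h[:n_in] = x
    for i in range(n_in, n):
        z = W[i, :i] @ h[:i] + b[i]
        h[i] = z if i >= n - n_out else max(0.0, z)
    return h[n - n_out:]
```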

Next we consider the symmetry group of rescaling transformations in these networks. Each such transformation is specified by a set of rescaling factors, one for each hidden unit. We use

$$\mathcal{A}=\{\mathbf{a}\in\Re^{n}\,|\,a_{i}>0\ \text{if}\ i\in\mathcal{H},\ a_{i}=1\ \text{otherwise}\}\tag{3}$$

to denote the set of these transformations. Then under a particular rescaling, represented by some a ∈ A, the network's weights and biases are transformed multiplicatively as

$$W_{ij}\leftarrow W_{ij}\,(a_{i}/a_{j}),\tag{4}$$
$$b_{i}\leftarrow b_{i}\,a_{i}.\tag{5}$$

It may seem redundant in eq. (3) to introduce rescaling factors at non-hidden units only to constrain them to be equal to one. With this notation, however, we can express the transformations in eqs. (4–5) without distinguishing between the network's different types of units. As shorthand, we write (W′, b′) ∼ (W, b) and W′ ∼ W to denote when these parameters are equivalent via eqs. (4–5) up to some rescaling.
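
As a sanity check, the invariance is easy to verify numerically. The sketch below (reusing the `forward` routine from the previous snippet) applies a random rescaling at the hidden units, per eqs. (4–5), and confirms that the network's outputs are unchanged.

```python
import numpy as np

def rescale(W, b, a):
    """Rescaling transformation of eqs. (4)-(5):
    W_ij <- W_ij (a_i / a_j) and b_i <- b_i a_i."""
    return W * np.outer(a, 1.0 / a), b * a

rng = np.random.default_rng(0)
n, n_in, n_out = 6, 2, 1                       # 2 inputs, 3 hidden, 1 output
W = np.tril(rng.standard_normal((n, n)), k=-1)
W[:n_in] = 0.0                                 # no weights into input units
b = rng.standard_normal(n) * (np.arange(n) >= n_in)
a = np.ones(n)
a[n_in:n - n_out] = rng.uniform(0.5, 2.0, size=n - n_in - n_out)
W2, b2 = rescale(W, b, a)
x = rng.standard_normal(n_in)
assert np.allclose(forward(W, b, x, n_in, n_out),
                   forward(W2, b2, x, n_in, n_out))
```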

## 2.2 Multiplicative Updates

The first goal of this paper is to solve the following problem. Given a feedforward network with weights W0 and biases b0, we seek the functionally equivalent network, with (W, b) ∼ (W0, b0), such that W has the smallest possible norm in eq. (1). We provide an iterative solution to this problem in Algorithm 1, which computes W and b via a *sequence* of rescaling transformations. Each rescaling transformation is in turn implemented by a multiplicative update of the form in eqs. (4–5). For each update, the key step is to compute the rescaling factor ai at each hidden unit i ∈ H from a ratio comparing the magnitudes of its incoming and outgoing weights. In this section, we describe the basic ingredients of this algorithm.

To begin, we define two intermediate quantities that play important roles in both the multiplicative updates of this section and the weight-balancing flows of section 3.3. The first of these is the *per-unit regularizer*

$$\rho_{i}({\bf W})=\left(\sum_{j}|W_{ij}|^{p}\right)^{\frac{q}{p}},\tag{6}$$

which we use to denote the local contribution to the norm in eq. (1) from the incoming weights at the $i^{\rm th}$ unit in the network. The second of these quantities is the *stochastic matrix*

$$\pi_{ij}(\mathbf{W})=\frac{|W_{ij}|^{p}}{\sum_{k}|W_{ik}|^{p}}\tag{7}$$

whose rows sum to one. Note that these quantities depend on the values of p and q in the ℓp,q-norm, but to avoid an excess of notation, we will not explicitly indicate this dependence.
whose columns sum to one. Note that these quantities depend on the values of p and q in the ℓp,q-norm, but to avoid an excess of notation, we will not explicitly indicate this dependence.

We now explain the roles played by these quantities in Algorithm 1. In this algorithm, the key step per update is to compute the rescaling factor ai at each hidden unit of the network. In terms of the above quantities, this rescaling factor takes the especially simple form

$$a_{i}=\left[\frac{\sum_{j}\rho_{j}(\mathbf{W})\,\pi_{ji}(\mathbf{W})}{\rho_{i}(\mathbf{W})}\right]^{\frac{1}{4\max(p,q)}}.\tag{8}$$
Intuitively, eq. (8) shows that the multiplicative updates have a fixed point (where ai = 1 for all i) when the per-unit regularizer in eq. (6) is a stationary distribution of the stochastic matrix in eq. (7). Finally, we note that many of the computations in eqs. (6–8) can be easily parallelized, and also that the updates in eqs. (4–5) can be applied in parallel after computing the rescaling factors in eq. (8). We derive the form of these rescaling factors (and particularly, the curious value of their outer exponent) in appendix A. Our first main result is contained in the following theorem:
Theorem 1 (Convergence of Multiplicative Updates). *For all $\mathbf{W}_0\in\Re^{n\times n}$ and $\mathbf{b}_0\in\Re^{n}$, the multiplicative updates in Algorithm 1 converge to a global minimizer of the entry-wise ℓp,q-norm*

$$\operatorname*{argmin}_{\mathbf{W}}\|\mathbf{W}\|_{p,q}\quad\text{such that}\quad(\mathbf{W},\mathbf{b})\sim(\mathbf{W}_{0},\mathbf{b}_{0})\tag{9}$$
without changing the function computed by the network. Also, the intermediate solutions from these updates yield monotonically decreasing values for these norms.

A proof of this theorem can also be found in appendix A. Note that Theorem 1 provides stronger guarantees than exist for the full problem of learning via back-propagated gradients. In particular, these weight-balancing updates converge monotonically to a global minimizer, and they do not require the tuning of hyperparameters such as learning rates. One might hope, therefore, that such fixes could piggyback on top of existing learning algorithms without much extra cost, and this has indeed served as motivation for many previous studies of symmetry and symmetry-breaking in deep learning (Neyshabur et al., 2015a; Badrinarayanan et al., 2015; Meng et al., 2019; Stock et al., 2019; Armenta et al., 2021; Zhao et al., 2022).

## 2.3 Demonstration Of Convergence

Theorem 1 states that the multiplicative updates in Algorithm 1 converge *asymptotically* to a global minimum of the ℓp,q-norm. But how fast do they converge in practice? Fig. 2 plots the convergence of the multiplicative updates in Algorithm 1 for different values of p and q and for three randomly initialized networks with differing numbers of hidden layers but the same overall numbers of input (200), hidden (3750), and output (10) units. From shallowest to deepest, the networks had 200-2500-1250-10 units, 200-2000-1000-500-250-10 units, and 200-1000-750-750-500-500-250-10 units. The networks were initialized with zero-valued biases and zero-mean Gaussian random weights whose variances were inversely proportional to the fan-in at each unit (He et al.,
2015). The panels in the figure plot the ratio ∥W∥p,q/∥W0∥p,q as a function of the number of multiplicative updates, where ∥W0∥p,q and ∥W∥p,q are respectively the ℓp,q-norms, defined in eq. (1), of the initial and updated weight matrices. Results are shown for several values of p and q. As expected, the updates take longer to converge in deeper networks (where imbalances must propagate through more layers), but in general a high degree of convergence is obtained after a modest number of iterations. The panels show that conventionally initialized networks are (i) far from minimal as measured by the ℓp,q-norm of their weights and (ii) easily rebalanced by a sequence of rescaling transformations. Finally, we note that the results in Fig. 2 did not depend sensitively on the value of the random seed.

Algorithm 1 Given a network with weights W0 and biases b0, this procedure returns a functionally equivalent network whose rescaled weights W and biases b minimize the norm ∥W∥p,q in eq. (1) up to some tolerance δ > 0. The set H contains the indices of the network's hidden units. The rescaled weights and biases are computed via a sequence of multiplicative updates of the form in eqs. (4–5).

    procedure (W, b) = MinNormFix(W0, b0, H, p, q, δ)
        (W, b, a) ← (W0, b0, 1)                                   ▷ Initialize.
        repeat
            for all (i, j) do πij(W) ← |Wij|^p / Σk |Wik|^p       ▷ Compute stochastic matrix, eq. (7).
            for all i do ρi(W) ← (Σj |Wij|^p)^(q/p)               ▷ Compute per-unit regularizers, eq. (6).
            for all i ∈ H do ai ← [Σj ρj(W) πji(W) / ρi(W)]^(1/(4 max(p,q)))   ▷ Compute rescaling factors, eq. (8).
            for all (i, j) do Wij ← Wij (ai/aj)                   ▷ Rescale weights, eq. (4).
            for all i do bi ← bi ai                               ▷ Rescale biases, eq. (5).
            for all i ∈ H do δi ← |ai − 1|
        until maxi(δi) < δ                                        ▷ Iterate until convergence.
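
In numpy, a direct transcription of Algorithm 1 might read as follows. This is a sketch, not the paper's reference code: the function name is ours, the small floor inside the stochastic-matrix computation guards against rows with no incoming weights, and we assume (as in section 2.1) that no hidden unit is vacuous.

```python
import numpy as np

def min_norm_fix(W0, b0, hidden, p, q, delta=1e-8, max_iters=100_000):
    """Algorithm 1: rebalance (W0, b0) by the rescaling transformations
    of eqs. (4)-(5) so as to minimize the l_{p,q}-norm of eq. (1)."""
    W, b = W0.astype(float).copy(), b0.astype(float).copy()
    hidden = np.asarray(hidden)
    for _ in range(max_iters):
        A = np.abs(W) ** p
        row = A.sum(axis=1)                            # sum_k |W_ik|^p
        pi = A / np.maximum(row, 1e-300)[:, None]      # stochastic matrix, eq. (7)
        rho = row ** (q / p)                           # per-unit regularizers, eq. (6)
        a = np.ones(W.shape[0])
        a[hidden] = ((rho @ pi)[hidden] / rho[hidden]) ** (1.0 / (4 * max(p, q)))  # eq. (8)
        W *= np.outer(a, 1.0 / a)                      # rescale weights, eq. (4)
        b *= a                                         # rescale biases, eq. (5)
        if np.abs(a[hidden] - 1).max() < delta:        # until max_i |a_i - 1| < delta
            return W, b
    return W, b
```

On a randomly initialized network, tracking the ratio ∥W∥p,q/∥W0∥p,q across iterations of this routine reproduces the qualitative behavior of Fig. 2: the ratio decreases monotonically and levels off after a modest number of updates.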

## 3 Learning And Regularization

The rescaling symmetry in Fig. 1 also has important consequences for learning in ReLU networks (Kunin et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021; Neyshabur et al., 2015a; Meng et al., 2019; Badrinarayanan et al., 2015; Armenta et al., 2021; Zhao et al., 2022). The goal of learning is to discover the weights and biases that minimize the network's loss function on a data set of training examples. In this section we examine the conditions for learning under which some norm or regularizer of the weight matrix
(e.g., the ℓp,q-norm) remains minimal with respect to rescaling transformations. Equivalently, these are the conditions for learning under which the incoming and outgoing weights at each hidden unit remain *balanced* with respect to this norm. Our interest lies mainly in the following question: are there conditions such that the weight-balancing procedure of the last section can be *one and done* at the outset of learning? To answer this question, we must understand whether learning and weight-balancing are complementary procedures or whether they are operating in some way at cross-purposes (e.g., the former undoing the latter).

Our ultimate goal in this section is to derive the *weight-balancing flows* in eq. (2). Section 3.1 reviews the basic properties of gradient flow, while section 3.2 derives the more general flows for weight-balancing. Our main results, stated in Theorems 4 and 5, are that these flows descend in a regularized loss function while preserving the minimality of the regularizer with respect to rescaling transformations. Finally, section 3.3 analyzes the specific forms and properties of these flows for regularizers based on the ℓp,q-norm in eq. (1).

## 3.1 Gradient Flow

There are many forms of learning in deep networks. Perhaps the simplest to analyze is gradient flow (Elkabetz
& Cohen, 2021), in which the network's parameters are adapted in continuous time along the negative gradient of the loss function. Gradient flows can be derived for any differentiable loss function, and they are widely used to study the behavior of gradient descent in the limit of small learning rates.

There are two properties of gradient flow in neural networks that we will need to generalize for the minimalitypreserving flows in eq. (2). The first is the property of *descent*—namely, that the loss function of a network decreases over time. This is simple to demonstrate for gradient flow. Let Θ = (W, b) denote the weights and biases of the network, and suppose that the network is trained to minimize a loss function L(Θ). Then

![6_image_0.png](6_image_0.png)

Figure 2: Convergence of weight-balancing multiplicative updates in networks with differing numbers of hidden layers but the same overall numbers of units. The updates minimize the ℓp,q-norm of the weight matrix, and each panel shows the results for a different set of values for p and q. See text for details.
under gradient flow, at non-stationary points of the loss function, we see that

$$\frac{d\mathcal{L}}{dt}\;=\;\frac{\partial\mathcal{L}}{\partial\Theta}\cdot\frac{d\Theta}{dt}\;=\;-\frac{\partial\mathcal{L}}{\partial\Theta}\cdot\frac{\partial\mathcal{L}}{\partial\Theta}\;<\;0.\tag{10}$$

From eq. (10) it is also clear that the fixed points of gradient flow correspond to stationary points of the loss function. In the next section, we will see how these properties are preserved in more general flows.

Another important property of gradient flow emerges in feedforward networks with homogeneous activation functions. This property is a consequence of the rescaling symmetry in Fig. 1: when such a network is trained by gradient flow, its rescaling symmetries give rise to *conservation laws*, one at each hidden unit of the network (Du et al., 2018; Kunin et al., 2021; Bronstein et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021). These conservation laws do not depend on the details of the network's loss function, requiring only that it is also invariant1 under the symmetry group of rescaling transformations. In particular, for any such loss function, it is known that the quantity

$$\Delta_{i}=b_{i}^{2}+\sum_{j}W_{ij}^{2}-\sum_{j}W_{ji}^{2}\tag{11}$$

is conserved under gradient flow at each hidden unit i ∈ H. The connection between symmetries and conservation laws is well known in physical systems from Noether's theorem (Noether, 1918), but it is worth noting that the dynamics of gradient flow were not historically derived from a Lagrangian.

1Typical regularizers break this invariance, but it is possible to construct those that do not (Neyshabur et al., 2015a).
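
The conserved quantity in eq. (11) is also cheap to monitor in code. A minimal sketch (the function name is ours): logging this vector during training gives a direct check of how closely a discrete-time optimizer tracks the gradient flow, since under the flow each Δi is exactly constant.

```python
import numpy as np

def conserved_deltas(W, b, hidden):
    """Delta_i of eq. (11) at each hidden unit i:
    b_i^2 + sum_j W_ij^2 - sum_j W_ji^2 (in-weights minus out-weights)."""
    sq = W ** 2
    return b[hidden] ** 2 + sq[hidden, :].sum(axis=1) - sq[:, hidden].sum(axis=0)
```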

Our next step is to derive the conserved quantities in eq. (11) as a special case of a more general framework. For now, we wish simply to emphasize the logic that has been used to derive many conservation laws in neural networks: to start, one assumes that the network is trained by gradient flow in its weights and biases, and then, inspired by Noether's theorem in physical systems, one derives the conserved quantities that are implied by the network's symmetries. In the next section, we shall flip this script on its head.

## 3.2 Weight-Balancing Flows

In this section we investigate a larger family of flows for feedforward networks with homogeneous activation functions. We assume that the loss functions for these networks take a standard form with two competing terms—one that measures the network's predictive accuracy on the training data, and another that serves as a regularizer. For example, let $\{(\mathbf{x}_t,\mathbf{y}_t)\}_{t=1}^{T}$ denote a training set of T labeled examples, and let C(y, f(x)) denote the cost when a network's actual output f(x) is evaluated against some reference output y. We use

$$\mathcal{E}(\mathbf{\Theta})=\frac{1}{T}\sum_{t}\mathcal{C}(\mathbf{y}_{t},\mathbf{f}(\mathbf{x}_{t}))\tag{12}$$

to denote the error obtained by averaging these costs over the training set. Likewise, we assume that the network's overall loss function is given by

$$\mathcal{L}(\mathbf{\Theta})=\mathcal{E}(\mathbf{\Theta})+\lambda\mathcal{R}(\mathbf{W}),\tag{13}$$
where R(W) is a regularizer that penalizes large weights2 and the constant λ ≥ 0 determines the amount of regularization. Note that while the error in eq. (12) may depend in a complicated way on the network's weights and biases, it is necessarily invariant to the rescaling transformations of these parameters in eqs. (4–5). On the other hand, typical regularizers are not invariant to rescaling transformations of the weights, and these are the sorts of regularizers that we consider here. When a regularizer penalizes large weights, it is not only useful to prevent overfitting; in a feedforward network with rescaling symmetries, it also induces a natural way to measure if (and how well) the weights are balanced. Consider the effect of a rescaling transformation on the weights at any hidden unit i ∈ H.

A rescaling transformation with ai <1 redistributes the magnitudes of these weights in a forward direction
(from incoming weights to outgoing weights), while a rescaling transformation with ai > 1 will redistribute the magnitudes of weights in a backward direction (from outgoing to incoming weights). We formalize these ideas with the following definition so that we can speak interchangeably of *balancedness* and *stationarity*:
Definition 2 (Weight balancedness). *In a feedforward network with homogeneous activation functions, we* say that the weights are balanced with respect to the regularizer R(W) if the regularizer is stationary with respect to infinitesimal instances of the rescaling transformations of the form in eq. (4).

In section 2, we examined the minimality of the ℓp,q-norm in eq. (1) with respect to rescaling transformations. The main ideas of this section, however, do not depend on that specific choice of norm, and therefore in what follows we state them in the more general terms of the above definition. To begin, we describe more precisely what it means for the regularizer R(W), or indeed any differentiable function of the weight matrix, to be stationary at some point with respect to infinitesimal rescaling transformations.

Lemma 3 (Stationarity). *Let K(W) be a differentiable function of the weight matrix in a feedforward network with homogeneous activation functions. Then K is stationary with respect to rescaling transformations at W if and only if*

$$0\;=\;\sum_{j}\left(W_{ij}\frac{\partial\mathcal{K}}{\partial W_{ij}}-W_{ji}\frac{\partial\mathcal{K}}{\partial W_{ji}}\right)\quad\text{for all}\quad i\in\mathcal{H}.\tag{14}$$
Proof. Consider a rescaling transformation of the form ai = 1 + δai for all i ∈ H, where δai denotes an infinitesimal variation. Under such a transformation, the weights of the network are rescaled via eq. (4) as Wij ← Wij(1 + δai)/(1 + δaj); thus to lowest order, their change is given by δWij = Wij(δai − δaj). Now we can work out (again to lowest order) the change δK = K(W+δW) − K(W) that results from this variation:

$$\delta\mathcal{K}=\sum_{ij}\frac{\partial\mathcal{K}}{\partial W_{ij}}\delta W_{ij}=\sum_{ij}\frac{\partial\mathcal{K}}{\partial W_{ij}}W_{ij}(\delta a_{i}-\delta a_{j})=\sum_{i}\delta a_{i}\sum_{j}\left(\frac{\partial\mathcal{K}}{\partial W_{ij}}W_{ij}-\frac{\partial\mathcal{K}}{\partial W_{ji}}W_{ji}\right).\tag{15}$$

The lemma follows easily from this result. If eq. (14) is satisfied, then it is clear from eq. (15) that δK = 0, proving one direction of implication; likewise, if δK = 0 for arbitrary infinitesimal variations δai, then each coefficient of δai in eq. (15) must vanish, proving the other direction.

2The regularizer penalizes large weights, but not large biases, as only the former cause the outputs of the network to depend sensitively on the inputs to the network.

Regularizers were originally introduced to avoid overfitting (Goodfellow et al., 2016), but it is now widely appreciated that they also serve other purposes. It has been observed, for example, that regularizers can help to learn better models on the *training* data (Krizhevsky et al., 2012), suggesting that smaller weights in deep networks lead to better behaved gradients. Likewise, it has been observed that highly unbalanced weights lead to much slower training in ReLU networks; the reason is that partial derivatives such as ∂L/∂Wij scale inversely as the weights under rescaling transformations (Neyshabur et al., 2015a; Dinh et al., 2017). More generally, it has been argued (van Laarhoven, 2017) that "by decreasing the scale of the weights, weight decay increases the effective learning rate" and that "if no regularization is used the weights can grow unbounded, and the effective learning rate goes to 0." All of the above observations suggest a role for learning algorithms that actively balance the weights of feedforward networks with homogeneous activation functions. Such algorithms can in turn be derived from the weight-balancing flows of the following theorem.

Theorem 4 (Weight-balancing flows). *Let L(Θ) denote the regularized loss in eq. (13), and consider a network whose weights are initially balanced with respect to the regularizer R(W) and whose biases at hidden units are initialized at zero and never adapted. Then the weights will remain balanced with respect to R(W) if they evolve as*

$$\frac{d}{dt}\left(W_{ij}\frac{\partial\mathcal{R}}{\partial W_{ij}}\right)=-W_{ij}\frac{\partial\mathcal{L}}{\partial W_{ij}}.\tag{16}$$
The theorem shows how to construct a canonical, weight-balancing flow for any differentiable regularizer.

Before proving the theorem, we emphasize the precondition that the biases of all hidden (but not output) units are frozen at zero: i.e., $\dot{b}_i = b_i = 0$ for all i ∈ H. As we shall see, this requirement3 arises because we defined balancedness with respect to a regularizer R(W) that penalizes large weights *but not large biases*.

Proof. By Lemma 3, the weights are balanced with respect to the regularizer R(W) if they satisfy the stationarity conditions (substituting R for K) in eq. (14). We prove the theorem by showing that the zeros in these stationarity conditions are conserved quantities analogous to those in eq. (11). As shorthand, let

$$Q_{i}=\sum_{j}\left(W_{ij}\frac{\partial\mathcal{R}}{\partial W_{ij}}-W_{ji}\frac{\partial\mathcal{R}}{\partial W_{ji}}\right)\tag{17}$$

denote the imbalance of incoming and outgoing weights at a particular hidden unit. To prove that Qi is conserved, we must show that $\dot{Q}_i = 0$ when the weights of the network evolve under the flow in eq. (16) from an initially balanced state. It follows from eq. (16) that

$$\frac{dQ_{i}}{dt}=-\sum_{j}\left(W_{ij}\frac{\partial\mathcal{L}}{\partial W_{ij}}-W_{ji}\frac{\partial\mathcal{L}}{\partial W_{ji}}\right).\tag{18}$$

From eq. (13), the network's loss function L(Θ) consists of two terms, an empirical error E(Θ) and a regularizer R(W). Substituting these terms into eq. (18), we obtain

$$\frac{dQ_{i}}{dt}=-\sum_{j}\left(W_{ij}\frac{\partial\mathcal{E}}{\partial W_{ij}}-W_{ji}\frac{\partial\mathcal{E}}{\partial W_{ji}}\right)\;-\;\lambda\sum_{j}\left(W_{ij}\frac{\partial\mathcal{R}}{\partial W_{ij}}-W_{ji}\frac{\partial\mathcal{R}}{\partial W_{ji}}\right).\tag{19}$$

Consider the left sum in eq. (19). If the hidden units do not have biases, then E(Θ) is invariant under (and hence stationary with respect to) rescaling transformations of the weights *alone*, and the first term vanishes by Lemma 3 (substituting E for K). Now consider the right sum in eq. (19); we see from eq. (17) that this term is proportional to Qi itself, so that

$$\frac{dQ_{i}}{dt}=-\lambda Q_{i}.\tag{20}$$

Finally, we observe that Qi = 0 at time t = 0 if the weights are initially balanced. In this case, the only solution to eq. (20) is the trivial one with $Q_i = \dot{Q}_i = 0$ for all time, thus proving the theorem.

3There are other reasons besides weight-balancing to learn ReLU networks with zero biases. It has been noted that zero biases are necessary to learn *intensity-equivariant* representations of sensory inputs (Hinton et al., 2011; Mohan et al., 2020); these are representations in which the hidden-layer activations scale in proportion to the intensity of visual or auditory signals. Such networks also have certain margin-maximizing properties when they are trained by gradient flow (Lyu & Li, 2020).

The calculation in this proof yields another insight. From eq. (20), we see that Qi decays exponentially to zero *if the weights are not initialized in a balanced state*. The decay is caused by the regularizer, an effect that Tanaka & Kunin (2021) describe as the Noether learning dynamics. However, the decay may be slow for the typically small values of the hyperparameter λ > 0 used in practice. Taken together, the fixes and flows in this paper can be viewed as a way of balancing the weights at the outset and throughout the entire course of learning, rather than relying on the asymptotic effects of regularization to do so in the limit.
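
The same diagnostic extends to any differentiable regularizer: the Noether charge Qi of eq. (17) can be computed from the weights and the gradient ∂R/∂W, and eq. (20) predicts that it decays as e^{−λt} from an unbalanced start. A sketch (with `dR` standing for the user-supplied gradient of the regularizer; the function name is ours):

```python
import numpy as np

def balance_residual(W, dR, hidden):
    """Q_i of eq. (17), the per-hidden-unit imbalance of incoming and
    outgoing weights as measured by the regularizer R. The weights are
    balanced, in the sense of Lemma 3, when Q_i = 0 for all i in H."""
    M = W * dR                                 # elementwise W_ij * dR/dW_ij
    return M[hidden, :].sum(axis=1) - M[:, hidden].sum(axis=0)
```

For ℓ2 regularization (where dR = W, up to a constant), Qi reduces to the bias-free part of the conserved quantity Δi in eq. (11).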

A weight-balancing flow is only useful insofar as it also reduces the network's loss function. The weight-balancing flows in eq. (16) do not strictly follow the gradient of the loss function in eq. (13). Nevertheless, our final theorem shows that under relatively mild conditions these flows have the same property of descent.

Theorem 5 (Balanced descent). *Let ωij = log |Wij|. Then the weight-balancing flow in eq. (16) descends everywhere that the loss function is not stationary with respect to ω and the regularizer has a positive definite Hessian with respect to ω:*

$$\frac{d\mathcal{L}}{dt}<0\quad\text{whenever}\quad\left|\frac{\partial\mathcal{L}}{\partial\boldsymbol{\omega}}\right|>0\quad\text{and}\quad\frac{\partial^{2}\mathcal{R}}{\partial\boldsymbol{\omega}\,\partial\boldsymbol{\omega}^{\top}}\succ0.\tag{21}$$
Before proving the theorem, we emphasize that eq. (21) refers to the Hessian of the regularizer R(W), *not* the Hessian of the network's overall loss function L(Θ). As we shall see, the former is very well behaved for typical regularizers, while the latter (about which one can say very little) depends in a complicated way on the network's fit to the training data.

Proof. The property of descent emerges for these flows in a similar way as for other generalized flows (Wibisono et al., 2016; Tanaka & Kunin, 2021). We begin by observing that the flow in eq. (16) takes a simpler form in terms of the variable ω. Since ωij = log |Wij|, we have equivalently that $W_{ij}^{2} = e^{2\omega_{ij}}$, and differentiating we find $\partial W_{ij}/\partial\omega_{ij} = W_{ij}$. We can use the chain rule to differentiate in eq. (16) with respect to ω instead of W. In this way we find (in vector notation) that

$$\frac{d}{dt}\left(\frac{\partial\mathcal{R}}{\partial\boldsymbol{\omega}}\right)=-\frac{\partial\mathcal{L}}{\partial\boldsymbol{\omega}}.\tag{22}$$

This equation specifies *implicitly* how the weights evolve in time through the form of the regularizer, R(W).

To derive an explicit form for this evolution, we differentiate through the left side of the equation:

$$\frac{d}{dt}\left(\frac{\partial\mathcal{R}}{\partial\boldsymbol{\omega}}\right)\;=\;\frac{\partial^{2}\mathcal{R}}{\partial\boldsymbol{\omega}\,\partial\boldsymbol{\omega}^{\top}}\cdot\frac{d\boldsymbol{\omega}}{dt}.\tag{23}$$

Note how the Hessian of the regularizer appears in this equation; as shorthand in what follows, we denote this Hessian by $\mathbf{H}(\boldsymbol{\omega}) = \frac{\partial^{2}\mathcal{R}}{\partial\boldsymbol{\omega}\,\partial\boldsymbol{\omega}^{\top}}$. Now suppose that H(ω) is positive definite (hence also invertible) at the current values of the weights. Then combining eqs. (22–23), we see that

$$\frac{d\boldsymbol{\omega}}{dt}=-\mathbf{H}^{-1}(\boldsymbol{\omega})\cdot\frac{\partial\mathcal{L}}{\partial\boldsymbol{\omega}}.\tag{24}$$

In sum, we have shown that if H(ω) is positive definite, then the weight-balancing flow in eq. (16) has the same dynamics as eq. (24). Thus we can also interpret these dynamics as a generalized gradient flow in which the weights are reparameterized in terms of ωij = log |Wij| and the gradient is preconditioned by the inverse Hessian H⁻¹(ω). Now suppose further that the gradient ∂L/∂ω does not vanish. Then we have

$$\frac{d{\cal L}}{dt}\ =\ \frac{\partial{\cal L}}{\partial\mathbf{\omega}}\cdot\frac{d\mathbf{\omega}}{dt}\ =\ -\frac{\partial{\cal L}}{\partial\mathbf{\omega}}\cdot{\bf H}^{-1}(\mathbf{\omega})\cdot\frac{\partial{\cal L}}{\partial\mathbf{\omega}}\ <\ 0.\tag{25}$$

This suffices to prove the theorem, but further intuition may be gained by comparing the argument in eq. (25)
to the analogous one for gradient flow in eq. (10). The main differences are the appearance of the inverse Hessian preconditioner (via the regularizer) and the change of variables (from Wij to ωij ).

## 3.3 Flows For ℓp,q-Norm Regularization

In the last section we derived weight-balancing flows for any regularizer R(W). The derivation was general, assuming only that the regularizer was differentiable. In this section we investigate the flow in eq. (16) for regularizers based on the ℓp,q-norm in eq. (1). Specifically, we consider regularizers of the form

$$\mathcal{R}(\mathbf{W})=\frac{1}{q}\sum_{i}\left(\sum_{j}|W_{ij}|^{p}\right)^{\frac{q}{p}}.\tag{26}$$

The flows induced by these regularizers have several interesting properties, which we briefly summarize. Proofs and further results can be found in appendix B.

To build intuition, we begin by noting some special cases of interest. The weight-balancing flow in eq. (16) reduces to familiar forms for the simplest choices of p and q in eq. (26). For example, when p = q = 2, this flow simplifies (for *nonzero* weights) to $\dot{W}_{ij} = -\frac{1}{2}\frac{\partial\mathcal{L}}{\partial W_{ij}}$, which can be approximated by gradient descent

$$W_{ij}\leftarrow W_{ij}-\eta\,\frac{\partial\mathcal{L}}{\partial W_{ij}}\tag{27}$$

with a small learning rate η > 0. On the other hand, when p = q = 1, the flow in eq. (16) simplifies to $\dot{W}_{ij} = -|W_{ij}|\,\frac{\partial\mathcal{L}}{\partial W_{ij}}$, which can be approximated by the *exponentiated* gradient descent

$$W_{ij}\leftarrow W_{ij}\exp\left(-\eta\cdot\mathrm{sign}(W_{ij})\cdot\frac{\partial\mathcal{L}}{\partial W_{ij}}\right).\tag{28}$$

Similar updates based on exponentiated gradients have been studied in many different contexts (Kivinen &
Warmuth, 1997; Arora et al., 2012; Bernstein et al., 2020). With these special cases in mind, we now state the main result of this section.
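
In code, the two discretizations in eqs. (27–28) differ by a single line. The sketch below assumes a gradient `grad` of the regularized loss has already been computed (e.g., by backpropagation); the function names are ours.

```python
import numpy as np

def gd_step(W, grad, lr):
    """Additive update of eq. (27), matching the p = q = 2 flow."""
    return W - lr * grad

def egd_step(W, grad, lr):
    """Exponentiated update of eq. (28), matching the p = q = 1 flow
    dW/dt = -|W| dL/dW. Note that weights never change sign under
    this update, consistent with the flow for nonzero weights."""
    return W * np.exp(-lr * np.sign(W) * grad)
```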

Theorem 6 (Almost everywhere descent). If the regularizer R(W) is based on the ℓp,q-norm as in eq. (26), then the weight-balancing flow in eq. (16) decreases the regularized loss in every open orthant of weight space.

Theorem 6 establishes the property of descent for weight-balancing flows based on ℓp,q-norm regularization. The proof of this theorem requires two steps—first, to compute the Hessian for this regularizer, and second, to show that it is positive definite in every open orthant of the weight space. These steps are presented in appendix B.

## 4 Related Work

The results in this paper build on earlier work. Most relevant to section 2 is the work on ENorm (Stock et al.,
2019)—an elegant procedure, also based on multiplicative updates, to minimize the ℓp-norm of the weights in ReLU networks. The updates for ENorm are derived from block coordinate descent of eq. (26) with respect to rescaling transformations for the special case p = q; the blocks are formed by grouping hidden units in the same layer. The updates in this paper are derived in a different way—from a so-called auxiliary function, an approach used in algorithms for nonnegative matrix factorization (Lee & Seung, 2000) and Expectation-Maximization (Dempster et al., 1977). This approach (see appendix A) has two notable features. First, it leads to updates that are applied in parallel at all of the network's hidden units, and thus unlike those derived from block coordinate descent, they are not specialized to layered networks whose structure suggests a natural partition (i.e., blocking) of the hidden units. Second, it yields closed-form updates to minimize the ℓp,q-norm despite the more complicated gradients that arise when p ≠ q. This more general case is of interest because the *max-norm* emerges from eq. (1) in the limit q → ∞ (Neyshabur et al., 2015a).

Our results in section 3 also build on previous work. Most relevant here is the work by Tanaka & Kunin (2021) that analyzes symmetry-breaking in deep learning via the continuous-time dynamics of damped Lagrangian systems. Their work gives an elegant and broadly unifying treatment for overparameterized networks with general symmetry groups. It builds in turn on the highly influential work of Wibisono et al. (2016), who introduced Bregman Lagrangians to analyze accelerated methods for optimization. Tanaka & Kunin (2021)
stated that "a future direction is to extend our analysis to the case of rescale symmetry of the homogeneous activation functions such as ReLU." In this paper we have not made use of Bregman Lagrangians; instead we have given a proof of Theorem 4 stripped down to its bare essentials. To do so, we derived the flows of eq. (16) in a bottom-up fashion, starting from the requirement that balanced weights should remain balanced, rather than a top-down fashion, starting from a time-dependent Lagrangian and deriving its Euler-Lagrange equations. In the latter approach, the Bregman kinetic energy plays the role of the regularizer, breaking the rescaling symmetry of the Lagrangian and undoing the conservation law for the Noether charge Qi in eq. (17). We hope that some readers will appreciate the different path we have taken to obtain these results.

Many other studies have also investigated the relationship between rescaling transformations and learning. In a seminal paper, Neyshabur et al. (2015a) showed that stochastic gradient descent (SGD) performs poorly in highly unbalanced networks, and in its place, they proposed PathSGD, a rescaling-invariant procedure that approximates steepest descent with respect to a special path-based regularizer. Notably, this regularizer has the distinguishing property that it computes the minimum value of a max-norm regularizer, where the minimum is performed over all networks equivalent up to rescaling (Neyshabur et al., 2015b). PathSGD was followed by other formulations of rescaling-invariant learning. For example, Badrinarayanan et al. (2015)
fixed the rescaling degrees of freedom by constraining weight vectors to have unit norm, while Meng et al. (2019) showed how to perform SGD in the vector space of paths (as opposed to weights), where the rescaling-invariant value of a path is given by the product of its weights. Also, Stock et al. (2019) trained networks by interweaving updates from SGD and ENorm with ℓ2-norm regularization; interestingly, the networks in these experiments generalized better on test data. This improvement suggests interweaving the more general updates of section 2, which minimize the ℓp,q-norm, with learning rules based on their corresponding flows in section 3. Our study lays the foundation for this further exploration.

Even more recent work has suggested that learning can be accelerated by certain rescaling transformations. For instance, Armenta et al. (2021) showed that the magnitudes of backpropagated gradients in ReLU networks are increased on average by randomly rescaling their weights—a process they call *neural teleportation*.

More generally, Zhao et al. (2022) explored how to choose symmetry group transformations that purposefully increase the norms of gradients for learning. Because these gradients are computed with respect to training examples, this approach can be viewed as a *data-driven* procedure for manipulating the optimization landscape via symmetry group transformations. Our approach differs from the above by aiming to minimize the norms of a network's weights rather than to maximize the norms of its gradients. We note that the latter are unbounded above with respect to the (non-compact) group of rescaling transformations, and therefore one must be careful to identify the regime in which they serve as a reliable proxy for rates of convergence.

## 5 Discussion

Many aspects of deep learning are not fully understood. In this paper we have shown that further understanding may be gained from the symmetries of multilayer networks and the analogies they suggest to physical systems. We have leveraged this understanding in two ways. First, we derived simple multiplicative updates that minimize the ℓp,q-norm of the weight matrix over the equivalence class of networks related by rescaling transformations. Second, we derived weight-balancing flows that preserve the minimality of any
(differentiable) regularizer over the course of learning.

There are many questions deserving of further investigation. Most obviously, weight-balancing flows can be discretized to yield learning rules other than gradient descent. One important question is how to combine such learning rules with accelerated gradient-based methods, such as those involving momentum (Polyak, 1964)
or adaptive learning schedules (Kingma & Ba, 2015; Duchi et al., 2010; Tieleman & Hinton, 2018), that are used in all large-scale experiments. Another is how weight-balancing relates to (or may be incorporated with) schemes such as batch normalization (Ioffe & Szegedy, 2015), weight normalization (Salimans & Kingma, 2016), and layer normalization (Ba et al., 2016). It will require a mix of empirical and theoretical investigation to understand the interplay of these methods (van Laarhoven, 2017). In this work, we have not relied heavily on the machinery of Bregman Lagrangians (Wibisono et al., 2016; Tanaka & Kunin, 2021), but it is clear that they provide a powerful framework for further progress.

Other potential benefits of weight-balancing are suggested in the more familiar setting of matrix factorization (Horn & Johnson, 2012). The basic problem of factorization is underdetermined: any matrix can be written in an infinite number of ways as the product of two or more other matrices. But there are certain canonical factorizations of large matrices, such as the singular value decomposition, that reveal a wealth of information (Eckart & Young, 1936). It is natural to ask whether the functions computed by multilayer networks can be represented in a similarly canonical way, and if so, whether such representations might suggest more effective strategies for pruning, compressing, or otherwise approximating their weight matrices.

Finally we note that there are many possible criteria for weight-balancing besides minimizing the ℓp,q-norm of the weight matrix. It would be interesting to study other regularizers and their corresponding flows from Theorem 4. We believe that the present work can provide a template for these further investigations—and also that such investigations will reveal a similarly rich mathematical structure.

## Acknowledgements

The author is grateful to the reviewers and action editor (N. Cohen) for many helpful suggestions. He has also benefited from discussions of these ideas with many colleagues, including A. Barnett, A. Bietti, J. Bruna, M. Eickenberg, R. Gower, B. Larsen, E. Simoncelli, S. Villar, N. Wadia, A. Wibisono, A. Wilson, and R. Yu.

## References

P. W. Anderson. More is different. *Science*, 177(4047):393–396, 1972.

M. Armenta and P.-M. Jodoin. The representation theory of neural networks. *Mathematics*, 9(24), 2021.

M. A. Armenta, T. Judge, N. Painchaud, Y. Skandarani, C. Lemaire, G. G. Sanchez, P. Spino, and P.-M. Jodoin. Neural teleportation, 2021. arXiv:2012.01118.

S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. *Theory of Computing*, 8(1):121–164, 2012.

S. Arora, N. Cohen, N. Golowich, and W. Hu. A convergence analysis of gradient descent for deep linear neural networks. In *Proceedings of the 8th International Conference on Learning Representations*, 2019.

J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization, 2016. arXiv:1607.06450.

V. Badrinarayanan, B. Mishra, and R. Cipolla. Understanding symmetries in deep networks. In *Proceedings of the 8th NeurIPS Workshop on Optimization for Machine Learning*, 2015.

J. Bernstein, J. Zhao, M. Meister, M.-Y. Liu, A. Anandkumar, and Y. Yue. Learning compositional functions via multiplicative weight updates. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems 33*, pp. 13319–13330, 2020.

S. Boyd and L. Vandenberghe. *Convex Optimization*. Cambridge University Press, 2004.

M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković. Geometric deep learning: grids, groups, graphs, geodesics, and gauges, 2021. arxiv:2104.13478.

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM
algorithm. *Journal of the Royal Statistical Society B*, 39:1–38, 1977.

L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. In D. Precup and Y. W. Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, pp. 1019–1028, 2017.

S. S. Du, W. Hu, and J. D. Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In S. Bengio, H. M Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 31*, pp. 382–393, 2018.

J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. In *Proceedings of the 23rd Conference on Learning Theory*, pp. 257–269, 2010.

C. Eckart and G. Young. The approximation of one matrix by another of lower rank. *Psychometrika*, 1(3):211–218, 1936.

O. Elkabetz and N. Cohen. Continuous vs. discrete optimization of deep neural networks. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan (eds.), *Advances in Neural Information Processing Systems 34*, pp. 4947–4960, 2021.

X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In G. Gordon, D. Dunson, and M. Dudík (eds.), *Proceedings of the 14th International Conference on Artificial Intelligence and Statistics*, pp. 315–323, 2011.

G. Gluch and R. Urbanke. Noether: the more things change, the more they stay the same, 2021. arXiv:2104.05508.

I. Goodfellow, Y. Bengio, and A. Courville. *Deep Learning*. MIT Press, 2016.

I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In *Proceedings of the 30th International Conference on Machine Learning*, pp. 1319–1327, 2013.

D. J. Gross. Gauge theory—past, present, and future? *Chinese Journal of Physics*, 30:955–972, 1992.

F. V. Gubarev, L. Stodolsky, and V. I. Zakharov. On the significance of the vector potential squared. *Physical Review Letters*, 86(11):2220–2222, 2001.

A. Gunawardana and W. Byrne. Convergence theorems for generalized alternating minimization procedures. *Journal of Machine Learning Research*, 6:2049–2073, 2005.

K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In *Proceedings of the 2015 IEEE International Conference on Computer Vision*, pp. 1026–1034, 2015.

G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In *Proceedings of the International Conference on Artificial Neural Networks*, pp. 44–51, 2011.

R. A. Horn and C. R. Johnson. *Matrix Analysis*. Cambridge University Press, 2012.

S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In F. R. Bach and D. M. Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37, pp. 448–456, 2015.

J. D. Jackson. From Lorenz to Coulomb and other explicit gauge transformations. *American Journal of Physics*, 70:917–928, 2002.

D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Y. Bengio and Y. LeCun (eds.), *Proceedings of the 3rd International Conference on Learning Representations*, 2015.

J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. *Information and Computation*, 132(1):1–63, 1997.

A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems 25*, pp. 1106–1114, 2012.

D. Kunin, J. Sagastuy-Breña, S. Ganguli, D. L. K. Yamins, and H. Tanaka. Neural mechanics: Symmetry and broken conservation laws in deep learning dynamics. In *Proceedings of the 9th International Conference on Learning Representations*, 2021.

D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. *Nature*, 401:788–791, 1999.

D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In T. K. Leen, T. G. Dietterich, and V. Tresp (eds.), *Advances in Neural Information Processing Systems 13*, pp. 556–562. MIT Press, 2000.

K. Lyu and J. Li. Gradient descent maximizes the margin of homogeneous neural networks. In *Proceedings of the 8th International Conference on Learning Representations*, 2020.

Q. Meng, S. Zheng, H. Zhang, W. Chen, Q. Ye, Z.-M. Ma, N. Yu, and T.-Y. Liu. G-SGD: Optimizing ReLU neural networks in its positively scale-invariant space. In *Proceedings of the 7th International Conference on Learning Representations*, 2019.

R. R. Meyer. Sufficient conditions for the convergence of monotonic mathematical programming algorithms. *Journal of Computer and System Sciences*, 12(1):108–121, 1976.

S. Mohan, Z. Kadkhodaie, E. P. Simoncelli, and C. Fernandez-Granda. Robust and interpretable blind image denoising via bias-free convolutional neural networks. In *Proceedings of the 8th International Conference on Learning Representations*, 2020.

B. Neyshabur, R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 28*, pp. 2422–2430, 2015a.

B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In *Proceedings of the 28th Conference on Learning Theory*, pp. 1376–1401, 2015b.

E. Noether. Invariante variationsprobleme. *Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse*, pp. 235–257, 1918.

B. T. Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational Mathematics and Mathematical Physics*, 4(5):1–17, 1964.

T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 29*, pp. 901–909. Curran Associates, Inc., 2016.

L. K. Saul, F. Sha, and D. D. Lee. Statistical signal processing with nonnegativity constraints. In *Proceedings of the 8th European Conference on Speech Communication and Technology*, pp. 1001–1004, 2003.

A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Y. Bengio and Y. LeCun (eds.), *Proceedings of the 2nd International Conference on Learning Representations*, 2013.

F. Sha, Y. Lin, L. K. Saul, and D. D. Lee. Multiplicative updates for nonnegative quadratic programming. *Neural Computation*, 19:2004–2031, 2007.

B. K. Sriperumbudur and G. R. G. Lanckriet. A proof of convergence of the concave-convex procedure using Zangwill's theory. *Neural Computation*, 24:1391–1407, 2012.

P. Stock, B. Graham, R. Gribonval, and H. Jégou. Equi-normalization of neural networks. In *Proceedings of the 7th International Conference on Learning Representations*, 2019.

H. Tanaka and D. Kunin. Noether's learning dynamics: Role of symmetry breaking in neural networks. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan (eds.), *Advances in Neural Information Processing Systems 34*, pp. 25646–25660, 2021.

H. Tanaka, D. Kunin, D. L. Yamins, and S. Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems 33*, pp. 6377–6389, 2020.

T. Tieleman and G. E. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. *COURSERA: Neural Networks for Machine Learning*, 4(2):26–31, 2012.

T. van Laarhoven. L2 regularization versus batch and weight normalization, 2017. arXiv:1706.04340.

A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. *Proceedings of the National Academy of Sciences USA*, 113(47):E7351–E7358, 2016.

C. F. J. Wu. On the convergence properties of the EM algorithm. *Annals of Statistics*, 11(1):95–103, 1983.

A. L. Yuille and A. Rangarajan. The concave-convex procedure. *Neural Computation*, 15:915–936, 2003.

W. J. Zangwill. *Nonlinear programming: A unified approach*. Prentice Hall, 1969.

A. Zee. *Fearful Symmetry: The Search for Beauty in Modern Physics*. Princeton University Press, 2016.

B. Zhao, N. Dehmamy, R. Walters, and R. Yu. Symmetry teleportation for accelerated optimization. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems 35*, 2022.

## A Convergence Of Multiplicative Updates

In this appendix we prove that the multiplicative updates in Algorithm 1 minimize the ℓp,q-norm in eq. (1). The proof has many technicalities, so we begin by developing further intuitions in section A.1, which analyzes the fixed points of these updates, and section A.2, which relates the updates to those considered in previous work. Finally, section A.3 gives a formal proof of convergence.

## A.1 Analysis Of Fixed Points

It is clear that Algorithm 1 can only converge to fixed points of the multiplicative updates. We begin with an elementary observation about where these fixed points occur.

Proposition 7 (Fixed Points). If a weight matrix W is a fixed point of the multiplicative updates in Algorithm 1, then all of the rescaling factors in eqs. (4, 5, 8) are equal to unity: i.e., ai = 1 for all i.

Proof. We proceed by induction. Note that a1 = 1 because the network must have at least one input unit, and the rescaling factors at all non-hidden units are fixed to one. Let i > 1 and suppose that aj = 1 for all j < i. The ith unit in the network is either hidden (i ∈ H) or not hidden (i ∉ H). If the latter, then as before we have trivially that ai = 1. If the former, then there must exist some j such that Wij ≠ 0 (since otherwise the unit would be unaffected by the network's inputs, a condition that we exclude). Note that the weight Wij is multiplicatively rescaled in eq. (4) by the ratio ai/aj. Since Wij ≠ 0, a fixed point occurs only when ai = aj, and since aj = 1 by the inductive hypothesis, it follows that ai = 1.

Next we consider what the fixed points of Algorithm 1 signify. Recall that the updates were introduced to minimize the norm in eq. (1). The next lemma shows that the fixed points correspond to global minima.

Lemma 8 (Global Optimality). If a weight matrix W is a fixed point of the multiplicative updates in Algorithm 1, then there exists no weight matrix W′ ∼ W such that ∥W′∥p,q < ∥W∥p,q.

Proof. Suppose that W′ ∼ W with W′ij = Wij (ai/aj) for some rescaling transformation a ∈ A, and consider how ∥W′∥^q_{p,q} depends on the rescaling factors of this transformation. This dependence is captured (up to a multiplicative constant) by the continuous function F : A → ℜ, where

$$F(\mathbf{a})=\frac{1}{q}\sum_{i}\left(\sum_{j}|W_{ij}|^{p}\,(a_{i}/a_{j})^{p}\right)^{\frac{q}{p}}.\tag{29}$$

It is instructive to examine the dependence of this function on the logarithms of the scaling factors, log ai, rather than the scaling factors themselves. Doing so, we find:

$$F(\mathbf{a})=\frac{1}{q}\sum_{i}\exp\left(\frac{q}{p}\,\log\sum_{j}e^{\,p\left[\log a_{i}-\log a_{j}+\log|W_{ij}|\right]}\right).\tag{30}$$

It follows from basic composition laws of convex functions (Boyd & Vandenberghe, 2004) that ∥W′∥^q_{p,q} is itself a convex function of the vector log a = (log a1, log a2, ..., log an), and this in turn implies that any stationary point of F *is a global minimizer of* F. To locate such a point, we examine where the partial derivatives of F vanish. Starting from eq. (29), a tedious but straightforward calculation gives

$$\frac{\partial F}{\partial a_{i}}=\frac{1}{a_{i}}\left[\left(\sum_{j}|W_{ij}|^{p}\,(a_{i}/a_{j})^{p}\right)^{\frac{q}{p}}-\sum_{j}\left(\sum_{k}|W_{jk}|^{p}\,(a_{j}/a_{k})^{p}\right)^{\frac{q}{p}-1}|W_{ji}|^{p}\,(a_{j}/a_{i})^{p}\right].\tag{31}$$
Now suppose that W is a fixed point of the multiplicative updates. Then by the previous proposition, it must be the case that the right hand side of eq. (8) is equal to unity. This occurs when the denominator and numerator of eq. (8) are equal, or equivalently when

$$\left(\sum_{j}|W_{ij}|^{p}\right)^{\frac{q}{p}}\ =\ \sum_{j}\left(\sum_{k}|W_{jk}|^{p}\right)^{\frac{q}{p}-1}|W_{ji}|^{p}.\tag{32}$$

Eq. (32) can be viewed as a balancing condition that holds at fixed points of the multiplicative updates: it equates a sum over incoming weights (on the left) with a sum over outgoing weights (on the right). Comparing the last two equations, we see another consequence of this balance: it implies that the gradient of F in eq. (31) vanishes when a = 1, where 1 ∈ ℜn is the vector of all ones. But this implies that a global minimum of F(a) is obtained when a is the *identity* transformation. In this case, by definition, there exists no a′ ∈ A such that F(a′) < F(1), or equivalently, there exists no W′ ∼ W such that ∥W′∥p,q < ∥W∥p,q.

To summarize, we have *not yet* shown that the updates in Algorithm 1 converge. But we have shown that if they do converge, they yield a functionally equivalent network whose weights minimize the norm in eq. (1).
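The convexity property at the heart of Lemma 8 is easy to probe numerically. The following minimal sketch is our own illustration, not part of Algorithm 1: it evaluates a function mirroring F in eq. (29) as a function of log a for a random weight matrix, and checks the midpoint inequality of convex functions along randomly chosen segments. The names and the particular values of n, p, and q are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 6, 2.0, 1.0

# A random dense matrix stands in for the weight matrix W.
W = rng.standard_normal((n, n))

def F(log_a):
    # Eq. (29) in log coordinates: (1/q) sum_i (sum_j |W_ij|^p (a_i/a_j)^p)^(q/p).
    a = np.exp(log_a)
    S = (np.abs(W) ** p) * (a[:, None] / a[None, :]) ** p
    return np.sum(S.sum(axis=1) ** (q / p)) / q

# A convex function satisfies F((u+v)/2) <= (F(u)+F(v))/2 for all u, v.
for _ in range(1000):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    mid, avg = F(0.5 * (u + v)), 0.5 * (F(u) + F(v))
    assert mid <= avg * (1 + 1e-9), "midpoint convexity violated"
print("midpoint convexity of F(log a) confirmed on random segments")
```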

## A.2 Relation To ENorm

Consider the special case of the norm in eq. (1) when p=q. This special case leads to many simplifications. When p=q, for example, the partial derivative in eq. (31) reduces to

$$\frac{\partial F}{\partial a_{i}}=\frac{1}{a_{i}}\left[\sum_{j}|W_{ij}|^{p}\,(a_{i}/a_{j})^{p}-\sum_{j}|W_{ji}|^{p}\,(a_{j}/a_{i})^{p}\right].\tag{33}$$

This expression is sufficiently simple to pursue a strategy of coordinate descent, minimizing F(a) with respect to one rescaling factor at a time. In particular, consider the rescaling factor ai at some hidden unit (i ∈ H), and suppose that aj = 1 for all j ≠ i. Then the partial derivative in eq. (33) vanishes when Σj |Wij|^p ai^p = Σj |Wji|^p ai^{−p}, or equivalently when

$$a_{i}=\left[\frac{\sum_{j}|W_{ji}|^{p}}{\sum_{j}|W_{ij}|^{p}}\right]^{\frac{1}{2p}}.\tag{34}$$
Eq. (34) is the basis for the ENorm algorithm (Stock et al., 2019), which uses multiplicative updates with rescaling factors, as in eqs. (4–5), to minimize the ℓp,p-norm of the weight matrix. The updates for ENorm are derived from *block* coordinate descent of eq. (29) for the special case p = q; the blocks are composed of hidden units in the same layer of a deep network (for which the partial derivatives in eq. (33) decouple). As its name suggests, the procedure aims to equalize the p-norms of incoming and outgoing weights.

When p = q, the astute reader may have noticed a mismatch in the outer exponents of eqs. (8) and (34): the former is 1/(4p) for the multiplicative updates in this paper, while the latter is 1/(2p) for ENorm. The origin of this discrepancy, by a factor of two, will be further explained in section A.3. At a high level, the discrepancy arises because ENorm is based on block coordinate descent, whereas the updates in this paper are applied in parallel at all the hidden units of the network. The larger exponent of ENorm corresponds to a more aggressive update, but one that is only applied to the rescaling factors in one layer of hidden units. The smaller exponent in our approach is needed to provide a similar guarantee of monotonic convergence when all the rescaling factors are updated in parallel. Compared to previous work, our results can be viewed as extending the reach of parallelizable multiplicative updates in two directions—first, to the larger family of feedforward (but not strictly layered) networks, and second, to the larger family of norms with p ≠ q.
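To make the comparison concrete, here is a minimal numerical sketch of the parallel updates. Eq. (8) itself is stated earlier in the paper and is not reproduced here; in the sketch we assume it takes the form ai = [Σj ρj(W) πji(W) / ρi(W)]^{1/(4r)} with r = max(p, q), whose numerator and denominator are exactly the two sides of the balancing condition in eq. (32), with the exponent 1/(4r) discussed in section A.3. The toy network (a dense lower-triangular W) and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 8, 2.0, 1.0
r = max(p, q)

# Toy feedforward network: unit i receives weight W[i, j] from unit j < i.
# Unit 0 plays the role of an input, unit n-1 of an output; the rest are hidden.
W = np.tril(rng.standard_normal((n, n)), k=-1)
hidden = np.arange(1, n - 1)

def norm_q(W):
    # (1/q) sum_i (sum_j |W_ij|^p)^(q/p), i.e. eq. (29) evaluated at a = 1.
    return np.sum((np.abs(W) ** p).sum(axis=1) ** (q / p)) / q

norms = [norm_q(W)]
for _ in range(500):
    absWp = np.abs(W) ** p
    row = absWp.sum(axis=1)                  # sum_j |W_ij|^p  (incoming weights)
    rho = np.zeros(n)
    rho[row > 0] = row[row > 0] ** (q / p)   # per-unit regularizer, eq. (6)
    coef = np.zeros(n)
    coef[row > 0] = row[row > 0] ** (q / p - 1.0)
    num = coef @ absWp                       # sum_j rho_j pi_ji: outgoing side of eq. (32)
    a = np.ones(n)
    a[hidden] = (num[hidden] / rho[hidden]) ** (1.0 / (4.0 * r))
    W = W * np.outer(a, 1.0 / a)             # W'_ij = W_ij a_i / a_j, eqs. (4-5)
    norms.append(norm_q(W))

# Lemma 9: each parallel update can only decrease the norm.
assert all(y <= x + 1e-9 for x, y in zip(norms, norms[1:]))
print(f"monotone decrease: {norms[0]:.4f} -> {norms[-1]:.4f}")
```

At convergence the rescaling factors approach unity, matching Proposition 7, and the balancing condition of eq. (32) holds at every hidden unit.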

## A.3 Proof Of Convergence

Theorem 1 is proved by showing that the multiplicative updates in Algorithm 1 satisfy the preconditions for Meyer's convergence theorem (Meyer, 1976). Meyer's result itself builds on the convergence theory of Zangwill (1969). Our tools are similar to those that have been used to derive the convergence of other multiplicative updates with nonnegativity constraints (Lee & Seung, 1999; Saul et al., 2003; Sha et al., 2007), as well as more general iterative procedures in statistical learning (Dempster et al., 1977; Wu, 1983; Yuille & Rangarajan, 2003; Gunawardana & Byrne, 2005; Sriperumbudur & Lanckriet, 2012).

We prove the theorem with the aid of two additional lemmas. The first lemma considers the effect of a single rescaling transformation in the weight-balancing procedure of Algorithm 1. This lemma is at the heart of the algorithm: it shows that each multiplicative update reduces the entry-wise ℓp,q-norm of the weight matrix.

Lemma 9 (Monotone Improvement). Suppose W′ ∼ W, and specifically, suppose W′ij = Wij (ai/aj) where a ∈ A is the vector of rescaling factors whose elements for i ∈ H are given by eq. (8). Then ∥W′∥p,q ≤ ∥W∥p,q, and this inequality is strict unless W′ = W.

Proof. Recall the function F : A → ℜ from eq. (29), and recall in particular that F(a) = q^{−1} ∥W′∥^q_{p,q} if W′ij = Wij (ai/aj). To prove the lemma, we must show equivalently that F(a) ≤ F(1) when a ∈ A is given by eq. (8), and also that this inequality is strict unless a = 1. The proof is based on constructing a so-called auxiliary function, as in the derivations of the EM algorithm (Dempster et al., 1977), nonnegative matrix factorization (Lee & Seung, 2000), and the convex-concave procedure (Yuille & Rangarajan, 2003).

Specifically, we seek an auxiliary function G : *A → ℜ* with the following three properties:
(i) F(a) ≤ G(a) for all a ∈ A.

(ii) F(1) = G(1) where 1 is the vector of all ones.

(iii) G : *A → ℜ* has a unique global minimizer at a ∈ A given by eq. (8).

Suppose, for instance, that we can construct a function with these properties. Then at the minimizer a ∈ A given by eq. (8), we have at once that

$$F(\mathbf{a})\leq G(\mathbf{a})\leq G(\mathbf{1})=F(\mathbf{1}),\tag{35}$$

where the inequalities in eq. (35) follow from properties (i) and (iii) of the auxiliary function and the equality follows from property (ii). This proves the first part of the lemma. Now if a ≠ 1 in eqs. (8) and (35), then G(a) < G(1) (because G has a *unique* global minimizer), and by extension F(a) < F(1). On the other hand, if a = 1, then W is a fixed point of the multiplicative updates. This proves the second part of the lemma.

The more challenging part of the proof is to *construct* an auxiliary function with these properties. As shorthand, let r = max(p, q), and consider the function

$$G({\bf a})\ =\ \frac{1}{2r}\sum_{ij}\rho_{i}({\bf W})\,\pi_{ij}({\bf W})\left(a_{i}^{2r}+a_{j}^{-2r}\right)\ +\ \max\left(0,1-\frac{q}{p}\right)F({\bf1}).\tag{36}$$

We claim that eq. (36) satisfies the three properties listed above, and we verify them in order of difficulty. First, we verify property (ii): substituting a = 1 into eqs. (29) and (36), and noting that Σi ρi(W) = qF(1) and Σj πij(W) = 1, we find G(1) = (q/r)F(1) + max(0, 1 − q/p)F(1) = F(1), whether r = q or r = p.

Next we verify property (iii). We begin by observing that Σj |Wij|^p > 0 and Σj |Wji|^p > 0 for all i ∈ H; these conditions are necessary (as mentioned in section 2.1) to ensure that each hidden unit lies on some directed path from the network's inputs to outputs. It follows that the coefficients of ai^{2r} and aj^{−2r} in eq. (36) are strictly positive, and hence G(a) is *strongly convex* in the variables (a1^r, a2^r, ..., an^r). By setting ∂G/∂ai^r = 0 for all i ∈ H, we obtain the solution in eq. (8), and by strong convexity this solution must correspond to a unique global minimizer. This verifies property (iii) of the auxiliary function, and it also accounts for the peculiar exponent of 1/(4r), where r = max(p, q), that appears in the multiplicative update.
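To make the minimization explicit, note that eq. (36) is separable: writing ui = ai^r and using Σj πij(W) = 1, the terms of G that involve ui are (2r)^{−1}[ρi(W) ui^2 + (Σj ρj(W) πji(W)) ui^{−2}]. Setting ∂G/∂ui = 0 then gives

$$\rho_{i}(\mathbf{W})\,u_{i}=\Big(\sum_{j}\rho_{j}(\mathbf{W})\,\pi_{ji}(\mathbf{W})\Big)\,u_{i}^{-3},\qquad\text{or equivalently}\qquad a_{i}=\left[\frac{\sum_{j}\rho_{j}(\mathbf{W})\,\pi_{ji}(\mathbf{W})}{\rho_{i}(\mathbf{W})}\right]^{\frac{1}{4r}},$$

which, by the argument above, coincides with the update in eq. (8); setting ai = 1 recovers the balancing condition of eq. (32).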

Finally, we verify property (i) of the auxiliary function. To do this, we must work out separately the cases p ≤ q and p ≥ q, which require different arguments to establish the upper bound F(a) ≤ G(a). We also remind the reader of the definitions in eqs. (6–7).

- **Case 1:** p ≤ q. To verify property (i) in this case we must show that F(a) ≤ G(a) when p ≤ q. Starting from eq. (29) for F(a), we can derive the auxiliary function in eq. (36) via two simple inequalities:

$$\begin{aligned}F(\mathbf{a})&=\frac{1}{q}\sum_{i}\left(\sum_{j}|W_{ij}|^{p}\,(a_{i}/a_{j})^{p}\right)^{\frac{q}{p}},&\text{(37)}\\ &=\frac{1}{q}\sum_{i}\rho_{i}(\mathbf{W})\left(\sum_{j}\pi_{ij}(\mathbf{W})\,(a_{i}/a_{j})^{p}\right)^{\frac{q}{p}},&\text{(38)}\\ &\leq\frac{1}{q}\sum_{i}\rho_{i}(\mathbf{W})\sum_{j}\pi_{ij}(\mathbf{W})\,(a_{i}/a_{j})^{q},&\text{(39)}\\ &\leq\frac{1}{2q}\sum_{ij}\rho_{i}(\mathbf{W})\,\pi_{ij}(\mathbf{W})\left(a_{i}^{2q}+a_{j}^{-2q}\right),&\text{(40)}\\ &=G(\mathbf{a})\quad\text{for }p\leq q.&\text{(41)}\end{aligned}$$

In eq. (38) we have substituted the definitions of ρi and πij from eqs. (6–7). In eq. (39) we have used Jensen's inequality, exploiting the fact that πij is a stochastic matrix and that x^{q/p} is a convex function of x when p ≤ q, and in eq. (40) we have appealed to the inequality of arithmetic and geometric means, (ai/aj)^q ≤ ½(ai^{2q} + aj^{−2q}). Finally, eq. (41) follows upon comparing the previous line with eq. (36), whose second term vanishes (and in which r = q) when p ≤ q. This verifies property (i) for the case p ≤ q.

- **Case 2:** p ≥ q. Now we must show that F(a) ≤ G(a) when p ≥ q. Here we exploit the fact that a concave function (e.g., x^{q/p} when p ≥ q) lies below any one of its tangents; taking the tangent at the point Σj |Wij|^p, it follows that

$$\begin{aligned}F(\mathbf{a})&=\frac{1}{q}\sum_{i}\left(\sum_{j}|W_{ij}|^{p}\,(a_{i}/a_{j})^{p}\right)^{\frac{q}{p}},&\text{(42)}\\ &\leq F(\mathbf{1})+\frac{1}{p}\sum_{ij}\left(\sum_{k}|W_{ik}|^{p}\right)^{\frac{q}{p}-1}|W_{ij}|^{p}\Big[(a_{i}/a_{j})^{p}-1\Big],&\text{(43)}\\ &=F(\mathbf{1})+\frac{1}{p}\sum_{ij}\rho_{i}(\mathbf{W})\,\pi_{ij}(\mathbf{W})\Big[(a_{i}/a_{j})^{p}-1\Big],&\text{(44)}\\ &\leq F(\mathbf{1})\left(1-\frac{q}{p}\right)+\frac{1}{2p}\sum_{ij}\rho_{i}(\mathbf{W})\,\pi_{ij}(\mathbf{W})\left(a_{i}^{2p}+a_{j}^{-2p}\right),&\text{(45)}\\ &=G(\mathbf{a})\quad\text{for }p\geq q.&\text{(46)}\end{aligned}$$

In eq. (44) we have again substituted the definitions in eqs. (6–7). Eq. (45) uses the identity Σij ρi(W) πij(W) = Σi ρi(W) = qF(1) to collect the constant term, together with the inequality of arithmetic and geometric means applied to (ai/aj)^p. Finally, eq. (46) follows upon comparing with eq. (36), in which now r = p. This verifies property (i) for the case p ≥ q.

In sum, the auxiliary function G(a) is derived by identifying those parts of ∥W′∥^q_{p,q} that can be bounded by elementary inequalities. In doing so, we find that different bounds are required for the cases p < q and p > q, and these differences account for the exponent r = max(p, q) that appears in the multiplicative updates.
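Both bounds are easy to validate numerically. The sketch below is our own; the dimensions and exponent pairs are arbitrary test values. It draws random positive rescalings a and confirms F(a) ≤ G(a), with F implemented from eq. (29) and G from eq. (36).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def check_bound(p, q, trials=2000):
    r = max(p, q)
    W = rng.standard_normal((n, n))
    absWp = np.abs(W) ** p
    row = absWp.sum(axis=1)
    rho = row ** (q / p)                   # per-unit regularizer, eq. (6)
    pi = absWp / row[:, None]              # stochastic matrix, eq. (7)
    F1 = rho.sum() / q                     # F(1)
    for _ in range(trials):
        a = np.exp(0.5 * rng.standard_normal(n))
        S = absWp * (a[:, None] / a[None, :]) ** p
        F = np.sum(S.sum(axis=1) ** (q / p)) / q                     # eq. (29)
        G = (rho[:, None] * pi * (a[:, None] ** (2 * r)
             + a[None, :] ** (-2 * r))).sum() / (2 * r) \
            + max(0.0, 1.0 - q / p) * F1                             # eq. (36)
        assert F <= G * (1 + 1e-9), (p, q)

for p, q in [(1.0, 2.0), (2.0, 1.0), (2.0, 2.0), (3.0, 1.5)]:
    check_bound(p, q)
print("auxiliary bound F(a) <= G(a) holds on all random instances")
```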

Lemma 9 shows that each multiplicative update in Algorithm 1 decreases ∥W∥p,q unless W is itself a fixed point. The lemma also rules out oscillations between distinct global minima. But this by itself is not enough to prove that the updates converge. To do this, we must also show that none of the rescaling factors increase without bound. This is the content of the next lemma.

Lemma 10 (Compactness of Sublevel Sets). Let C = {W′ | W′ ∼ W, ∥W′∥p,q ≤ ∥W∥p,q} denote the sublevel set of weight matrices whose ℓp,q-norm does not exceed that of W. This set is compact.

Proof. Let F : A → ℜ be the function, as defined in eq. (29), that computes the change in the ℓp,q-norm of the weight matrix after a rescaling transformation. To prove the lemma, we must show equivalently that the sublevel set F1 = {a ∈ A | F(a) ≤ F(1)} is compact. It follows from the continuity of F that its sublevel sets are closed; thus it remains only to show that F1 is bounded. At a high level, this boundedness follows from the fact that the network has finite depth; a similar result has been obtained for the special case p = q (Stock et al., 2019). In particular, suppose a ∈ F1 with F(a) ≤ F(1). Then if Wij ≠ 0, it must be the case that

$$\frac{a_{i}}{a_{j}}\leq\frac{(qF(\mathbf{1}))^{\frac{1}{q}}}{|W_{ij}|},\tag{47}$$

because otherwise the ijth term of the sum in eq. (29) would by itself exceed F(1). Let j0 → j1 → ··· → jm denote an m-step path through the network that starts at some input unit (j0 ∈ I), passes through the ith hidden unit after k steps (so that jk = i), ends at some output unit (jm ∈ O), and traverses only nonzero weights W_{jℓ jℓ−1} ≠ 0 in the process. Note that there must exist at least one such path if the ith hidden unit contributes in some way to the function computed by the network. Since aj0 = 1 and ajk = ai, it follows that

$$a_{i}\ =\ \frac{a_{j_{k}}}{a_{j_{0}}}\ =\ \prod_{\ell=1}^{k}\frac{a_{j_{\ell}}}{a_{j_{\ell-1}}}\ \leq\ \prod_{\ell=1}^{k}\frac{(qF(\mathbf{1}))^{\frac{1}{q}}}{|W_{j_{\ell}j_{\ell-1}}|},\tag{48}$$

where the inequality follows from eq. (47). Likewise, since ajm = 1 and ajk = ai, it follows that

$$\frac{1}{a_{i}}\ =\ \frac{a_{j_{m}}}{a_{j_{k}}}\ =\ \prod_{\ell=k+1}^{m}\frac{a_{j_{\ell}}}{a_{j_{\ell-1}}}\ \leq\ \prod_{\ell=k+1}^{m}\frac{(qF(\mathbf{1}))^{\frac{1}{q}}}{|W_{j_{\ell}j_{\ell-1}}|}.\tag{49}$$

Eqs. (48–49) provide upper and lower bounds on ai for all a ∈ F1. Thus F1 is closed and bounded, hence compact.

The previous two lemmas establish the necessary conditions for Meyer's monotone convergence theorem (Meyer, 1976). Armed with these lemmas, we can now prove Theorem 1.

Proof of Theorem 1. Let W0 denote an initial weight matrix. From Lemma 10, it follows that the set C = {W|W ∼ W0, ∥W∥p,q ≤ ∥W0∥p,q} is compact. From Lemma 9, it follows that each multiplicative update yields a weight matrix W ∈ C whose ℓp,q-norm is less than or equal to the previous one (with equality occurring only when W is both a global minimizer and a fixed point of the updates). Finally, from eq. (8) we note that the multiplicative coefficients are a continuous function of the weights from which they are derived. The procedure in Algorithm 1 therefore satisfies the preconditions of compactness, strict monotonicity, and continuity for Meyer's monotone convergence theorem (Meyer, 1976) in the setting where fixed points occur at (and only at) global minima of ∥W∥p,q in C.

## B Weight-Balancing Flow For ℓp,q-Norm Regularization

In this appendix we prove Theorem 6, which states that the weight-balancing flow for the regularizer R(W) in eq. (26) descends in the network's regularized loss function. As a preliminary step, we remind the reader of the per-unit regularizer ρi(W) defined in eq. (6) and the stochastic matrix with elements πij(W) defined in eq. (7). From these definitions, it is also a straightforward exercise to verify that

$$W_{ij}\,\frac{\partial R}{\partial W_{ij}}\;=\;\pi_{ij}(\mathbf{W})\,\rho_{i}(\mathbf{W}),$$

and this identity is useful in what follows.
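For completeness, here is the verification. It assumes, consistently with eq. (29) evaluated at a = 1, that eq. (26) takes the form R(W) = (1/q) Σi (Σj |Wij|^p)^{q/p}, with ρi and πij as defined in eqs. (6–7):

$$W_{ij}\,\frac{\partial R}{\partial W_{ij}}=W_{ij}\left(\sum_{k}|W_{ik}|^{p}\right)^{\frac{q}{p}-1}|W_{ij}|^{p-1}\,\mathrm{sign}(W_{ij})=\left(\sum_{k}|W_{ik}|^{p}\right)^{\frac{q}{p}}\frac{|W_{ij}|^{p}}{\sum_{k}|W_{ik}|^{p}}=\rho_{i}(\mathbf{W})\,\pi_{ij}(\mathbf{W}).$$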

Proof of Theorem 6. Consider the regularizer R(W) in eq. (26) as a function of the variables ωij = log |Wij|. To prove the result, we show that the Hessian H(ω) = ∂²R/∂ω∂ω⊤ in Theorem 5 is positive definite whenever no weight Wij is equal to zero, or equivalently whenever every ωij is finite. The proof requires two steps—first, to compute the Hessian, and second, to show that it is positive definite. For the first step, we begin by evaluating the time-derivative in the flow of eq. (16). Here we find, by repeated differentiation, that
$$\begin{aligned}\frac{d}{dt}\left[W_{ij}\frac{\partial R}{\partial W_{ij}}\right]&=\frac{d}{dt}\Big[\pi_{ij}(\mathbf{W})\,\rho_{i}(\mathbf{W})\Big],&\text{(50)}\\ &=\left[\sum_{k}\frac{\partial\pi_{ij}}{\partial W_{ik}}\dot{W}_{ik}\right]\rho_{i}(\mathbf{W})+\pi_{ij}(\mathbf{W})\left[\sum_{k}\frac{\partial\rho_{i}}{\partial W_{ik}}\dot{W}_{ik}\right],&\text{(51)}\\ &=p\,\pi_{ij}(\mathbf{W})\sum_{k}\Big[\delta_{jk}-\pi_{ik}(\mathbf{W})\Big]\frac{\dot{W}_{ik}}{W_{ik}}\,\rho_{i}(\mathbf{W})+q\,\pi_{ij}(\mathbf{W})\sum_{k}\Big[\pi_{ik}(\mathbf{W})\,\rho_{i}(\mathbf{W})\Big]\frac{\dot{W}_{ik}}{W_{ik}},&\text{(52)}\\ &=\pi_{ij}(\mathbf{W})\,\rho_{i}(\mathbf{W})\left[p\,\dot{\omega}_{ij}+(q-p)\sum_{k}\pi_{ik}(\mathbf{W})\,\dot{\omega}_{ik}\right].&\text{(53)}\end{aligned}$$
We can now read off the nonzero elements of H(ω) from the relation between eqs. (23) and (53). Let V be any nonzero matrix of the same size as W, and let Vec(V) denote the flattened representation of V as a vector. Then from eqs. (23) and (53) we have

$$\begin{aligned}\mathrm{Vec}(\mathbf{V})^{\top}\mathbf{H}(\omega)\,\mathrm{Vec}(\mathbf{V})&=p\sum_{ij}\pi_{ij}(\mathbf{W})\,\rho_{i}(\mathbf{W})\,V_{ij}^{2}+(q-p)\sum_{ijk}\pi_{ij}(\mathbf{W})\,\rho_{i}(\mathbf{W})\,\pi_{ik}(\mathbf{W})\,V_{ij}V_{ik},&\text{(54)}\\ &=p\sum_{ij}\rho_{i}(\mathbf{W})\,\pi_{ij}(\mathbf{W})\left(V_{ij}-\sum_{k}\pi_{ik}(\mathbf{W})\,V_{ik}\right)^{2}+q\sum_{i}\rho_{i}(\mathbf{W})\left(\sum_{j}\pi_{ij}(\mathbf{W})\,V_{ij}\right)^{2},&\text{(55)}\\ &>0\quad\text{for all }\mathbf{V}\neq\mathbf{0}.&\text{(56)}\end{aligned}$$

The strict inequality in the last line is justified by the following observations: (i) both πij(W) and ρi(W) are strictly positive when no weights are exactly equal to zero; (ii) the first term in eq. (55) vanishes only when V has constant rows; (iii) the second term vanishes only when Σj πij(W) Vij = 0 for every row i. Since V is nonzero, however, it is not possible to satisfy both of these conditions simultaneously, so one term or the other must be strictly positive. Finally, since Vec(V)⊤H(ω) Vec(V) > 0 for any nonzero V, we conclude that H(ω) is positive definite in every open orthant of the weight space. The result then follows from Theorem 5.
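As a final numerical sanity check (our own sketch; the dimensions and exponents are arbitrary), the code below evaluates the quadratic form both as read off in eq. (54) and as rearranged in eq. (55), confirming that the two expressions agree and are strictly positive for random nonzero V:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 5, 2.0, 1.0

W = rng.standard_normal((n, n))      # almost surely no zero weights
absWp = np.abs(W) ** p
row = absWp.sum(axis=1)
rho = row ** (q / p)                 # per-unit regularizer, eq. (6)
pi = absWp / row[:, None]            # stochastic matrix, eq. (7)

for _ in range(1000):
    V = rng.standard_normal((n, n))
    Vbar = (pi * V).sum(axis=1)      # row averages of V under pi
    # Eq. (54): the form read off from eqs. (23) and (53).
    q54 = (p * (pi * rho[:, None] * V**2).sum()
           + (q - p) * (rho * Vbar**2).sum())
    # Eq. (55): the same form rearranged as a sum of two nonnegative pieces.
    q55 = (p * (rho[:, None] * pi * (V - Vbar[:, None])**2).sum()
           + q * (rho * Vbar**2).sum())
    assert np.isclose(q54, q55) and q55 > 0
print("quadratic form: eq. (54) == eq. (55) > 0 for random nonzero V")
```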