
Differential Equation Scaling Limits Of Shaped And Unshaped Neural Networks

Mufan (Bill) Li, mufan.li@princeton.edu, Princeton University
Mihai Nica, nicam@uoguelph.ca, University of Guelph and Vector Institute
Reviewed on OpenReview: https://openreview.net/forum?id=iRDwUXYsSJ

Abstract

Recent analyses of neural networks with shaped activations (i.e. the activation function is scaled as the network size grows) have led to scaling limits described by differential equations. However, these results do not a priori tell us anything about "ordinary" unshaped networks, where the activation is unchanged as the network size grows. In this article, we find similar differential equation based asymptotic characterization for two types of unshaped networks.

  • Firstly, we show that the following two architectures converge to the same infinite-depth-and-width limit at initialization: (i) a fully connected ResNet with a $d^{-1/2}$ factor on the residual branch, where $d$ is the network depth; (ii) a multilayer perceptron (MLP) with depth $d \ll$ width $n$ and shaped ReLU activation at rate $d^{-1/2}$.

  • Secondly, for an unshaped MLP at initialization, we derive the first order asymptotic correction to the layerwise correlation. In particular, if $\rho_\ell$ is the correlation at layer $\ell$, then $q_t = \ell^2(1-\rho_\ell)$ with $t = \ell/n$ converges to an SDE with a singularity at $t = 0$.

These results together provide a connection between shaped and unshaped network architectures, and open up the possibility of studying the effect of normalization methods and how it connects with shaping activation functions.

1 Introduction

Martens et al. (2021); Zhang et al. (2022) proposed transforming the activation function to be more linear as the neural network becomes larger in size, which significantly improved the speed of training deep networks without batch normalization. Based on the infinite-depth-and-width limit analysis of Li et al. (2022), the principle of these transformations can be roughly summarized as follows: choose an activation function $\varphi_s : \mathbb{R} \to \mathbb{R}$ as a perturbation of the identity map depending on the network width $n$ (or depth $d = n^{2p}$, $p > 0$)

$$\varphi_{s}(x)=x+\frac{1}{n^{p}}h(x)+O(n^{-2p})=x+\frac{1}{\sqrt{d}}h(x)+O(d^{-1})\,,\tag{1.1}$$

where for simplicity we will ignore the higher order terms for now. Li et al. (2022) also showed the limiting multilayer perceptron (MLP) can be described by a Neural Covariance stochastic differential equation (SDE).

Furthermore, it appears the choice of $p = \frac{1}{2}$ is necessary to reach a limit that is neither degenerate nor trivial, at least when the depth-to-width ratio $\frac{d}{n}$ converges to a positive constant (see (Li et al., 2022, Proposition 3.4)).

Recently, Hayou & Yang (2023); Cirone et al. (2023) also studied the infinite-depth-and-width limit of a specific ResNet architecture (He et al., 2016). Most interestingly, the limit is described by an ordinary differential equation (ODE) very similar to the neural covariance SDE. Furthermore, Hayou & Yang (2023) showed the width and depth limits commute, i.e. there is no dependence on the depth-to-width ratio $\frac{d}{n}$. It is then natural to consider a more careful comparison and understanding of the two differential equations.



Figure 1: Empirical distribution of the transformed correlation $r_t = \log(\ell^2(1-\rho_\ell))$ for an unshaped ReLU MLP, with the SDE sample density computed via kernel density estimation. Simulated with $n = d = 150$, $\rho_0 = 0.3$, $r_0 = \log(1-\rho_0) = \log(0.7)$, SDE step size $10^{-2}$, and $2^{13}$ samples.

At the same time, Li et al. (2022) demonstrated the unshaped network also has an accurate approximation via a Markov chain. Jakub & Nica (2023) further studied the large width asymptotics of the Markov chain updates, where the transition kernel still depends on the width. Since the Markov chain quickly converges to a fixed point, it does not immediately appear to have a scaling limit. However, this motivates us to consider a modified scaling around the fixed point, so that we can recover a first order asymptotic correction term.

In this note, we provide two technical results that address both of the above questions. Both of the results are achieved by considering a modification of the scaling, which leads to the following results.

  • Firstly, we demonstrate that shaping the activation has a close connection to ResNets, and the covariance ODE is in fact just the deterministic drift component of the covariance SDE. Furthermore, in the limit where the scaled ratio $\frac{d}{n^{2p}}$ converges to a positive constant and $p \in (0, \frac{1}{2})$, the shaped MLP covariance also converges to the same ODE.

  • Secondly, we analyze the correlation of an unshaped MLP, providing a derivation of the first order asymptotic correction. The correction term arises from rescaling the correlation $\rho_\ell$ in layer $\ell$ by $q_\ell = \ell^2(1-\rho_\ell)$, and we show it is closely approximated by an SDE.

The rest of this article is organized as follows. Firstly, we will provide a brief literature review in the rest of this section. Next, we will review the most relevant known results on the covariance SDEs and ODEs in Section 2. Then in Section 3, we will make the connection between shaping and ResNets precise. At the same time, we will also provide a derivation for the unshaped regime in Section 4, where we show that by modifying the scaling yet again, we can recover another SDE related to the correlation of a ReLU MLP.

1.1 Related Work

On a conceptual level, the main difficulty of analyzing neural networks is due to the lack of mathematical tractability. In a seminal work, Neal (1995) showed that two layer neural networks at initialization converge to a Gaussian process. Beyond the result itself, the conceptual breakthrough opened up the field to analyzing large size asymptotics of neural networks. In particular, this led to a large body of work on large or infinite width neural networks (Lee et al., 2018; Jacot et al., 2018; Du et al., 2019; Mei et al., 2018; Sirignano & Spiliopoulos, 2018; Yang, 2019; Bartlett et al., 2021). However, the majority of these results relied on the network converging to a kernel limit, which is known to perform worse than neural networks (Ghorbani et al., 2020).

The gap in performance is believed to be primarily due to a lack of feature learning (Yang & Hu, 2021; Abbe et al., 2022; Ba et al., 2022). While this motivated the study of several alternative scaling limits, in this work we are mostly interested in the infinite-depth-and-width limit.

Table 1: Notation

| Notation | Description | Notation | Description |
|---|---|---|---|
| $n_{\rm in} \in \mathbb{N}$ | Input dimension | $n_{\rm out} \in \mathbb{N}$ | Output dimension |
| $n \in \mathbb{N}$ | Hidden layer width | $d \in \mathbb{N}$ | Number of hidden layers (depth) |
| $\varphi(\cdot)$ | Base activation | $\varphi_s(\cdot)$ | Shaped activation |
| $x^\alpha \in \mathbb{R}^{n_{\rm in}}$ | Input for $1 \le \alpha \le m$ | $W_0 \in \mathbb{R}^{n\times n_{\rm in}}$ | Weight matrix at layer 0 |
| $z^\alpha_{\rm out} \in \mathbb{R}^{n_{\rm out}}$ | Network output | $W_{\rm out} \in \mathbb{R}^{n_{\rm out}\times n}$ | Weight matrix at final layer |
| $z^\alpha_\ell \in \mathbb{R}^n$ | Neurons (pre-activation) for layer $1 \le \ell \le d$ | $W_\ell \in \mathbb{R}^{n\times n}$ | Weight matrix at layer $1 \le \ell \le d$; all weights initialized iid $\sim N(0,1)$ |
| $\varphi^\alpha_\ell \in \mathbb{R}^n$ | Neurons (post-activation) for layer $1 \le \ell \le d$ | $c \in \mathbb{R}$ | Normalizing constant $c := \left(\mathbb{E}\,\varphi(g)^2\right)^{-1}$ for $g \sim N(0,1)$ |
| $V^{\alpha\beta}_\ell \in \mathbb{R}$ | Covariance $\frac{c}{n}\langle\varphi^\alpha_\ell, \varphi^\beta_\ell\rangle$ | $\rho^{\alpha\beta}_\ell \in [-1,1]$ | Correlation $V^{\alpha\beta}_\ell / \sqrt{V^{\alpha\alpha}_\ell V^{\beta\beta}_\ell}$ |

This regime was first investigated by Hanin & Nica (2019b), who showed that not only does it fail to converge to a Gaussian process at initialization, it also learns features (Hanin & Nica, 2019a). This limit has since been analyzed with transform based methods (Noci et al., 2021) and central limit theorem approaches (Li et al., 2021). As we will describe in more detail soon, the result of most interest is the covariance SDE limit of Li et al. (2022). The MLP results were also further extended to the transformer setting (Noci et al., 2023).

The $d^{-1/2}$ scaling for ResNets was first considered by Hayou et al. (2021), with the depth limit carefully studied afterwards (Hayou, 2022; Hayou & Yang, 2023; Hayou, 2023). Fischer et al. (2023) also arrived at the same scaling through a different theoretical approach. This scaling has found applications for hyperparameter tuning (Bordelon et al., 2023; Yang et al., 2023) when used in conjunction with the µP scaling (Yang & Hu, 2021).

Batch and layer normalization methods were introduced as a remedy for unstable training (Ioffe & Szegedy, 2015; Ba et al., 2016), albeit theoretical analyses of these highly discrete changes per layer have been challenging. A recent promising approach studies the isometry gap, and shows that batch normalization achieves a similar effect as shaping activation functions (Meterez et al., 2023). Theoretical connections between these approaches using a differential equation based description remain an open problem.

2 Background On Shaped Networks And ResNets

Let $\{x^\alpha\}_{\alpha=1}^m$ be a set of input data points in $\mathbb{R}^{n_{\rm in}}$, and let $z^\alpha_\ell \in \mathbb{R}^n$ denote the $\ell$-th hidden layer with respect to the input $x^\alpha$. We consider the standard width-$n$ depth-$d$ MLP architecture with He-initialization (He et al., 2015) defined by the following recursion

$$z^{\alpha}_{\rm out}=\sqrt{\frac{c}{n}}W_{\rm out}\,\varphi(z^{\alpha}_{d})\,,\quad z^{\alpha}_{\ell+1}=\sqrt{\frac{c}{n}}W_{\ell}\,\varphi_{s}(z^{\alpha}_{\ell})\,,\quad z^{\alpha}_{1}=\frac{1}{\sqrt{n_{\rm in}}}W_{\rm in}\,x^{\alpha}\,,\tag{2.1}$$

where $\varphi_s : \mathbb{R}\to\mathbb{R}$ is the activation function to be specified, $c^{-1} = \mathbb{E}\,\varphi_s(g)^2$ for $g \sim N(0,1)$, $z^\alpha_\ell \in \mathbb{R}^n$, $z^\alpha_{\rm out} \in \mathbb{R}^{n_{\rm out}}$, and the matrices $W_{\rm out} \in \mathbb{R}^{n_{\rm out}\times n}$, $W_\ell \in \mathbb{R}^{n\times n}$, $W_{\rm in} \in \mathbb{R}^{n\times n_{\rm in}}$ are initialized with iid $N(0,1)$ entries.

The main structure used to study neural networks at initialization, such as the neural network Gaussian process (NNGP) (Neal, 1995; Lee et al., 2018), is the conditional Gaussian property. More precisely, if we condition on the previous layers $\mathcal{F}_\ell = \sigma((z^\alpha_k)_{\alpha\in[m],k\le\ell})$, we have that

$$[z_{\ell+1}^{\alpha}]_{\alpha=1}^{m}|{\mathcal{F}}_{\ell}\stackrel{d}{=}{\mathcal{N}}\left(0,\frac{c}{n}[\langle\varphi_{\ell}^{\alpha},\varphi_{\ell}^{\beta}\rangle]_{\alpha,\beta=1}^{m}\otimes I_{n}\right)\,,\tag{2.2}$$

where we use the notation $[\,\cdot\,]_{\alpha=1}^m$ to vertically stack vectors, let $\varphi^\alpha_\ell = \varphi_s(z^\alpha_\ell)$ be the post activation hidden layer, and let $\otimes$ be the Kronecker product.


This naturally leads us to define the covariance matrix as $V_\ell := \frac{c}{n}[\langle\varphi^\alpha_\ell, \varphi^\beta_\ell\rangle]_{\alpha,\beta=1}^m$. The NNGP results essentially reduce to applying the Law of Large Numbers inductively to show the covariance $V^{\alpha\beta}_\ell$ converges to its expected value. More precisely, if $[z^\alpha_\ell]_{\alpha=1}^m \sim \mathcal{N}(0, V_{\ell-1}\otimes I_n)$, then in the limit as $n\to\infty$

$$V_{\ell}^{\alpha\beta}=\frac{c}{n}\sum_{i=1}^{n}\varphi_{s}(z_{\ell,i}^{\alpha})\varphi_{s}(z_{\ell,i}^{\beta})\stackrel{d}{\to}c\,\mathbb{E}\,\varphi_{s}(g^{\alpha})\varphi_{s}(g^{\beta})\,,\quad\text{where }[g^{\alpha},g^{\beta}]^{\top}\sim\mathcal{N}\left(0,\begin{bmatrix}V_{\ell-1}^{\alpha\alpha}&V_{\ell-1}^{\alpha\beta}\\ V_{\ell-1}^{\alpha\beta}&V_{\ell-1}^{\beta\beta}\end{bmatrix}\right)\,.\tag{2.3}$$

However, since the next layer covariance $V_\ell$ is a deterministic function of the previous $V_{\ell-1}$, this forms a fixed point type iteration $V_\ell = f(V_{\ell-1})$. Indeed, if we observe the correlations $\rho^{\alpha\beta}_\ell = \frac{\langle\varphi^\alpha_\ell,\varphi^\beta_\ell\rangle}{|\varphi^\alpha_\ell|\,|\varphi^\beta_\ell|}$, which are bounded in $[-1,1]$, they do in fact converge to a fixed point at $\rho_\infty = 1$ for ReLU activations (see e.g. Proposition 3.4 (i) of Li et al. (2022)).

This degeneracy of correlations also causes unstable gradients, which led to a proposal by Martens et al. (2021); Zhang et al. (2022) to modify the shape of activation functions $\varphi_s$ depending on the size of the network, leading to improved training speeds without using normalization methods. However, as both of these works computed the activation shape based on a set of criteria, it is unclear what the appropriate modification is in the scaling limit as depth and width both approach infinity.

2.1 Shaped Limit Of Neural Networks

To this end, Li et al. (2022) was the first to describe the limit as $d, n \to \infty$ with $\frac{d}{n} \to T > 0$ and the activation function $\varphi_s$ shaped at a precise rate to be closer to the identity as we increase $n$. In particular, the covariance matrix $V_\ell$ forms a Markov chain, as it satisfies the definition $V_{\ell+1}|\mathcal{F}_\ell \stackrel{d}{=} V_{\ell+1}|\sigma(V_\ell)$ (see e.g. (2.3)). The main result describes the scaling limit of the Markov chain $V_{\lfloor tn\rfloor}$ via a stochastic differential equation (SDE), which we can intuitively interpret as the Euler discretization converging to the differential equation

$$V_{\ell+1}=V_{\ell}+\frac{b(V_{\ell})}{n}+\frac{\Sigma(V_{\ell})^{1/2}\xi_{\ell}}{\sqrt{n}}+O(n^{-3/2})\xrightarrow{n\to\infty}dV_{t}=b(V_{t})\,dt+\Sigma(V_{t})^{1/2}\,dB_{t}\,,\tag{2.4}$$

where $\xi_\ell$ are iid zero mean identity variance random vectors, and we interpret $\frac{1}{n}$ as the step size of the discretization.
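To make the Euler-discretization picture in (2.4) concrete, here is a minimal one-dimensional sketch (ours, with placeholder drift and diffusion coefficients that are not the paper's $b$ and $\Sigma$): a chain whose drift is scaled by $1/n$ and whose noise is scaled by $1/\sqrt{n}$ is simulated over $\lfloor Tn\rfloor$ steps, which for large $n$ approximates the law of the SDE at time $T$.

```python
import numpy as np

# One-dimensional illustration of (2.4): a Markov chain whose drift is scaled
# by 1/n and whose noise is scaled by 1/sqrt(n) approximates an Euler-Maruyama
# discretization of dV_t = b(V_t) dt + sigma(V_t) dB_t with step size 1/n.
# The coefficients b and sigma below are placeholders, not the paper's b and Sigma.

def b(v):
    return -v                       # toy drift (illustration only)

def sigma(v):
    return 0.5 * abs(v) + 0.1       # toy diffusion coefficient (illustration only)

def chain_at_time_T(v0, n, T, rng):
    """Run V_{l+1} = V_l + b(V_l)/n + sigma(V_l) * xi_l / sqrt(n) for floor(T*n) layers."""
    v = v0
    for _ in range(int(T * n)):
        v = v + b(v) / n + sigma(v) * rng.standard_normal() / np.sqrt(n)
    return v

rng = np.random.default_rng(0)
samples = np.array([chain_at_time_T(v0=1.0, n=500, T=1.0, rng=rng) for _ in range(2000)])
print(f"chain at t = 1: mean {samples.mean():.3f}, std {samples.std():.3f}")
# As n grows, these statistics converge to those of the SDE solution at time t = 1.
```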

The activation functions are modified as follows. For a ReLU-like activation, we choose

$$\varphi_{s}(x)=s_{+}\max(x,0)+s_{-}\min(x,0)\,,\quad s_{\pm}=1+\frac{c_{\pm}}{n^{p}}\,,\ c_{\pm}\in\mathbb{R}\,,\ p\geq0\,,\tag{2.5}$$

or for a smooth activation $\varphi\in C^{4}(\mathbb{R})$ such that $\varphi(0)=0$, $\varphi'(0)=1$ and $\varphi^{(4)}(x)$ is bounded by a polynomial, we choose

$$\varphi_{s}(x)=s\,\varphi\left(\frac{x}{s}\right)\,,\quad s=a\,n^{p}\,,\ a\neq0\,,\ p\geq0\,.\tag{2.6}$$
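For reference, here is a short sketch (ours) of the two shaped activations (2.5) and (2.6) in numpy; tanh is used as an example smooth $\varphi$ satisfying $\varphi(0)=0$, $\varphi'(0)=1$, and the parameter names and default values are our own choices.

```python
import numpy as np

def shaped_relu(x, n, c_plus=1.0, c_minus=-1.0, p=0.5):
    """ReLU-like shaped activation (2.5): slopes s_pm = 1 + c_pm / n^p."""
    s_plus = 1.0 + c_plus / n**p
    s_minus = 1.0 + c_minus / n**p
    return s_plus * np.maximum(x, 0.0) + s_minus * np.minimum(x, 0.0)

def shaped_smooth(x, n, phi=np.tanh, a=1.0, p=0.5):
    """Smooth shaped activation (2.6): phi_s(x) = s * phi(x / s) with s = a * n^p."""
    s = a * n**p
    return s * phi(x / s)

x = np.linspace(-3.0, 3.0, 7)
print(shaped_relu(x, n=10_000))    # nearly the identity map for large n
print(shaped_smooth(x, n=10_000))  # nearly the identity map for large n
```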

We will first recall one of the main results of Li et al. (2022), which is stated informally¹ below.

Theorem 2.1 (Theorem 3.2 and 3.9 of Li et al. (2022), Informal). Let $p = \frac{1}{2}$. Then in the limit as $d, n \to \infty$, $\frac{d}{n} \to T > 0$, with $\varphi_s$ defined as above, the upper triangular entries of $V_{\lfloor tn\rfloor}$ (flattened to a vector) converge weakly to the following SDE

$$dV_{t}=b(V_{t})\,dt+\Sigma(V_{t})^{1/2}\,dB_{t}\,,\quad V_{0}=\frac{1}{n_{\rm in}}[\langle x^{\alpha},x^{\beta}\rangle]_{\alpha,\beta=1}^{m}\,,\tag{2.7}$$

where $\Sigma(V)|_{\alpha\beta,\gamma\delta} = V^{\alpha\gamma}V^{\beta\delta} + V^{\alpha\delta}V^{\beta\gamma}$, and if $\varphi$ is a ReLU-like activation we have

$$b(V)|_{\alpha\beta}=\nu(\rho^{\alpha\beta})\sqrt{V^{\alpha\alpha}V^{\beta\beta}}\,,\quad\rho^{\alpha\beta}=\frac{V^{\alpha\beta}}{\sqrt{V^{\alpha\alpha}V^{\beta\beta}}}\,,\quad\nu(\rho)=\frac{(c_{+}-c_{-})^{2}}{2\pi}\left(\sqrt{1-\rho^{2}}-\rho\arccos\rho\right)\,,\tag{2.8}$$

¹The statement is "informal" in the sense that we have stated what the final limit is, but not the precise sense of the convergence. See Appendix A for a rigorous treatment of the convergence result.


or else if φ is a smooth activation we have

$$b^{\alpha\beta}(V_{t})=\frac{\varphi^{\prime\prime}(0)^{2}}{4a^{2}}\left(V_{t}^{\alpha\alpha}V_{t}^{\beta\beta}+V_{t}^{\alpha\beta}(2V_{t}^{\alpha\beta}-3)\right)+\frac{\varphi^{\prime\prime\prime}(0)}{2a^{2}}V_{t}^{\alpha\beta}(V_{t}^{\alpha\alpha}+V_{t}^{\beta\beta}-2)\,.\tag{2.9}$$

Throughout we write $f(\rho) = \frac{1}{\pi}\left(\rho\arcsin\rho + \sqrt{1-\rho^2}\right) + \frac{1}{2}\rho$, a function that will reappear in the ResNet ODE of Theorem 2.2. While the formulae may seem overwhelming, there is actually a fairly straightforward interpretation of both the drift $b$ and the diffusion coefficient $\Sigma^{1/2}$. In particular, for the unshaped ReLU network, that is $\varphi_s(x) = \max(x,0)$, the deterministic component of the covariance update compares to the shaped network as follows

$$\text{Unshaped:}\quad \mathbb{E}\,V_{\ell+1}-V_{\ell}\propto b(V_{\ell})\,,\qquad\text{Shaped:}\quad \mathbb{E}\,V_{\ell+1}-V_{\ell}\propto\frac{b(V_{\ell})}{n}\,.\tag{2.10}$$

Effectively, the drift component is slowed down by a multiplicative factor of $\frac{1}{n}$, which can be interpreted as an Euler discretization step. In order to achieve a stable limit as we take the depth $d$ to infinity, we also require each layer to contribute infinitesimally at a rate proportional to $\frac{1}{d}$, so this is the desired rescaling. The diffusion coefficient is much more interesting. Since we can interpret this diffusion to be on the manifold of symmetric positive definite matrices, we would expect the diffusion coefficient of Brownian motions on this manifold to correspond to a Riemannian metric. Indeed, this is shown in Li et al. (2024), as $\Sigma^{-1}$ corresponds to the affine invariant metric. More precisely, for all $V \in \operatorname{SPD}(m)$, the inner product corresponding to $\Sigma^{-1}$ is

$$\langle A,B\rangle_{\Sigma^{-1}(V)}=\sum_{1\leq i\leq j\leq m}A_{ij}B_{ij}\Sigma^{-1}(V)_{ij}=\frac{1}{2}\operatorname{Tr}(AV^{-1}BV^{-1})\,,\quad\text{for all }A,B\in\operatorname{Sym}(m)\,.\tag{2.11}$$

Furthermore, this gives the linear network SDE $dV_t = \Sigma(V_t)^{1/2}\,dB_t$ an interpretation as the dual Brownian motion in information geometry, which uses a pair of dually flat affine connections instead of the standard Levi-Civita connection.

2.2 ODE Limit Of Residual Networks

At the same time, Hayou & Yang (2023); Cirone et al. (2023) found an ordinary differential equation (ODE) limit describing the covariance matrix for infinite-depth-and-width ResNets. The authors considered a ResNet architecture with a $\frac{1}{\sqrt{d}}$ factor on the residual branch; more precisely, their recursion is defined as follows (in our notation and convention)

$$z_{\ell+1}^{\alpha}=z_{\ell}^{\alpha}+\frac{1}{\sqrt{dn}}W_{\ell}\,\varphi(z_{\ell}^{\alpha})\,,\quad\text{where }\varphi(x)=\max(x,0)\,.\tag{2.12}$$

Intuitively, this is similar to shaping activations: it also weakens the effect of each layer as we take $d$ to infinity. This will be discussed in more detail in the following section.

One of their main results can be stated informally as follows.

Theorem 2.2 (Theorem 2 of Hayou & Yang (2023), Informal). Let $d, n \to \infty$ (in any order); then the covariance process $V^{\alpha\beta}_{\lfloor td\rfloor}$ converges to the following ODE

$$\frac{d}{dt}V_{t}^{\alpha\beta}=\frac{1}{2}\frac{f(\rho_{t}^{\alpha\beta})}{\rho_{t}^{\alpha\beta}}V_{t}^{\alpha\beta}\,.\tag{2.13}$$

Here, we observe this ODE (2.13) is exactly the drift component of the covariance SDE (2.7), i.e.

$$\frac{1}{2}\frac{f(\rho_{t}^{\alpha\beta})}{\rho_{t}^{\alpha\beta}}V_{t}^{\alpha\beta}=\nu(\rho_{t}^{\alpha\beta})\sqrt{V_{t}^{\alpha\alpha}V_{t}^{\beta\beta}}\,,\quad\text{if }(c_{+}-c_{-})^{2}=1\,,\ \frac{d}{n}=1\,.\tag{2.14}$$

To see this, we just need to use the identity $\arcsin\rho = \frac{\pi}{2} - \arccos\rho$ to get that $\frac{1}{2}f(\rho) = \nu(\rho)$, and that $\frac{V_t^{\alpha\beta}}{\sqrt{V_t^{\alpha\alpha}V_t^{\beta\beta}}}$ is exactly the definition of $\rho_t^{\alpha\beta}$. We also note the correlation ODE $\frac{d}{dt}\rho_t^{\alpha\beta} = \nu(\rho_t^{\alpha\beta})$ was first derived in (Zhang et al., 2022, Proposition 3), where they considered the sequential width-then-depth limit with a fixed initial and terminal condition. In the next section, we will describe another way to recover this ODE from an alternative scaling limit of the shaped MLP.
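As a quick illustration (ours), the correlation ODE $\frac{d}{dt}\rho_t = \nu(\rho_t)$ can be integrated with a forward Euler step; we set $(c_+-c_-)^2 = 1$ as in (2.14), and the step size and horizon are arbitrary.

```python
import numpy as np

def nu(rho, c_diff_sq=1.0):
    """Drift nu(rho) from (2.8), with (c_+ - c_-)^2 = c_diff_sq."""
    return c_diff_sq / (2.0 * np.pi) * (np.sqrt(1.0 - rho**2) - rho * np.arccos(rho))

def integrate_correlation(rho0, T=10.0, dt=1e-3):
    """Forward-Euler integration of d rho / dt = nu(rho)."""
    rho = rho0
    for _ in range(int(T / dt)):
        rho = min(rho + nu(rho) * dt, 1.0)   # clip to guard against round-off past 1
    return rho

for rho0 in (-0.5, 0.0, 0.5):
    print(f"rho_0 = {rho0:+.1f}  ->  rho_T = {integrate_correlation(rho0):.4f}")
# The correlation drifts monotonically toward the degenerate fixed point rho = 1.
```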

3 An Alternative Shaped Limit For $p \in (0, \frac{1}{2})$

We start by providing some intuition for this result. The shaped MLP can be seen as a layerwise perturbation of the linear network

$$z_{\ell+1}=\sqrt{\frac{c}{n}}W_{\ell}\,\varphi_{s}(z_{\ell})\approx\sqrt{\frac{c}{n}}W_{\ell}\,z_{\ell}+\frac{1}{\sqrt{d}}\sqrt{\frac{c}{n}}W_{\ell}\,h(z_{\ell})\,,\tag{3.1}$$

where $c^{-1}=\mathbb{E}\,\varphi_{s}(g)^{2}$ for $g\sim N(0,1)$ corresponds to the He-initialization (He et al., 2015), and $W_{\ell}\in\mathbb{R}^{n\times n}$ has iid $N(0,1)$ entries.

On an intuitive level (which we will make precise in Remark 3.3), if we take the infinite-width limit first, then this removes the effect of the random weights. In other words, if we replace the weights $\frac{1}{\sqrt{n}}W_\ell$ with the identity matrix $I_n$ in each hidden layer, we get the same limit at initialization. Therefore, we can heuristically write

$$z_{\ell+1}\approx z_{\ell}+\frac{1}{\sqrt{d}}h(z_{\ell})\,,\tag{3.2}$$

where we also used the fact that $c \to 1$ in the limit.

Observe that this resembles a ResNet, where the first $z_\ell$ term is the skip connection. In fact, we can again heuristically add back in the weights on the residual branch to get

$$z_{\ell+1}\approx z_{\ell}+\frac{1}{\sqrt{d}}W_{\ell}\,h(z_{\ell})\,,\tag{3.3}$$

which exactly recovers the ResNet formulation of Hayou et al. (2021); Hayou & Yang (2023), where the authors studied the case when h(x) = max(x, 0) is the ReLU activation.

Remark 3.1. On a heuristic level, this implies that whenever the width limit is taken first (or equivalently $d = n^{2p}$ for $p \in (0, 1/2)$), the shaped network with shaping parameter $d^{-1/2}$ has the same limiting distribution at initialization as a ResNet with a $d^{-1/2}$ weighting on the residual branch.

However, we note that despite having identical ODEs for the covariance at initialization, this does not imply the training dynamics will be the same; they will likely be different. Furthermore, since Hayou & Yang (2023) showed the width and depth limits commute for ResNets, this provides the additional insight that the non-commutativity of limits in shaped MLPs arises from the product of random matrices.
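The heuristic correspondence in Remark 3.1 can be probed empirically. Below is a minimal sketch (ours; the width, depth and input correlation are arbitrary choices) that pushes the same pair of inputs through the shaped-ReLU MLP of (2.1) with slopes $1 + d^{-1/2}$ and $1$, so that $(c_+-c_-)^2 = 1$ as in (2.14), and through the ResNet recursion (2.12), then compares the final-layer correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 50                      # width and depth: arbitrary illustrative sizes

def correlation(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Same pair of inputs for both architectures, with initial correlation ~ 0.3.
x_a = rng.standard_normal(n)
x_b = 0.3 * x_a + np.sqrt(1.0 - 0.3**2) * rng.standard_normal(n)

# Shaped MLP (2.1) with h = ReLU in (1.1): slopes s_+ = 1 + d^{-1/2}, s_- = 1.
s_plus, s_minus = 1.0 + 1.0 / np.sqrt(d), 1.0
c = 2.0 / (s_plus**2 + s_minus**2)  # He-style normalization: c^{-1} = E phi_s(g)^2
za, zb = x_a.copy(), x_b.copy()
for _ in range(d):
    W = rng.standard_normal((n, n))
    za = np.sqrt(c / n) * (W @ (s_plus * np.maximum(za, 0.0) + s_minus * np.minimum(za, 0.0)))
    zb = np.sqrt(c / n) * (W @ (s_plus * np.maximum(zb, 0.0) + s_minus * np.minimum(zb, 0.0)))

# ResNet (2.12): z_{l+1} = z_l + W_l relu(z_l) / sqrt(d n), same inputs.
ra, rb = x_a.copy(), x_b.copy()
for _ in range(d):
    W = rng.standard_normal((n, n))
    ra = ra + (W @ np.maximum(ra, 0.0)) / np.sqrt(d * n)
    rb = rb + (W @ np.maximum(rb, 0.0)) / np.sqrt(d * n)

print("shaped MLP final correlation:", correlation(za, zb))
print("ResNet final correlation:    ", correlation(ra, rb))
# Up to finite-size fluctuations, both correlations track the same ODE limit.
```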

3.1 Precise Results

The core object that forms a Markov chain is the post-activation covariance matrix $\frac{c}{n}[\langle\varphi^\alpha_\ell,\varphi^\beta_\ell\rangle]_{\alpha,\beta=1}^m$. To see this, we will use the property of Gaussian matrix multiplication: let $W \in \mathbb{R}^{n\times n}$ have iid entries $W_{ij} \sim N(0,1)$, and let $\{u^\alpha\}_{\alpha=1}^m \subset \mathbb{R}^n$ be a collection of constant vectors, which gives us

$$[W u^{\alpha}]_{\alpha=1}^{m}\stackrel{d}{=}N\left(0,[\langle u^{\alpha},u^{\beta}\rangle]_{\alpha,\beta=1}^{m}\otimes I_{n}\right)\,,\tag{3.4}$$

where we use the notation $[v^\alpha]_{\alpha=1}^m$ to stack the vectors vertically. This forms a Markov chain because we can condition on $\mathcal{F}_\ell = \sigma([z^\alpha_\ell]_{\alpha=1}^m)$ to get

$$[z_{\ell+1}^{\alpha}]_{\alpha=1}^{m}|{\mathcal F}_{\ell}=[z_{\ell+1}^{\alpha}]_{\alpha=1}^{m}|\sigma(V_{\ell})\sim N(0,V_{\ell}\otimes I_{n})\,,\tag{3.5}$$

and we can see that $V_{\ell+1}|\mathcal{F}_\ell = V_{\ell+1}|\sigma(V_\ell)$, which is exactly the definition of a Markov chain.

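The identity (3.4) is easy to check numerically. The sketch below (ours) estimates the covariance of one coordinate of $Wu^1$ and $Wu^2$ over fresh draws of $W$ and compares it to the Gram matrix $[\langle u^\alpha, u^\beta\rangle]$; the dimensions and trial count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 10_000

# Two fixed (constant) vectors u^1, u^2.
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)

# For each fresh W with iid N(0,1) entries, record the first coordinate of W u^1 and W u^2.
# By (3.4) this pair is Gaussian with covariance [[<u1,u1>, <u1,u2>], [<u1,u2>, <u2,u2>]].
samples = np.empty((trials, 2))
for t in range(trials):
    W = rng.standard_normal((n, n))
    samples[t] = (W @ u1)[0], (W @ u2)[0]

print("empirical covariance:\n", np.cov(samples.T))
print("predicted covariance:\n", np.array([[u1 @ u1, u1 @ u2], [u1 @ u2, u2 @ u2]]))
```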

We will start by deriving the precise Markov chain update up to a term of size $O(n^{-3p})$, which will be a slight modification of the Euler discretization we saw in (2.4).

Lemma 3.2 (Covariance Markov Chain for the Shaped MLP). Let $z^\alpha_\ell$ be the MLP defined in (2.1) with shaped ReLU activation defined in (2.5). For $p \in (0, \frac{1}{2})$ and $d = n^{2p}$, the Markov chain satisfies

$$V_{\ell+1}=V_{\ell}+\frac{b(V_{\ell})}{d}+\frac{\Sigma_{s}(V_{\ell})^{1/2}\,\xi_{\ell}}{\sqrt{n}}+O(d^{-3/2})\,,\tag{3.6}$$

where $\{\xi_\ell\}_{\ell\geq0}$ are iid zero mean and identity covariance random vectors, and in the limit as $n, d \to \infty$ we have that $\Sigma_s \to \Sigma$, with the coefficients defined as

$$b(V)^{\alpha\beta}=\nu(\rho^{\alpha\beta})\sqrt{V^{\alpha\alpha}V^{\beta\beta}}\,,\quad\Sigma(V)^{\alpha\beta,\gamma\delta}=V^{\alpha\gamma}V^{\beta\delta}+V^{\alpha\delta}V^{\beta\gamma}\,,\tag{3.7}$$

where $\rho^{\alpha\beta}=\frac{V^{\alpha\beta}}{\sqrt{V^{\alpha\alpha}V^{\beta\beta}}}$, and $\nu(\rho)=\frac{(c_{+}-c_{-})^{2}}{2\pi}\left(\sqrt{1-\rho^{2}}-\rho\arccos\rho\right)$.

Proof. To start, we will observe that conditioned on $\mathcal{F}_\ell$, we have that

$$V_{\ell+1}^{\alpha\beta}|{\mathcal{F}}_{\ell}=\frac{c}{n}\sum_{i=1}^{n}\varphi_{s}(z_{\ell+1,i}^{\alpha})\,\varphi_{s}(z_{\ell+1,i}^{\beta})=|\varphi_{\ell}^{\alpha}|\,|\varphi_{\ell}^{\beta}|\,\frac{c}{n}\sum_{i=1}^{n}\varphi_{s}(g_{i}^{\alpha})\,\varphi_{s}(g_{i}^{\beta})\,,\tag{3.8}$$

where we used the fact that the $z_{\ell+1}$ are jointly Gaussian, and we have that

$$[g_{i}^{\alpha},g_{i}^{\beta}]_{i=1}^{n}\sim{\mathcal N}\left(0\,,\,\begin{bmatrix}1&\rho_{\ell}^{\alpha\beta}\\ \rho_{\ell}^{\alpha\beta}&1\end{bmatrix}\otimes I_{n}\right)\,,\tag{3.9}$$

for $\rho^{\alpha\beta}_\ell = \frac{V^{\alpha\beta}_\ell}{\sqrt{V^{\alpha\alpha}_\ell V^{\beta\beta}_\ell}}$. At this point, we can define $K_1(\rho^{\alpha\beta}_\ell) = \mathbb{E}\,\varphi_s(g^\alpha_i)\,\varphi_s(g^\beta_i)$ and the random variable

$$R_{\ell}^{\alpha\beta}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(c\,\varphi_{s}(g_{i}^{\alpha})\,\varphi_{s}(g_{i}^{\beta})-c\,K_{1}(\rho_{\ell}^{\alpha\beta})\right)\,,\tag{3.10}$$

which allows us to decompose this into a deterministic and a random component as

$$V_{\ell+1}^{\alpha\beta}|{\mathcal F}_{\ell}=|\varphi_{\ell}^{\alpha}|\,|\varphi_{\ell}^{\beta}|\left(c K_{1}(\rho_{\ell}^{\alpha\beta})+\frac{1}{\sqrt{n}}R_{\ell}^{\alpha\beta}\right)\,.\tag{3.11}$$

We can Taylor expand $cK_1(\rho)$ in terms of $n^{-p}$ to get the following result from Lemma B.1

$$cK_{1}(\rho)=\rho+\frac{\nu(\rho)}{n^{2p}}+O(n^{-3p})\,,\quad\nu(\rho)=\frac{(c_{+}-c_{-})^{2}}{2\pi}\left(\sqrt{1-\rho^{2}}-\rho\arccos\rho\right)\,.\tag{3.12}$$

Similarly, from Lemma B.3 we also have the approximation

$$\mathbb{E}\,R_{\ell}^{\alpha\beta}R_{\ell}^{\gamma\delta}=\rho_{\ell}^{\alpha\gamma}\rho_{\ell}^{\beta\delta}+\rho_{\ell}^{\alpha\delta}\rho_{\ell}^{\beta\gamma}+O(n^{-p})\,.\tag{3.13}$$

Putting everything together, we can write

$$cK_1(\rho_\ell^{\alpha\beta})+\frac{1}{\sqrt{n}}R_\ell^{\alpha\beta}=\rho_\ell^{\alpha\beta}+\frac{\nu(\rho_\ell^{\alpha\beta})}{n^{2p}}+\frac{1}{\sqrt{n}}R_\ell^{\alpha\beta}+O(n^{-3p})\,,\tag{3.14}$$

which implies we can write

$$V_{\ell+1}^{\alpha\beta}=V_{\ell}^{\alpha\beta}+\frac{b(V_{\ell})^{\alpha\beta}}{n^{2p}}+\frac{R_{\ell}^{\alpha\beta}}{\sqrt{n}}+O(n^{-3p})\,,\tag{3.15}$$

where $b(V_\ell)^{\alpha\beta} = \nu(\rho^{\alpha\beta}_\ell)\sqrt{V^{\alpha\alpha}_\ell V^{\beta\beta}_\ell}$. Now taking the upper triangular entries of $V_\ell$ as a vector, we have that

$$V_{\ell+1}=V_{\ell}+\frac{b(V_{\ell})}{n^{2p}}+\frac{\Sigma_{s}(V_{\ell})^{1/2}\xi_{\ell}}{\sqrt{n}}+O(n^{-3p})\,,\tag{3.16}$$

where $\Sigma_{s}(V)|_{\alpha\beta,\gamma\delta}=\sqrt{V^{\alpha\alpha}V^{\beta\beta}V^{\gamma\gamma}V^{\delta\delta}}\left(\rho^{\alpha\gamma}\rho^{\beta\delta}+\rho^{\alpha\delta}\rho^{\beta\gamma}\right)+O(n^{-p})=V^{\alpha\gamma}V^{\beta\delta}+V^{\alpha\delta}V^{\beta\gamma}+O(n^{-p})$, and $\xi_{\ell}$ is a zero mean identity covariance random vector. We recover the exact desired result by writing $d = n^{2p}$. $\square$

Remark 3.3. We note that the drift term arising from the activation depends only on the depth $d$ and the random term depends only on the width $n$. If we decouple the dependence on $d$ and $n$, and take the infinite-width limit first, we arrive at

$$V_{\ell+1}=V_{\ell}+\frac{b_{s}(V_{\ell})}{d}+O(n^{-3/2})\,,\tag{3.17}$$

which is equivalent to removing the randomness of the weights.

We note this Markov chain behaves like a sum of two Euler updates with step sizes $\frac{1}{n^{2p}}$ and $\frac{1}{n}$, where the $\frac{1}{n}$ step corresponds to the random term with coefficient $\frac{1}{\sqrt{n}}$. However, since $p \in (0, \frac{1}{2})$, the first term with step size $\frac{1}{n^{2p}}$ will dominate; this is the term that corresponds to shaping the activation function. Therefore, we expect the random term to vanish in the limit, leaving us with an ODE only. We make this result precise in the following Proposition.

Proposition 3.4 (Covariance ODE for the Shaped ReLU MLP). Let $p \in (0, \frac{1}{2})$. Then in the limit as $d, n \to \infty$, $\frac{d}{n^{2p}} \to 1$, with $\varphi_s$ the shaped ReLU defined in (2.5), we have that the upper triangular entries of $V_{\lfloor t n^{2p}\rfloor}$ (flattened to a vector) converge weakly, with respect to the Skorohod topology of $D_{\mathbb{R}_+,\mathbb{R}^{m(m+1)/2}}$, to the following ODE

$$dV_{t}=b(V_{t})\,dt\,,\quad V_{0}=\frac{1}{n_{\rm in}}[\langle x^{\alpha},x^{\beta}\rangle]^{m}_{\alpha,\beta=1}\,,\tag{3.18}$$

where $b$ is defined in Theorem 2.1 and Lemma 3.2.

Proof. Starting with the Markov chain in Lemma 3.2, we will treat the random term of order $O(n^{-1/2})$ as part of the drift instead. More precisely, we let

$$\widehat{b}(V)=\frac{b(V)}{n^{2p}}+\frac{\Sigma_{s}(V)^{1/2}\xi_{\ell}}{\sqrt{n}}\,,\tag{3.19}$$

which in expectation just equals $b(V)\,n^{-2p}$. Since there is no random term at the order of $n^{-p}$, we can apply Proposition A.7 to this special case as if the diffusion coefficient were equal to zero. At the same time, since the higher order term in the Markov chain is at the desired order of $O(n^{-3p})$, which vanishes in the limit, we get the desired result. $\square$

At this point it is worth pointing out that the regime $p \in (0, \frac{1}{2})$ was studied in Li et al. (2022), but the scaling limit was taken with $\frac{d}{n} \to T$ instead of $\frac{d}{n^{2p}}$. This led to a "degenerate" regime where $\rho_t = 1$ for all $t > 0$.

The above ODE result implies that the degenerate limit can be characterized in a more refined way if the scaling is chosen carefully.

In the next and final section, we show that actually even when the network is unshaped (i.e. p = 0), there exists a scaling such that we can characterize the limiting Markov chain up to the first order asymptotic correction.

4 An SDE For The Unshaped ReLU MLP

In this section, we let $\varphi_s(x) = \varphi(x) = \max(x,0)$, and we are interested in studying the correlation

$$\rho_{\ell}^{\alpha\beta}=\frac{V_{\ell}^{\alpha\beta}}{\sqrt{V_{\ell}^{\alpha\alpha}V_{\ell}^{\beta\beta}}}=\frac{\langle\varphi_{\ell}^{\alpha},\varphi_{\ell}^{\beta}\rangle}{|\varphi_{\ell}^{\alpha}|\,|\varphi_{\ell}^{\beta}|}\,.\tag{4.1}$$

From this point onwards, we will only consider the marginal over two inputs, so we will drop the superscript αβ. Similar to the previous section, we will also start by providing an intuitive sketch.

Many existing works have derived the rough asymptotic order of the unshaped correlation to be $\rho_\ell = 1 - O(\ell^{-2})$, where $\ell$ is the layer (see for example Appendix E of Li et al. (2022) and Jakub & Nica (2023)). Firstly, this implies that a Taylor expansion of all functions of $\rho$ in the Markov chain update around $\rho = 1$ will be very accurate. At the same time, it is natural to magnify the object inside the big $O$ by reverting the scaling, or more precisely to consider the object

$$q_{\ell}=\ell^{2}(1-\rho_{\ell})\,,\tag{4.2}$$

which will hopefully remain at size $\Theta(1)$.

For simplicity, we can consider the infinite-width update of the unshaped correlation (which corresponds to the zeroth-order Taylor expansion in $\frac{1}{n}$)

$$\rho_{\ell+1}=\rho_{\ell}+c_{1}(1-\rho_{\ell})^{3/2}+O((1-\rho_{\ell})^{5/2})\,,\tag{4.3}$$

where for the sake of illustration we will take $c_1 = 1$ and drop the big $O$ term for now. Substituting in $q_\ell$, we can recover the update

$$q_{\ell+1}=q_{\ell}+\frac{2q_{\ell}}{\ell}-\frac{q_{\ell}^{3/2}}{\ell}\,.\tag{4.4}$$

While this doesn't quite look like an Euler update just yet, we can substitute in $t = \frac{\ell}{n}$ for the time scale, which will lead us to

$$q_{\ell+1}=q_{\ell}+\frac{1}{t n}\left(2q_{\ell}-q_{\ell}^{3/2}\right)\,,\tag{4.5}$$

hence (heuristically) giving us the singular ODE

$$dq_{t}=\frac{2q_{t}-q_{t}^{3/2}}{t}\,dt\,.\tag{4.6}$$

To recover the SDE, we will simply include the additional terms of the Markov chain instead of taking the infinite-width limit first.
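The heuristic above can be iterated directly. The sketch below (ours) runs the infinite-width update (4.4) with the illustrative choice $c_1 = 1$ and an arbitrary starting point; for that choice the singular ODE (4.6) has a stable fixed point at $q = 4$, which the iterates approach.

```python
# Iterate the heuristic infinite-width update (4.4) with the illustrative choice c_1 = 1.
q, l = 0.5, 10                        # arbitrary starting layer and starting value of q
while l <= 100_000:
    q = q + 2.0 * q / l - q**1.5 / l  # update (4.4)
    l += 1
    if l in (100, 1_000, 10_000, 100_000):
        print(f"l = {l:6d}   q_l = {q:.4f}   1 - rho_l ~ q_l / l^2 = {q / l**2:.2e}")
# For c_1 = 1 the singular ODE (4.6) has a stable fixed point where 2q = q^{3/2}, i.e. q = 4,
# which the iterates approach; equivalently 1 - rho_l decays like 4 / l^2.
```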

4.1 Full Derivation

In the rest of this section, we will provide a derivation of an SDE arising from an appropriate scaling of $\rho_\ell$.

Theorem 4.1 (Rescaled Correlation). Let $q_\ell = \ell^2(1-\rho_\ell)$. Then for all $t_0 > 0$, the process $\{q_{\lfloor tn\rfloor}\}_{t\geq t_0}$ converges to a solution of the following SDE weakly in the Skorohod topology (see (Li et al., 2022, Appendix A))

$$dq_{t}=2q_{t}\left(\frac{1-\frac{\sqrt{2}}{3\pi}q_{t}^{1/2}}{t}-1\right)\,dt+2\sqrt{2}\,q_{t}\,dB_{t}\,.\tag{4.7}$$

The above statement holds only when $t_0 > 0$, and there is an interesting technicality that must be resolved to interpret what happens as $t \to 0^+$. In particular, the Markov chain is not time homogeneous, and the limiting SDE has a singularity at $t = 0$. The contribution of the singularity needs to be controlled in order to establish convergence for all $t \ge 0$. Furthermore, due to the singularity issue, it is also unclear what the initial condition of $q_t$ should be.

In our simulations for Figure 1, we addressed the time singularity by shifting the time evaluation of $\frac{1}{t}$ to the next step $\frac{1}{t+\Delta_t}$, where $\Delta_t > 0$ is the time step size. More precisely, we first consider the log version $r_t = \log q_t$

$$dr_{t}=-2\left(1-\frac{1-\frac{\sqrt{2}}{3\pi}\exp(\frac{r_{t}}{2})}{t}\right)\,dt+2\sqrt{2}\,dB_{t}\,.\tag{4.8}$$

Then we choose the following discretization

$$r_{t+\Delta_{t}}=r_{t}-2\left(1-\frac{1-\frac{\sqrt{2}}{3\pi}\exp\left(\frac{r_{t}}{2}\right)}{t+\Delta_{t}}\right)\,\Delta_{t}+2\sqrt{2}\,\xi_{t}\sqrt{\Delta_{t}}\,,\tag{4.9}$$

where $\xi_t \sim N(0,1)$. For initial conditions, we also noticed that since the initial correlation must be contained in the interval $[-1,1]$, the end result was not very sensitive to the choice of $r_0$.
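For concreteness, here is a minimal script (ours) implementing the discretization (4.9). The step size and sample count follow the Figure 1 caption, the initial condition is $r_0 = \log(0.7)$ quoted there, and running up to $t = d/n = 1$ is our reading of the $n = d = 150$ setting.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T = 1e-2, 1.0                    # step size from the Figure 1 caption; T = d/n = 1 is our reading
n_samples = 2**13                    # sample count from the Figure 1 caption
r = np.full(n_samples, np.log(0.7))  # r_0 = log(1 - rho_0) with rho_0 = 0.3

t = 0.0
while t < T:
    xi = rng.standard_normal(n_samples)
    # Discretization (4.9): the singular 1/t factor is evaluated at t + dt.
    drift = -2.0 * (1.0 - (1.0 - np.sqrt(2.0) / (3.0 * np.pi) * np.exp(r / 2.0)) / (t + dt))
    r = r + drift * dt + 2.0 * np.sqrt(2.0) * xi * np.sqrt(dt)
    t += dt

print(f"samples of r_T: mean {r.mean():.3f}, std {r.std():.3f}")
# A kernel density estimate of these samples gives the SDE density overlaid in Figure 1.
```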

Proof of Theorem 4.1. Firstly, we will introduce the definitions

$$K_{p,r}(\rho)=\mathbb{E}\,\varphi_{s}(g)^{p}\varphi_{s}(\hat{g})^{r}\,,$$

where $g, w \sim N(0,1)$ are independent and we define $\hat g = \rho g + q w$ with $q = \sqrt{1-\rho^2}$. We will also use the shorthand $K_p := K_{p,p}$. Here we will recall several formulae calculated in Cho & Saul (2009) and (Li et al., 2022, Lemma B.4)

$$K_{0}(\rho)=\mathbb{E}\,\mathds{1}_{\{g>0\}}\mathds{1}_{\{\rho g+qw>0\}}=\frac{\arccos(-\rho)}{2\pi}\,,\tag{4.10}$$

$$K_{1}(\rho)=\mathbb{E}\,\varphi(g)\varphi(\rho g+qw)=\frac{q+\rho\arccos(-\rho)}{2\pi}\,,\tag{4.11}$$

$$K_{2}(\rho)=\mathbb{E}\,\varphi(g)^{2}\varphi(\rho g+qw)^{2}=\frac{3\rho q+\arccos(-\rho)(1+2\rho^{2})}{2\pi}\,,\tag{4.12}$$

$$K_{3,1}(\rho)=\mathbb{E}\,\varphi(g)^{3}\varphi(\rho g+qw)=\frac{q(2+\rho^{2})+3\arccos(-\rho)\rho}{2\pi}\,.\tag{4.13}$$

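These closed forms are easy to sanity-check by Monte Carlo; the sketch below (ours) compares empirical estimates of $K_0, K_1, K_2, K_{3,1}$ against (4.10)-(4.13) at an arbitrarily chosen correlation $\rho = 0.4$.

```python
import numpy as np

rng = np.random.default_rng(4)
relu = lambda x: np.maximum(x, 0.0)
rho = 0.4                                   # arbitrary test correlation
N = 2_000_000                               # Monte Carlo sample size

g, w = rng.standard_normal(N), rng.standard_normal(N)
q = np.sqrt(1.0 - rho**2)
g_hat = rho * g + q * w                     # so (g, g_hat) has correlation rho

acos = np.arccos(-rho)
checks = {
    "K0":   (np.mean((g > 0) & (g_hat > 0)),        acos / (2 * np.pi)),
    "K1":   (np.mean(relu(g) * relu(g_hat)),        (q + rho * acos) / (2 * np.pi)),
    "K2":   (np.mean(relu(g)**2 * relu(g_hat)**2),  (3 * rho * q + acos * (1 + 2 * rho**2)) / (2 * np.pi)),
    "K3,1": (np.mean(relu(g)**3 * relu(g_hat)),     (q * (2 + rho**2) + 3 * acos * rho) / (2 * np.pi)),
}
for name, (mc, exact) in checks.items():
    print(f"{name}: Monte Carlo {mc:.4f}  vs  closed form {exact:.4f}")
```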

Furthermore, we will define

$$M_{2}:=\mathbb{E}\left[c\,\varphi(g)^{2}-1\right]^{2}=5\,.$$

Using the steps of (Li et al., 2022, Proposition B.8), we can establish an approximate Markov chain

$$\rho_{\ell+1}=c K_{1}(\rho_{\ell})+\frac{\widehat{\mu}_{r}(\rho_{\ell})}{n}+\frac{\sigma_{r}(\rho_{\ell})\xi_{\ell}}{\sqrt{n}}+O(n^{-3/2})\,,\tag{4.14}$$

where ξℓ are iid with zero mean and unit variance, and

$$\begin{aligned}\mu_{r}(\rho_{\ell})&=\mathbb{E}[\widehat{\mu}_{r}(\rho_{\ell})|\rho_{\ell}]=\frac{c}{4}\left[K_{1}(c^{2}K_{2}+3M_{2}+3)-4c K_{3,1}\right]\,,\\ \sigma_{r}^{2}(\rho_{\ell})&=\frac{c^{2}}{2}\left[K_{1}^{2}(c^{2}K_{2}+M_{2}+1)-4c K_{1}K_{3,1}+2K_{2}\right]\,,\end{aligned}\tag{4.15}$$

and we write $K_{\cdot} = K_{\cdot}(\rho_\ell)$.

Here we use big $O(f(n,\ell))$ notation to denote a random variable $X$ such that for all $p \ge 1$

$$\frac{\mathbb{E}|X|^{p}}{f(n,\ell)^{p}}\leq C_{p}<\infty\,,$$

for some constants $C_p > 0$ independent of $n$ and $\ell$.

In view of the SDE convergence result, Proposition A.7, if we eventually reach an SDE, we will only need to keep track of the expected drift $\mu_r$ instead of the random drift. We can then Taylor expand the coefficients in $\rho_\ell$ about $\rho_\ell = 1$ (from the negative direction) using SymPy (Meurer et al., 2017), which translates to the following update rule

$$\begin{aligned}\rho_{\ell+1}=\rho_{\ell}&+\frac{2\sqrt{2}}{3\pi}(1-\rho_{\ell})^{3/2}+\frac{\sqrt{2}}{30\pi}(1-\rho_{\ell})^{5/2}\\ &+\frac{1}{n}\left(-2(1-\rho_{\ell})+\frac{4\sqrt{2}}{\pi}(1-\rho_{\ell})^{3/2}+3(1-\rho_{\ell})^{2}-\frac{73\sqrt{2}}{15\pi}(1-\rho_{\ell})^{5/2}\right)\\ &+\frac{\xi_{\ell}}{\sqrt{n}}\left(2\sqrt{2}(1-\rho_{\ell})-\frac{56}{15\pi}(1-\rho_{\ell})^{3/2}\right)+O((1-\rho_{\ell})^{4}+n^{-3/2})\,.\end{aligned}\tag{4.16}$$
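As a numerical sanity check on the leading term (done here with numpy, rather than the symbolic SymPy computation used for the full expansion), the infinite-width increment $cK_1(\rho) - \rho$ of the unshaped ReLU map can be compared against $\frac{2\sqrt{2}}{3\pi}(1-\rho)^{3/2}$ near $\rho = 1$.

```python
import numpy as np

def cK1(rho):
    """Infinite-width unshaped-ReLU correlation map c K_1(rho), with c = 2, from (4.11)."""
    q = np.sqrt(1.0 - rho**2)
    return (q + rho * np.arccos(-rho)) / np.pi

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    rho = 1.0 - eps
    exact = cK1(rho) - rho                                   # exact one-step increment
    leading = 2.0 * np.sqrt(2.0) / (3.0 * np.pi) * eps**1.5  # leading term of the expansion
    print(f"1 - rho = {eps:.0e}:  increment {exact:.3e},  leading term {leading:.3e}")
# The ratio tends to 1 as rho -> 1, confirming the (1 - rho)^{3/2} leading order.
```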

We note that up to this point, a similar approach was taken in Jakub & Nica (2023). However, we will diverge here by considering the scaling

$$q_{\ell}=\ell^{2}(1-\rho_{\ell})\,.$$

The choice of the $\ell^2$ scale is motivated by the infinite-width limit, where $(1-\rho_\ell)$ was shown to be of order $O(\ell^{-2})$ in (Li et al., 2022, Appendix E). This implies the following update

$$\begin{aligned}\frac{\ell^{2}}{(\ell+1)^{2}}\,q_{\ell+1}=q_{\ell}&-\ell^{2}\frac{2\sqrt{2}}{3\pi}(1-\rho_{\ell})^{3/2}-\ell^{2}\frac{\sqrt{2}}{30\pi}(1-\rho_{\ell})^{5/2}\\ &-\frac{\ell^{2}}{n}\left(-2(1-\rho_{\ell})+\frac{4\sqrt{2}}{\pi}(1-\rho_{\ell})^{3/2}+3(1-\rho_{\ell})^{2}-\frac{73\sqrt{2}}{15\pi}(1-\rho_{\ell})^{5/2}\right)\\ &+\frac{\ell^{2}\,\xi_{\ell}}{\sqrt{n}}\left(2\sqrt{2}(1-\rho_{\ell})-\frac{56}{15\pi}(1-\rho_{\ell})^{3/2}\right)+O(\ell^{-9}q_{\ell}^{4}+\ell^{-2}n^{-3/2})\,.\end{aligned}\tag{4.18}$$

Next, we will drop all higher order terms in $\ell$; then using the fact that $\frac{\ell^2}{(\ell+1)^2} = 1 - \frac{2}{\ell} + O(\ell^{-2})$, we can write

$$q_{\ell+1}=q_{\ell}(1+2\ell^{-1})-\frac{2\sqrt{2}}{3\pi}\frac{q_{\ell}^{3/2}}{\ell}-\frac{2q_{\ell}}{n}+\frac{\xi_{\ell}}{\sqrt{n}}2\sqrt{2}\,q_{\ell}+O(\ell^{-2}+n^{-1/2}\ell^{-1})\,,\tag{4.19}$$

where we dropped the $\ell^{-2}n^{-3/2}$ term in the big $O$ since it is dominated by $\ell^{-2}$. Choosing the time scaling $t = \frac{\ell}{n}$ then gives us

$$q_{\ell+1}=q_{\ell}+\frac{2}{n}q_{\ell}\left(\frac{1-\frac{\sqrt{2}}{3\pi}q_{\ell}^{1/2}}{t}-1\right)+2\sqrt{\frac{2}{n}}\,q_{\ell}\,\xi_{\ell}+O(t^{-2}n^{-2}+t^{-1}n^{-3/2})\,.\tag{4.20}$$

Finally, we apply the Markov chain to SDE convergence result, Proposition A.7, which leads to the desired SDE for $t \ge t_0 > 0$

$$dq_{t}=2q_{t}\left(\frac{1-\frac{\sqrt{2}}{3\pi}q_{t}^{1/2}}{t}-1\right)\,dt+2\sqrt{2}\,q_{t}\,dB_{t}\,.\tag{4.21}$$

$\square$

5 Discussion

In this section, we provide some discussion on the potential impact of this work, from both a practical and a theoretical point of view.

Stable Training via Depth Scaling. Martens et al. (2021) made the key observation that the increasing instability of training dynamics at large depth is driven by the nonlinear activation functions.

Since then, it has become better understood that to achieve stable training at large depth, it is necessary to weaken the nonlinearities of each layer (Noci et al., 2023; Bordelon et al., 2023; Yang et al., 2023). Our results highlight a key connection between two strategies: either weakening the activation function, or weakening the entire layer via skip connections directly. From a practical point of view, since both the shaping and ResNet approaches can lead to the same covariance ODE at initialization, we understand that the key to preventing unstable gradients is weakening nonlinearities. To choose between the two regimes, it therefore remains to study the role of weakening the weight matrix or not, and how this affects training dynamics.

In particular, we note that the shaped limit admits feature learning without modifying the scaling (Hanin & Nica, 2019a), which is fundamentally different from the µP regime (Yang & Hu, 2021).

Analysis of Normalization Methods. Since the correlation Markov chain has large jumps, and quickly converges to a degenerate fixed point, it is intuitive to assume this chain does not admit a continuous time limit, and is therefore difficult to analyze analytically. Indeed, this is the approach taken by Meterez et al. (2023), which yielded one of the first analyses of normalization methods in a deep network. However, our result provides a counter-intuitive understanding: it remains possible to analyze seemingly large discrete jumps in a Markov chain if it is rescaled appropriately. In our case, since the Markov chain converges quickly to the fixed point at $\rho = 1$, zooming in around the fixed point as a function of time (or layer $\ell$) allows us to view the dynamics at the correct scale. This overcomes a previously known technical hurdle, and opens up the possibility of analyzing normalization methods, which are still lacking theoretical progress. More specifically, we would like to understand how normalization methods may help or hurt performance in practice, and how to best choose and tune these methods given their many variants, all of which are now made easier by our scaling approach.

Foundation for Studying Training Dynamics. Until recently, almost all of the work on infinite-depth neural networks remained at initialization. This is not due to a lack of attempts, but rather due to a lack of available mathematical techniques. In particular, we also emphasize that the seminal work on neural tangent kernels for training dynamics (Jacot et al., 2018) is entirely built on the same techniques used to study initialization (Neal, 1995; Lee et al., 2018). For this reason, it is important to slowly yet firmly build up a foundation of theoretical results that captures as much structure as possible at initialization.

Toward this goal, the key distinction from the infinite-width regime is that each layer must be viewed as an infinitesimal discretization of a continuous "layer time." This approach is what helped yield some of the first characterizations of training dynamics for infinite-depth ResNets (Bordelon et al., 2023; Yang et al., 2023). In fact, any theory of training dynamics must also account for this infinitesimal treatment of layers, otherwise the limit cannot possibly be stable. Therefore, we view this line of work as building towards a theory of training dynamics, and eventually generalization as well.

References

Emmanuel Abbe, Enric Boix Adsera, and Theodor Misiakiewicz. The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks. In Conference on Learning Theory, pp. 4782–4887. PMLR, 2022.

Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, and Greg Yang. High-dimensional asymptotics of feature learning: How one gradient step improves the representation. arXiv preprint arXiv:2205.01445, 2022.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Peter L Bartlett, Andrea Montanari, and Alexander Rakhlin. Deep learning: a statistical viewpoint. Acta numerica, 30:87–201, 2021.

Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, and Cengiz Pehlevan. Depthwise hyperparameter transfer in residual networks: Dynamics and scaling limit, 2023.

Youngmin Cho and Lawrence K Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems (NeurIPS), pp. 342–350, 2009.

Nicola Muca Cirone, Maud Lemercier, and Cristopher Salvi. Neural signature kernels as infinite-width-depth limits of controlled resnets. arXiv preprint arXiv:2303.17671, 2023.

Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In Int. Conf. Machine Learning (ICML), pp. 1675–1685. PMLR, 2019.

Stewart N Ethier and Thomas G Kurtz. Markov processes: characterization and convergence. John Wiley & Sons, 2009.

Kirsten Fischer, David Dahmen, and Moritz Helias. Optimal signal propagation in resnets through residual scaling. arXiv preprint arXiv:2305.07715, 2023.

Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820–14830, 2020.

Boris Hanin and Mihai Nica. Finite depth and width corrections to the neural tangent kernel. In Int. Conf.

Learning Representations (ICLR), 2019a.

Boris Hanin and Mihai Nica. Products of many large random matrices and gradients in deep neural networks.

Communications in Mathematical Physics, pp. 1–36, 2019b.

Soufiane Hayou. On the infinite-depth limit of finite-width neural networks. Transactions on Machine Learning Research, 2022.

Soufiane Hayou. Commutative width and depth scaling in deep neural networks, 2023.

Soufiane Hayou and Greg Yang. Width and depth limits commute in residual networks. arXiv preprint arXiv:2302.00453, 2023.

Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, and Judith Rousseau.

Stable ResNet. In Int. Conf. Artificial Intelligence and Statistics (AISTATS), pp. 1324–1332. PMLR, 2021.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proc. IEEE Int. Conf. Computer Vision, pp. 1026–1034, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448–456. pmlr, 2015.

Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Information Processing Systems (NeurIPS), 2018.

Cameron Jakub and Mihai Nica. Depth degeneracy in neural networks: Vanishing angles in fully connected relu networks on initialization. arXiv preprint arXiv:2302.09712, 2023.

O. Kallenberg. Foundations of Modern Probability. Probability theory and stochastic modelling. Springer, 2021. ISBN 9783030618728.

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. In Int. Conf. Learning Representations (ICLR), 2018.

Mufan Li, Mihai Nica, and Dan Roy. The future is log-gaussian: Resnets and their infinite-depth-and-width limit at initialization. Advances in Neural Information Processing Systems, 34, 2021.

Mufan Li, Jaume de Dios Pont, Mihai Nica, and Daniel M. Roy. Geometric dyson brownian motion and the free log-normal for minor of products of random matrices. In Preparation, 2024.

Mufan Bill Li, Mihai Nica, and Daniel M Roy. The neural covariance sde: Shaped infinite depth-and-width networks at initialization. arXiv preprint arXiv:2206.02768, 2022.

James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, and Samuel S Schoenholz. Rapid training of deep neural networks without skip connections or normalization layers using deep kernel shaping. arXiv preprint arXiv:2110.01765, 2021.

Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.

Alexandru Meterez, Amir Joudaki, Francesco Orabona, Alexander Immer, Gunnar Rätsch, and Hadi Daneshmand. Towards training without depth limits: Batch normalization without gradient explosion, 2023.

Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondřej Čertík, Sergey B. Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rathnayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, Štěpán Roučka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz. Sympy: symbolic computing in python. PeerJ Computer Science, 3:e103, January 2017. ISSN 2376-5992. doi: 10.7717/peerj-cs.103. URL https://doi.org/10.7717/peerj-cs.103.

Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 1995.

Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, and Thomas Hofmann. Precise characterization of the prior predictive distribution of deep relu networks. Advances in Neural Information Processing Systems, 34, 2021.

Lorenzo Noci, Chuning Li, Mufan Bill Li, Bobby He, Thomas Hofmann, Chris Maddison, and Daniel M Roy. The shaped transformer: Attention models in the infinite depth-and-width limit. arXiv preprint arXiv:2306.17759, 2023.

Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of neural networks: A law of large numbers, 2018.

Daniel W Stroock and SR Srinivasa Varadhan. Multidimensional diffusion processes, volume 233. Springer Science & Business Media, 1997.

Greg Yang. Tensor programs i: Wide feedforward or recurrent neural networks of any architecture are gaussian processes, 2019.

Greg Yang and Edward J. Hu. Feature learning in infinite-width neural networks. In Int. Conf. Machine Learning (ICML), 2021.

Greg Yang, Dingli Yu, Chen Zhu, and Soufiane Hayou. Tensor programs vi: Feature learning in infinite-depth neural networks, 2023.

Guodong Zhang, Aleksandar Botev, and James Martens. Deep learning without shortcuts: Shaping the kernel with tailored rectifiers. arXiv preprint arXiv:2203.08120, 2022.

A Background On Markov Chain Convergence To SDEs

In this section, we will review the main technical results used for Markov chain convergence to SDEs. In particular, we will consider piecewise constant interpolations of Markov chains in continuous time, defined by $V^{(n)}_{\lfloor tn\rfloor}$, which converge to a continuous process $V_t$. Due to measurability concerns for Markov chains with jumps, in order to define weak convergence on the space of processes, we need to define a precise space along with its equipped topology, first introduced by Skorohod. We will mostly follow the references Kallenberg (2021); Ethier & Kurtz (2009); Stroock & Varadhan (1997) in the rest of this section, and the content will be essentially a rephrasing of Appendix A in Li et al. (2022).

To start, we will let $S$ be a complete separable metric space in which the stochastic processes take values (in our case, it is the space of upper triangular entries $\mathbb{R}^{m(m+1)/2}$). Let $D_{\mathbb{R}_+,S}$ be the space of càdlàg functions, i.e. functions from $\mathbb{R}_+$ to $S$ that are right continuous with left limits.

For functions $x_n$ on $\mathbb{R}_+$, we use $x_n \xrightarrow{ul} x$ to denote locally uniform convergence, i.e. uniform convergence on compact subsets of $\mathbb{R}_+$. We also consider the class of bijections $\lambda$ on $\mathbb{R}_+$ such that $\lambda$ is strictly increasing and $\lambda_0 = 0$. We define Skorohod convergence, written $x_n \xrightarrow{s} x$ on $D_{\mathbb{R}_+,S}$, if there exists a sequence of such bijections $\lambda_n$ such that

$$\lambda_{n}\xrightarrow{ul}\mathrm{Id}\,,\quad x_{n}\circ\lambda_{n}\xrightarrow{ul}x\,.\tag{A.1}$$

Intuitively, this allows us to side step any discontinuities in the notion of convergence, as $\lambda_n$ is allowed to perform a "time change" so we always land on the "correct side" of the discontinuity.

Importantly, we note that DR+,S with the above notion of convergence forms a well behaved probability space.

Theorem A.1 (Theorem A5.3, Kallenberg (2021)). For any separable complete metric space $S$, there exists a topology $\mathcal{T}$ on $D_{\mathbb{R}_+,S}$ such that (i) $\mathcal{T}$ induces the Skorohod convergence $x_n \xrightarrow{s} x$, (ii) $D_{\mathbb{R}_+,S}$ is Polish (a separable completely metrizable topological space) under $\mathcal{T}$, (iii) $\mathcal{T}$ generates the Borel $\sigma$-field generated by the evaluation maps $\pi_t$, $t \ge 0$, where $\pi_t(x) = x_t$.

To be fully self-contained, we will also introduce the definitions of a Feller semi-group, generator, and core. Let $S$ be a locally compact separable metric space, and let $C_0 = C_0(S)$ be the space of continuous functions that vanish at infinity, equipped with the sup norm, hence making $C_0$ a Banach space. We say $T : C_0 \to C_0$ is a positive contraction operator if for all $0 \le f \le 1$, we have that $0 \le Tf \le 1$. A semi-group $(T_t)$ of positive contraction operators on $C_0$ is called a Feller semi-group if it also satisfies

$$T_{t}\,C_{0}\subset C_{0}\,,\quad t\geq0\,,\qquad T_{t}\,f(x)\to f(x)\,,\ \text{as}\ t\to0\,,\quad f\in C_{0}\,,x\in S\,.\tag{A.2}$$

Let $D \subset C_0$ and $A : D \to C_0$. We say the pair $(A, D)$ is a generator of the semi-group $(T_t)$ if $D$ is the maximal set such that

$$\lim_{t\to0}\frac{T_{t}\,f-f}{t}=A f\,.\tag{A.3}$$

We say an operator $A$ with domain $D$ on a Banach space $B$ is closed if its graph $G = \{(f, Af) : f \in D\}$ is a closed subset of $B \times B$. If the closure of $G$ is the graph of an operator $\bar{A}$, we say that $\bar{A}$ is the closure of $A$.

We say a linear subspace $D_0 \subset D$ is a core of $A$ if the closure of $A|_{D_0}$ is $A$. Intuitively, all important properties of $A$ can be recovered via a limit point of $Af_n$, for $f_n \in D_0$. Furthermore, if $(A, D)$ is a generator of a Feller semi-group, every dense invariant subspace $D_0 \subset D$ is a core of $A$ (Kallenberg, 2021, Proposition 17.9). In particular, the core we will work with is the space $C_0^\infty$ of smooth functions that vanish at infinity.

The following is a sufficient condition for a semi-group to be Feller.


Theorem A.2 (Section 8, Theorem 2.5, Ethier & Kurtz (2009)). Let $a^{ij} \in C^2(\mathbb{R}^d)$ with $\partial_k\partial_\ell a^{ij}$ bounded for all $i, j, k, \ell \in [d]$, and let $b : \mathbb{R}^d \to \mathbb{R}^d$ be Lipschitz. Then the generator defined by the elliptic operator

$$A f=\frac{1}{2}\sum_{i,j=1}^{d}a^{i j}\partial_{i}\partial_{j}f+\sum_{i=1}^{d}b^{i}\partial_{i}f\,,\tag{A.4}$$

generates a Feller semi-group on $C_0$.

generates a Feller semi-group on C0.

Next, we will state a set of equivalent conditions for convergence of Feller processes.

Theorem A.3 (Theorem 17.25, Kallenberg (2021)). Let $X, X^1, X^2, X^3, \cdots$ be Feller processes in $S$ with semi-groups $(T_t), (T_{n,t})$ and generators $(A, D), (A_n, D_n)$, respectively, and fix a core $D_0$ for $A$. Then these conditions are equivalent:

(i) for any $f \in D_0$, there exists some $f_n \in D_n$ with $f_n \to f$ and $A_n f_n \to A f$,

(ii) $T_{n,t} \to T_t$ strongly for each $t > 0$,

(iii) $T_{n,t} f \to T_t f$ for every $f \in C_0$, uniformly for bounded $t > 0$,

(iv) $X_0^n \stackrel{d}{\to} X_0$ in $S$ implies $X^n \stackrel{d}{\to} X$ in the Skorohod topology of $D_{\mathbb{R}_+,S}$.

We note that it is common to choose the core $D_0 = C_0^\infty$, and that checking the first condition is sufficient for convergence in the Skorohod topology. The next result will help us apply the above criterion to continuous time interpolated Markov chains.

Theorem A.4 (Theorem 17.28, Kallenberg (2021)). Let $Y^1, Y^2, Y^3, \cdots$ be discrete time Markov chains in $S$ with transition operators $U_1, U_2, U_3, \cdots$, and let $X$ be a Feller process with semi-group $(T_t)$ and generator $A$. Fix a core $D_0$ for $A$, and let $0 < h_n \to 0$. Then conditions (i)-(iv) of Theorem A.3 remain equivalent for the operators and processes

$$A_{n}=h_{n}^{-1}(U_{n}-I)\,,\quad T_{n,t}=U_{n}^{\lfloor t/h_{n}\rfloor}\,,\quad X_{t}^{n}=Y_{\lfloor t/h_{n}\rfloor}^{n}\,.\tag{A.5}$$

Here we note that in our applications, we will always choose $h_n = n^{-2p}$, which essentially depends on the width of the neural network.

At this point, we still need to check that the generators $A_n$ converge to $A$ with respect to the core $D_0 = C_0^\infty$.

For this goal, we will use a lemma from Stroock & Varadhan (1997). Here we will define $\Pi_n(x, dy)$ to be the Markov transition kernel of $Y^n$, and let

$$a_{n}^{ij}(x)=\frac{1}{h_{n}}\int_{|y-x|\leq1}(y_{i}-x_{i})(y_{j}-x_{j})\,\Pi_{n}(x,dy)\,,\tag{A.6}$$

$$b_{n}^{i}(x)=\frac{1}{h_{n}}\int_{|y-x|\leq1}(y_{i}-x_{i})\,\Pi_{n}(x,dy)\,,\tag{A.7}$$

$$\Delta_{n}^{\epsilon}(x)=\frac{1}{h_{n}}\Pi_{n}(x,\mathbb{R}^{d}\setminus B(x,\epsilon))\,.\tag{A.8}$$

Lemma A.5 (Lemma 11.2.1, Stroock & Varadhan (1997)). The following two conditions are equivalent:

(i) For any $R > 0$, $\epsilon > 0$ we have that

$$\lim_{n\to\infty}\sup_{|x|\leq R}\|a_{n}(x)-a(x)\|_{op}+|b_{n}(x)-b(x)|+\Delta_{n}^{\epsilon}(x)=0\,,$$

(ii) For each $f \in C_0^\infty(\mathbb{R}^d)$, we have that

$$A_{n}f\to Af\,,\tag{A.8}$$

uniformly on compact sets of $\mathbb{R}^d$, where $A_n = h_n^{-1}(U_n - I)$ is as in Theorem A.4 and $A$ is defined in (A.4).
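To illustrate condition (i) (again a sketch of our own, with an arbitrarily chosen drift and diffusion), the snippet below computes the truncated moments (A.5)-(A.7) empirically for a one-step Euler-type chain started at a fixed point $x$, and checks that $a_n \to \sigma(x)^2$, $b_n \to b(x)$ and $\Delta_n^\epsilon \to 0$ as $h_n \to 0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Drift and diffusion of the target SDE (chosen arbitrarily for illustration).
b = lambda x: -x
sigma = lambda x: 1.0 + 0.5 * np.sin(x)

def truncated_moments(x, h_n, eps=0.5, n_mc=10 ** 6):
    """Empirical a_n, b_n, Delta_n^eps of (A.5)-(A.7) for a one-step Euler chain from x."""
    xi = rng.standard_normal(n_mc)
    y = x + b(x) * h_n + sigma(x) * np.sqrt(h_n) * xi   # one Markov chain step from x
    dy = y - x
    inside = np.abs(dy) <= 1.0                          # truncation |y - x| <= 1
    a_n = np.mean(dy ** 2 * inside) / h_n
    b_n = np.mean(dy * inside) / h_n
    delta_n = np.mean(np.abs(dy) > eps) / h_n
    return a_n, b_n, delta_n

x = 0.3
print("target:  a(x) =", sigma(x) ** 2, "  b(x) =", b(x))
for h_n in [1e-1, 1e-2, 1e-3]:
    print("h_n=%.0e -> a_n=%.4f  b_n=%.4f  Delta_n=%.2e" % ((h_n,) + truncated_moments(x, h_n)))
```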

We will also need to weaken the definition slightly for non-Lipschitz coefficients.

Definition A.6. We say a sequence of processes $X^n$ converges locally to $X$ in the Skorohod topology if, for any $r > 0$, the stopping times

$$\tau^{n}:=\inf\left\{t\geq0:|X_{t}^{n}|\geq r\right\}\,,\quad\tau:=\inf\left\{t\geq0:|X_{t}|\geq r\right\}\,,\tag{A.9}$$

are such that $X^n_{t\wedge\tau^n}$ converges to $X_{t\wedge\tau}$ in the Skorohod topology.

Finally, to summarize everything into a useful form for this paper, we will state the following proposition.

Proposition A.7 (Convergence of Markov Chains to SDE, Proposition A.6, Li et al. (2022)). Let $Y^n$ be a discrete time Markov chain on $\mathbb{R}^N$ defined by the following update for $p, \delta > 0$

$$Y_{\ell+1}^{n}=Y_{\ell}^{n}+\frac{\widehat{b}_{n}(Y_{\ell}^{n},\omega_{\ell}^{n})}{n^{2p}}+\frac{\sigma_{n}(Y_{\ell}^{n})}{n^{p}}\xi_{\ell}^{n}+O(n^{-2p-\delta})\,,\tag{A.10}$$

where $\xi_\ell^n \in \mathbb{R}^N$ are iid random variables with zero mean, identity covariance, and moments uniformly bounded in $n$. Furthermore, $\omega_\ell^n$ are also iid random variables such that $\mathbb{E}[\widehat{b}_n(Y_\ell^n, \omega_\ell^n) \,|\, Y_\ell^n = y] = b_n(y)$ and $\widehat{b}_n(y, \omega_\ell^n)$ has uniformly bounded moments in $n$. Finally, $\sigma_n$ is a deterministic function, and the remainder terms in $O(n^{-2p-\delta})$ have uniformly bounded moments in $n$.

Suppose $b_n, \sigma_n$ are uniformly Lipschitz functions in $n$ and converge to $b, \sigma$ uniformly on compact sets. Then in the limit as $n \to \infty$, the process $X_t^n = Y^n_{\lfloor t n^{2p} \rfloor}$ converges in distribution to the solution of the following SDE in the Skorohod topology of $D_{\mathbb{R}_+, \mathbb{R}^N}$

$$dX_{t}=b(X_{t})\,dt+\sigma(X_{t})\,dB_{t}\,,\quad X_{0}=\lim_{n\to\infty}Y_{0}^{n}\,.\tag{A.11}$$

Suppose instead that $b_n, \sigma_n$ are only locally Lipschitz (but still uniformly in $n$); then $X^n$ converges locally to $X$ in the same topology (see Definition A.6). More precisely, for any fixed $r > 0$, we consider the stopping times

$$\tau^{n}:=\inf\left\{t\geq0:|X_{t}^{n}|\geq r\right\}\,,\quad\tau:=\inf\left\{t\geq0:|X_{t}|\geq r\right\}\,,\tag{A.12}$$

then the stopped process $X^n_{t\wedge\tau^n}$ converges in distribution to the stopped solution $X_{t\wedge\tau}$ of the above SDE in the same topology.
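The following simulation is a minimal illustration of Proposition A.7 (our own sketch; the drift, diffusion coefficient and noise distribution are arbitrary choices satisfying the assumptions): it compares samples of $X_1^n = Y^n_{\lfloor n^{2p} \rfloor}$ from a chain of the form (A.10) against Euler-Maruyama samples of the limiting SDE (A.11).

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                            # time scale h_n = n^{-2p}: n^{2p} steps per unit time
b = lambda x: 1.0 - x                              # drift of the limiting SDE (illustrative choice)
sigma = lambda x: 0.5 * np.sqrt(1.0 + x ** 2)      # Lipschitz diffusion coefficient

def markov_chain(n, n_samples, t_final=1.0, x0=0.0):
    """Simulate Y^n of the form (A.10); the noise is uniform (zero mean, unit variance)."""
    y = np.full(n_samples, x0)
    for _ in range(int(t_final * n ** (2 * p))):
        xi = rng.uniform(-np.sqrt(3), np.sqrt(3), n_samples)   # non-Gaussian noise is allowed
        y = y + b(y) / n ** (2 * p) + sigma(y) / n ** p * xi
    return y

def euler_sde(n_samples, t_final=1.0, x0=0.0, dt=1e-3):
    """Euler-Maruyama samples of dX = b(X) dt + sigma(X) dB."""
    x = np.full(n_samples, x0)
    for _ in range(int(t_final / dt)):
        x = x + b(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal(n_samples)
    return x

chain, sde = markov_chain(n=200, n_samples=20000), euler_sde(n_samples=20000)
print("chain at t=1: mean %.3f  std %.3f" % (chain.mean(), chain.std()))
print("SDE   at t=1: mean %.3f  std %.3f" % (sde.mean(), sde.std()))
```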

B Technical Lemmas For Shaped Activations

Here we will recall and slightly modify a collection of definitions and technical results from Appendices B and C of Li et al. (2022), which are related to the shaped activations used in the main theorems. To start, we will let $\varphi(x) = \max(x, 0)$, and recall the ReLU-like activation function

$$\varphi_{s}(x)=s_{+}\max(x,0)+s_{-}\min(x,0)=s_{+}\varphi(x)-s_{-}\varphi(-x)\,.\tag{B.1}$$

Let $g \sim \mathcal{N}(0, 1)$; then we restate Li et al. (2022, Lemma B.3, B.6):

$$\mathbb{E}\,\varphi(g)=\frac{1}{\sqrt{2\pi}}\,,\quad\mathbb{E}\,\varphi(g)^{2}=\frac{1}{2}\,,\quad\mathbb{E}\,\varphi(g)^{4}=\frac{3}{2}\,,$$

$$\mathbb{E}\,\varphi_{s}(g)=\frac{s_{+}-s_{-}}{\sqrt{2\pi}}\,,\quad\mathbb{E}\,\varphi_{s}(g)^{2}=\frac{s_{+}^{2}+s_{-}^{2}}{2}\,,\quad\mathbb{E}\,\varphi_{s}(g)^{4}=\frac{3}{2}(s_{+}^{4}+s_{-}^{4})\,.\tag{B.2}$$

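These moments are straightforward to check numerically; the following minimal Monte Carlo sanity check is our own, with arbitrarily chosen slopes $s_\pm$.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(2 * 10 ** 6)
relu = lambda x: np.maximum(x, 0.0)

s_plus, s_minus = 1.2, 0.7                         # arbitrary slopes for the ReLU-like activation
phi_s = s_plus * relu(g) - s_minus * relu(-g)

print("E phi(g)      %.4f  vs  %.4f" % (relu(g).mean(), 1 / np.sqrt(2 * np.pi)))
print("E phi(g)^2    %.4f  vs  0.5" % (relu(g) ** 2).mean())
print("E phi(g)^4    %.4f  vs  1.5" % (relu(g) ** 4).mean())
print("E phi_s(g)    %.4f  vs  %.4f" % (phi_s.mean(), (s_plus - s_minus) / np.sqrt(2 * np.pi)))
print("E phi_s(g)^2  %.4f  vs  %.4f" % ((phi_s ** 2).mean(), (s_plus ** 2 + s_minus ** 2) / 2))
print("E phi_s(g)^4  %.4f  vs  %.4f" % ((phi_s ** 4).mean(), 1.5 * (s_plus ** 4 + s_minus ** 4)))
```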
We will also recall the definitions

$$\bar{J}_{p,r}(\rho):=\mathbb{E}\,\varphi(g)^{p}\varphi(\hat{g})^{r}\,,\quad K_{p,r}(\rho):=\mathbb{E}\,\varphi_{s}(g)^{p}\varphi_{s}(\hat{g})^{r}\,,\tag{B.3}$$

where $g, w$ are iid $\mathcal{N}(0,1)$ and we define $\hat{g} = \rho g + q w$ with $q = \sqrt{1-\rho^2}$. We will also use the shorthand notation $\bar{J}_p := \bar{J}_{p,p}$, $K_p := K_{p,p}$.

Here we recall from Cho & Saul (2009) and Li et al. (2022, Lemma B.7) that

$$\bar{J}_{1}(\rho)=\frac{\sqrt{1-\rho^{2}}+(\pi-\arccos\rho)\rho}{2\pi}\,,\quad K_{1}(\rho)=(s_{+}^{2}+s_{-}^{2})\bar{J}_{1}(\rho)-2s_{+}s_{-}\bar{J}_{1}(-\rho)\,.\tag{B.4}$$

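The closed form for $\bar{J}_1$ is also easy to verify numerically; the sketch below (our own) compares a Monte Carlo estimate of $\mathbb{E}\,\varphi(g)\varphi(\hat{g})$ with the Cho & Saul (2009) formula above.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def J1_closed_form(rho):
    return (np.sqrt(1 - rho ** 2) + (np.pi - np.arccos(rho)) * rho) / (2 * np.pi)

def J1_monte_carlo(rho, n_mc=10 ** 6):
    g, w = rng.standard_normal((2, n_mc))
    g_hat = rho * g + np.sqrt(1 - rho ** 2) * w          # correlated standard Gaussian
    return np.mean(relu(g) * relu(g_hat))

for rho in [-0.8, -0.3, 0.0, 0.5, 0.9]:
    print(f"rho={rho:+.1f}   closed form {J1_closed_form(rho):.5f}   Monte Carlo {J1_monte_carlo(rho):.5f}")
```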
Next, we will slightly modify a Taylor expansion result from Li et al. (2022, Lemma C.1).

Lemma B.1 (Taylor Expand Shaping Correlation). Recall $\varphi_s(x) = s_+ \max(x, 0) + s_- \min(x, 0)$. Let $s_\pm = 1 + \frac{c_\pm}{n^p}$ for $p > 0$; then we have the following Taylor expansion

$$cK_{1}(\rho)=\rho+\frac{\nu(\rho)}{n^{2p}}+O(n^{-3p})\,,\quad\nu(\rho)=\frac{(c_{+}-c_{-})^{2}}{2\pi}\left(\sqrt{1-\rho^{2}}-\rho\arccos\rho\right)\,.\tag{B.5}$$

Proof. We start by expanding the formula for $K_1(\rho)$ to get

$$cK_{1}(\rho)=\frac{2}{s_{+}^{2}+s_{-}^{2}}\frac{1}{2\pi}\left((s_{+}^{2}+s_{-}^{2})\left(\sqrt{1-\rho^{2}}+(\pi-\arccos\rho)\rho\right)-2s_{+}s_{-}\left(\sqrt{1-\rho^{2}}-(\arccos\rho)\rho\right)\right)\,.\tag{B.6}$$

At this point, we can plug in $s_\pm = 1 + \frac{c_\pm}{n^p}$, and Taylor expanding gives us

$$cK_{1}(\rho)=\frac{\rho\arccos(\rho)}{\pi}+\frac{\rho\left(\pi-\arccos(\rho)\right)}{\pi}\tag{B.7}$$

$$\qquad+n^{-2p}\left(\frac{-\rho c_{+}^{2}\arccos(\rho)+2\rho c_{+}c_{-}\arccos(\rho)-\rho c_{-}^{2}\arccos(\rho)}{2\pi}+\frac{c_{+}^{2}\sqrt{1-\rho^{2}}-2c_{+}c_{-}\sqrt{1-\rho^{2}}+c_{-}^{2}\sqrt{1-\rho^{2}}}{2\pi}\right)\tag{B.8}$$

$$\qquad+O\left(n^{-3p}\right)\,.\tag{B.9}$$

Simplifying the expressions gives us the desired result of

$$cK_{1}(\rho)=\rho+\frac{\nu(\rho)}{n^{2p}}+O(n^{-3p})\,.\tag{B.10}$$

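As a numerical sanity check on Lemma B.1 (a sketch of our own, with an arbitrary choice of $c_\pm$ and $\rho$), the snippet below estimates $n^{2p}(cK_1(\rho) - \rho)$ by Monte Carlo and compares it against $\nu(\rho)$; the product $g\hat{g}$ is used as a control variate since $\mathbb{E}\,g\hat{g} = \rho$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def shaped_minus_linear(rho, s_plus, s_minus, n_mc=4 * 10 ** 6):
    """Monte Carlo estimate of c K_1(rho) - rho, pairing the shaped product with g * g_hat."""
    g, w = rng.standard_normal((2, n_mc))
    g_hat = rho * g + np.sqrt(1 - rho ** 2) * w
    phi_s = lambda x: s_plus * relu(x) - s_minus * relu(-x)
    c = 2.0 / (s_plus ** 2 + s_minus ** 2)
    return np.mean(c * phi_s(g) * phi_s(g_hat) - g * g_hat)

def nu(rho, c_plus, c_minus):
    """The correction term from Lemma B.1."""
    return (c_plus - c_minus) ** 2 / (2 * np.pi) * (np.sqrt(1 - rho ** 2) - rho * np.arccos(rho))

c_plus, c_minus, p, rho = 1.0, -1.0, 0.5, 0.4
print("nu(rho) = %.4f" % nu(rho, c_plus, c_minus))
for n in [50, 200, 800]:
    s_plus, s_minus = 1 + c_plus / n ** p, 1 + c_minus / n ** p
    estimate = n ** (2 * p) * shaped_minus_linear(rho, s_plus, s_minus)
    print(f"n={n:4d}   n^(2p) (c K_1(rho) - rho) = {estimate:.4f}")
```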
Similar to Li et al. (2022, Lemma C.2), we can also approximate the fourth moment of shaped activation functions via the fourth moment of Gaussians.

Lemma B.2 (Fourth Moment Approximation). Consider the jointly Gaussian random variables

$$[g^{\alpha}]_{\alpha=1}^{4}\sim\mathcal{N}\left(0,\,[\rho^{\alpha\beta}]_{\alpha,\beta=1}^{4}\right)\,,$$

where $\rho^{\alpha\alpha} = 1$ for all $\alpha$. Let $\varphi_{s}(x)=s_{+}\max(x,0)+s_{-}\min(x,0)$ with coefficients $s_{\pm}=1+\frac{c_{\pm}}{n^{p}}$; then we have

$$\mathbb{E}\prod_{\alpha=1}^{4}\varphi_{s}(g^{\alpha})=\mathbb{E}\prod_{\alpha=1}^{4}g^{\alpha}+O(n^{-p})=\rho^{12}\rho^{34}+\rho^{13}\rho^{24}+\rho^{14}\rho^{23}+O(n^{-p})\,.\tag{B.11}$$

Proof. Observe that we can write $\varphi_s(x)$ as a perturbation of the identity

$$\varphi_{s}(x)=x+\frac{1}{n^{p}}\left(c_{+}\varphi(x)-c_{-}\varphi(-x)\right)=x+O(n^{-p})\,.$$

This means we can approximate the product of shaped activations by the product of the Gaussians themselves:

$$\prod_{\alpha=1}^{4}\varphi_{s}(g^{\alpha})=\prod_{\alpha=1}^{4}g^{\alpha}+O(n^{-p})\,.\tag{B.12}$$

Finally, we can use the Isserlis theorem to write

$$\mathbb{E}\,\prod_{\alpha=1}^{4}g^{\alpha}=\mathbb{E}\,g^{1}g^{2}\,\mathbb{E}\,g^{3}g^{4}+\mathbb{E}\,g^{1}g^{3}\,\mathbb{E}\,g^{2}g^{4}+\mathbb{E}\,g^{1}g^{4}\,\mathbb{E}\,g^{2}g^{3}=\rho^{12}\rho^{34}+\rho^{13}\rho^{24}+\rho^{14}\rho^{23}\,,\tag{B.13}$$

which is the desired result.

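Lemma B.2 can be checked by simulation as well; the following sketch (our own, with an arbitrarily chosen positive definite correlation matrix) compares a Monte Carlo estimate of $\mathbb{E}\prod_{\alpha}\varphi_s(g^\alpha)$ against the Isserlis expression, with the discrepancy shrinking as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# A valid 4x4 correlation matrix (unit diagonal, positive definite), chosen for illustration.
rho = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.4, 0.2],
                [0.2, 0.4, 1.0, 0.3],
                [0.1, 0.2, 0.3, 1.0]])
assert np.all(np.linalg.eigvalsh(rho) > 0)

isserlis = rho[0, 1] * rho[2, 3] + rho[0, 2] * rho[1, 3] + rho[0, 3] * rho[1, 2]
g = rng.multivariate_normal(np.zeros(4), rho, size=10 ** 6)    # samples of [g^1, ..., g^4]

c_plus, c_minus, p = 1.0, -1.0, 0.5
for n in [10, 100, 1000]:
    s_plus, s_minus = 1 + c_plus / n ** p, 1 + c_minus / n ** p
    phi_s = g * (g > 0) * s_plus + g * (g < 0) * s_minus       # shaped ReLU applied entrywise
    estimate = np.mean(np.prod(phi_s, axis=1))
    print(f"n={n:5d}   E prod phi_s(g^a) = {estimate:.4f}   Isserlis value = {isserlis:.4f}")
```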
Furthermore, we can characterize the covariance of the following random variables

$$R^{\alpha\beta}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left[c\varphi_{s}(g_{i}^{\alpha})\varphi_{s}(g_{i}^{\beta})-cK_{1}(\rho^{\alpha\beta})\right]\,,\tag{B.14}$$

where the random vectors $[g_i^\alpha, g_i^\beta]$ are iid copies over $i$ of the same Gaussian vector. This follows from a modification of Li et al. (2022, Lemma C.3).

Lemma B.3 (Covariance of $R^{\alpha\beta}$). Let $R^{\alpha\beta}$ be defined as above. Then if $s_\pm = 1 + \frac{c_\pm}{n^p}$ for the shaped activation, we have that

$$\mathbb{E}\,R^{\alpha\beta}R^{\gamma\delta}=\rho^{\alpha\gamma}\rho^{\beta\delta}+\rho^{\alpha\delta}\rho^{\beta\gamma}+O(n^{-p})\,.\tag{B.15}$$

Proof. Firstly, we note that since the entries of the sum in $R^{\alpha\beta}$ are iid with zero mean, the cross terms vanish and we only need to compute the moments for a single entry. This means

$$\mathbb{E}\,R^{\alpha\beta}R^{\gamma\delta}=\mathbb{E}\,c^{2}\left(\varphi_{s}(g_{i}^{\alpha})\varphi_{s}(g_{i}^{\beta})-K_{1}(\rho^{\alpha\beta})\right)\left(\varphi_{s}(g_{i}^{\gamma})\varphi_{s}(g_{i}^{\delta})-K_{1}(\rho^{\gamma\delta})\right)\,.\tag{B.16}$$

At this point, we observe that $c = 1 + O(n^{-p})$ and $K_1(\rho) = \rho + O(n^{-p})$, and we can write

$$\mathbb{E}\,R^{\alpha\beta}R^{\gamma\delta}=\mathbb{E}\left(\varphi_{s}(g_{i}^{\alpha})\varphi_{s}(g_{i}^{\beta})-\rho^{\alpha\beta}\right)\left(\varphi_{s}(g_{i}^{\gamma})\varphi_{s}(g_{i}^{\delta})-\rho^{\gamma\delta}\right)+O(n^{-p})\,.\tag{B.17}$$

Finally, the fourth moment approximation Lemma B.2 gives us the desired result

$$\begin{aligned}\mathbb{E}\,R^{\alpha\beta}R^{\gamma\delta}&=\rho^{\alpha\beta}\rho^{\gamma\delta}+\rho^{\alpha\gamma}\rho^{\beta\delta}+\rho^{\alpha\delta}\rho^{\beta\gamma}-\rho^{\alpha\beta}\rho^{\gamma\delta}-\rho^{\alpha\beta}\rho^{\gamma\delta}+\rho^{\alpha\beta}\rho^{\gamma\delta}+O(n^{-p})\\&=\rho^{\alpha\gamma}\rho^{\beta\delta}+\rho^{\alpha\delta}\rho^{\beta\gamma}+O(n^{-p})\,.\end{aligned}\tag{B.18}$$
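Finally, here is a minimal Monte Carlo check of Lemma B.3 (our own sketch, with an equicorrelated Gaussian vector); note the agreement is only up to the $O(n^{-p})$ correction and the Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equicorrelated 4x4 correlation matrix for (g^alpha, g^beta, g^gamma, g^delta).
rho = np.full((4, 4), 0.5)
np.fill_diagonal(rho, 1.0)

n, p = 400, 0.5                                    # "width" n and shaping rate p
c_plus, c_minus = 1.0, -1.0
s_plus, s_minus = 1 + c_plus / n ** p, 1 + c_minus / n ** p
c = 2.0 / (s_plus ** 2 + s_minus ** 2)

def K1(r):
    """K_1(rho) = (s_+^2 + s_-^2) J_1(rho) - 2 s_+ s_- J_1(-rho), with J_1 the arc-cosine kernel."""
    J1 = lambda t: (np.sqrt(1 - t ** 2) + (np.pi - np.arccos(t)) * t) / (2 * np.pi)
    return (s_plus ** 2 + s_minus ** 2) * J1(r) - 2 * s_plus * s_minus * J1(-r)

n_samples = 5000
g = rng.multivariate_normal(np.zeros(4), rho, size=(n_samples, n))   # iid over i = 1, ..., n
phi_s = g * (g > 0) * s_plus + g * (g < 0) * s_minus                 # shaped ReLU, entrywise

# R^{ab} and R^{gd} as in (B.14), one value per sample.
R_ab = np.sum(c * phi_s[..., 0] * phi_s[..., 1] - c * K1(rho[0, 1]), axis=1) / np.sqrt(n)
R_gd = np.sum(c * phi_s[..., 2] * phi_s[..., 3] - c * K1(rho[2, 3]), axis=1) / np.sqrt(n)

target = rho[0, 2] * rho[1, 3] + rho[0, 3] * rho[1, 2]
print("Monte Carlo E[R^{ab} R^{gd}] = %.3f   rho^{ag} rho^{bd} + rho^{ad} rho^{bg} = %.3f"
      % (np.mean(R_ab * R_gd), target))
```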