diff --git "a/FDbQGCAViI/FDbQGCAViI.md" "b/FDbQGCAViI/FDbQGCAViI.md" new file mode 100644--- /dev/null +++ "b/FDbQGCAViI/FDbQGCAViI.md" @@ -0,0 +1,1339 @@ +# The Eigenlearning Framework: A Conservation Law Perspective On Kernel Regression And Wide Neural Networks + +James B. Simon james.simon@berkeley.edu University of California, Berkeley and Generally Intelligent Madeline Dickens *dickens@berkeley.edu* University of California, Berkeley Dhruva Karkada dkarkada@berkeley.edu University of California, Berkeley Michael R. DeWeese deweese@berkeley.edu University of California, Berkeley Reviewed on OpenReview: *https: // openreview. net/ forum? id= FDbQGCAViI* + +## Abstract + +We derive simple closed-form estimates for the test risk and other generalization metrics of kernel ridge regression (KRR). Relative to prior work, our derivations are greatly simplified and our final expressions are more readily interpreted. In particular, we show that KRR +can be interpreted as an explicit competition among kernel eigenmodes for a fixed supply of a quantity we term "learnability." These improvements are enabled by a sharp conservation law which limits the ability of KRR to learn any orthonormal basis of functions. Test risk and other objects of interest are expressed transparently in terms of our conserved quantity evaluated in the kernel eigenbasis. We use our improved framework to: +i) provide a theoretical explanation for the "deep bootstrap" of Nakkiran et al. (2020), +ii) generalize a previous result regarding the hardness of the classic parity problem, iii) fashion a theoretical tool for the study of adversarial robustness, and iv) draw a tight analogy between KRR and a canonical model in statistical physics. + +## 1 Introduction + +Kernel ridge regression (KRR) is a popular, tractable learning algorithm that has seen a surge of attention due to equivalences to infinite-width neural networks (NNs) (Lee et al., 2018; Jacot et al., 2018). In this paper, we derive a simple theory of the generalization of KRR that yields estimators for many quantities of interest, including test risk and the covariance of the predicted function. Our eigenframework is consistent with other recent works, such as those of Canatar et al. (2021) and Jacot et al. (2020), but is simpler to work with and easier to derive. Our eigenframework paints a new picture of KRR as an explicit competition between eigenmodes for a fixed budget of a quantity we term "learnability," and downstream generalization metrics can be expressed entirely in terms of the learnability received by each mode (Equations 7-14). + +This picture stems from a *conservation law* latent in KRR which limits any kernel's ability to learn any complete basis of target functions. The conserved quantity, learnability, is the inner product of the target and predicted functions and, as we show, can be interpreted as a measure of how well the target function can + +Code to reproduce results is available at https://github.com/james-simon/eigenlearning. +be learned by a particular kernel given n training examples. We prove that the total learnability, summed over a complete basis of target functions (such as the kernel eigenbasis), is no greater than the number of training samples, with equality at zero ridge parameter. This formal conservation law makes concrete the recent folklore intuition that, given n samples, KRR and wide NNs can reliably learn at most n orthogonal functions (Mei & Montanari, 2019; Bordelon et al., 2020). 
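As a minimal numerical illustration of this conservation law (a toy sketch, not part of the paper's released code), consider ridgeless kernel regression on the unit circle discretized into M points with a hypothetical rotation-invariant kernel exp(cos(θ − θ′)), mirroring the setup of Figure 1. Using each kernel eigenfunction in turn as the target and summing the resulting D-learnabilities over the full eigenbasis recovers exactly n:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 64, 8                                            # domain size and number of training points
thetas = 2 * np.pi * np.arange(M) / M                   # unit circle discretized into M points
idx = rng.choice(M, size=n, replace=False)              # the training set D

K = np.exp(np.cos(thetas[:, None] - thetas[None, :]))   # a hypothetical rotation-invariant kernel
_, U = np.linalg.eigh(K / M)                            # Mercer eigensystem under the uniform measure
eigenfunctions = np.sqrt(M) * U.T                       # rows are orthonormal: (1/M) sum_x phi_i phi_j = delta_ij

K_DD = K[np.ix_(idx, idx)]
K_xD = K[:, idx]

total = 0.0
for phi in eigenfunctions:                              # use each eigenfunction in turn as the target
    f_hat = K_xD @ np.linalg.solve(K_DD, phi[idx])      # ridgeless KRR predictor on the whole domain
    total += np.mean(phi * f_hat)                       # D-learnability <phi, f_hat>
print(total)                                            # equals n up to floating-point error
```

Any other complete orthonormal basis on the grid gives the same total, since the sum depends only on the projector structure of ridgeless KRR.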
The conservation of this quantity suggests that it will prove useful for understanding the generalization of KRR. This intuition is borne out by our subsequent analysis: we derive a set of simple, closed-form estimates for test risk and other objects of interest and find that all of them can be transparently expressed in terms of eigenmode learnabilities. Our expressions are more compact and readily interpretable than those of prior work and constitute a major simplification. Our derivation of these estimators is significantly simpler and more accessible than those of prior work, which relied on the heavy mathematical machinery of replica calculations and random matrix theory to obtain comparable results. By contrast, our approach requires only basic linear algebra, leveraging our conservation law at a critical juncture to bypass the need for advanced techniques. + +We use our user-friendly framework to shed light on several topics of interest: +i) We provide a compelling theoretical explanation for the "deep bootstrap" phenomenon of Nakkiran et al. (2020) and identify two regimes of NN fitting occurring at early and late training times. + +ii) We generalize a previous result regarding the hardness of the parity problem for rotation-invariant kernels. Our technique is simple and illustrates the power of our framework. + +iii) We craft an estimator for predicted function *smoothness*, a new tool for the theoretical study of adversarial robustness. + +iv) We draw a tight analogy between our framework and the free Fermi gas, a well-studied statistical physics system, and thereby transfer insights into the free Fermi gas over to KRR. + +We structure these applications as a series of vignettes. + +The paper is organized as follows. We give preliminaries in Section 2. We define our conserved quantity and state its basic properties in Section 3. We characterize the generalization of KRR in terms of this quantity in Section 4. We check these results experimentally in Section 5. Section 6 consists of a series of short vignettes discussing topics (i)-(iv). We conclude in Section 7. + +## 1.1 Related Work + +The present line of work has its origins with early studies of the generalization of Gaussian process regression (Opper, 1997; Sollich, 1999), with Sollich (2001) deriving an estimator giving the expected test risk of KRR in terms of the eigenvalues of the kernel operator and the eigendecomposition of the target function. We refer to this result as the "omniscient risk estimator1," as it assumes full knowledge of the data distribution and target function. Bordelon et al. (2020) and Canatar et al. (2021) brought these ideas into a modern context, deriving the omniscient risk estimator with a replica calculation and connecting it to the "neural tangent kernel" (NTK) theory of wide neural networks (Jacot et al., 2018), with (Loureiro et al., 2021) extending the result to arbitrary convex losses. Sollich & Halees (2002); Caponnetto & De Vito (2007); Spigler et al. + +(2020); Cui et al. (2021); Mallinar et al. (2022) study the asymptotic consistency and convergence rates of KRR in a similar vein. Jacot et al. (2020); Wei et al. (2022) used random matrix theory to derive a risk estimator requiring only training data. In parallel with work on KRR, Dobriban & Wager (2018); Wu & Xu (2020); Richards et al. (2021); Hastie et al. (2022) and Bartlett et al. (2021) developed equivalent results in the context of linear regression using tools from random matrix theory. 
In the present paper, we provide a new interpretation for this rich body of work in terms of explicit competition between eigenmodes, provide simplified derivations of the main results of this line of work, and break new ground with applications to 1We borrow this terminology from Wei et al. (2022). + +![2_image_0.png](2_image_0.png) + +Figure 1: **Toy problem illustrating our conservation law. (A)** The task domain: the unit circle discretized into M = 10 points, n of which comprise the dataset D (filled circles). (B) The 10 eigenfunctions of a rotation-invariant kernel on this domain, grouped into degenerate pairs and shifted vertically for clarity. + +(C) We use each eigenfunction ϕk in turn as the target function. For each ϕk, we compute training targets ϕk(D), obtain a predicted function ˆfk in a standard supervised learning setup, and subsequently compute Dlearnability. This comprises 10 orthogonal learning problems. **(D,E)** Stacked bar charts with 10 components showing D-learnability for each eigenfunction. The left bar in each pair contains results from NTK regression, while the right bar contains results from wide neural networks. Models vary in activation function and number of hidden layers (HL). Dashed lines indicate n. Learnabilities always sum to n, exactly for kernel regression and approximately for wide networks. +new problems of interest. We compare selected works with ours and provide a dictionary between respective notations in Appendix A. + +All prior works in this line - and indeed most works in machine learning theory more broadly - rely on approximations, asymptotics, or bounds to make any claims about generalization. The conservation law is unique in that it gives a sharp *equality* even at finite dataset size. This makes it a particularly robust starting point for the development of our framework (which does later make approximations). + +In addition to those listed above, many works have investigated the spectral bias of neural networks in terms of both stopping time (Rahaman et al., 2019; Xu et al., 2019b;a; Xu, 2018; Cao et al., 2019; Su & Yang, 2019) and the number of samples (Valle-Perez et al., 2018; Yang & Salman, 2019; Arora et al., 2019). Our investigation into the deep bootstrap ties together these threads of work: we find that the interplay of these two sources of spectral bias is responsible for the deep bootstrap phenomenology. Conservation laws are common across all fields of physics. Such laws provide both meaningful quantities with which to reason and theoretical tools to aid in calculation. In classical mechanics in particular, the identification of a conserved quantity, such as energy or momentum, often greatly simplifies a subsequent calculation of a quantity of interest. Our conservation law serves precisely these purposes in our study, permitting a simplified derivation of the omniscient risk predictor and giving an intuitive variable with which we can characterize the generalization of KRR. Learnability, our conserved quantity, is precisely the +"teacher-student overlap" order parameter common in statistical physics treatments of KRR (e.g. Loureiro et al. (2021)), though its conservation has to our knowledge not been noted before. Kunin et al. (2020) +also study conservation laws obeyed by neural networks, but their conservation laws describe weights during training and are not related to ours. + +## 2 Preliminaries And Notation + +We study a standard supervised learning setting in which n training samples *D ≡ {*xi} +n i=1 are drawn i.i.d. 
+ +from a distribution p over R +d. We wish to learn a (scalar) target function f given noisy evaluations y ≡ (yi) +n i=1 with yi = f(xi) + ηi, with ηi ∼ N (0, ϵ2). As it simplifies later analysis, we assume this N (0, ϵ2) label noise is also applied to test targets2. Our results are easily generalized to vector-valued functions as in Canatar et al. (2021). For scalar functions *g, h*, we define ⟨g, h⟩ ≡ Ex∼p[g(x)h(x)] and ||g||2 ≡ ⟨*g, g*⟩. + +We shall study the KRR predicted function ˆf given by + +$${\hat{f}}(x)=\mathbf{k}_{x{\mathcal{D}}}(\mathbf{K}_{{\mathcal{D}}{\mathcal{D}}}+\delta\mathbf{I}_{n})^{-1}\mathbf{y},$$ +$$(1)$$ +−1y, (1) +where, for a positive-semidefinite kernel K, we have constructed the row vector [kxD]i = K(*x, x*i) and the empirical kernel matrix [KDD]ij = K(xi, xj ) (which we trust to be nonsingular), δ is a ridge parameter, and In is the identity matrix. We wish to minimize test mean squared error (MSE) E +(D)(f) = ||f − ˆf||2 + ϵ 2 and its expectation over training sets E(f) = ED +-E +(D)(f)(where the expectation over D also averages over noise values). We emphasize that, here and in our discussion of learnability, ˆf is understood to be the KRR +predictor given by Equation 1 from training targets generated with target function f. In the classical biasvariance decomposition of test risk, we have bias B(f) = ||f −ED[ +ˆf]||2 +ϵ 2 and variance V(f) = E(f)− B(f). + +We also define train MSE as Etr(f) = 1n Pn i=1(f(xi) − ˆf(xi))2. + +## 2.1 The Kernel Eigensystem + +By Mercer's theorem Mercer (1909), the kernel admits the decomposition K(*x, x*′) = Pi λiϕi(x)ϕi(x +′), with eigenvalues λi ≥ 0 and a basis of eigenfunctions ϕi satisfying ⟨ϕi, ϕj ⟩ = δij . We assume eigenvalues are indexed in descending order. + +As the eigenfunctions form a complete basis, we are free to decompose f and ˆf as + +$$f(x)=\sum_{i}{\bf v}_{i}\phi_{i}(x)\qquad\mbox{and}\qquad{\hat{f}}(x)=\sum_{i}{\hat{\bf v}}_{i}\phi_{i}(x),$$ +$$\left(2\right)$$ + +where v ≡ (vi)i and ˆv ≡ (ˆvi)i are vectors of eigencoefficients. + +## 3 Learnability And Its Conservation Law + +Here we define learnability, our conserved quantity. Learnability is a measure of ˆf defined similarly to test MSE, but it is linear instead of quadratic. For any function f such that ||f|| = 1, let + +$${\cal L}^{({\cal D})}(f)\equiv\langle f,\hat{f}\rangle\quad\mbox{and}\quad{\cal L}(f)\equiv{\mathbb{E}}_{\cal D}\Big{[}{\cal L}^{({\cal D})}(f)\Big{]}\,,\tag{3}$$ + +where, as with MSE, ˆf is given by Equation 1. We refer to L +(D)(f) the D*-learnability* of function f with respect to the kernel and n and refer to L(f) as the *learnability*. Up to normalization, this quantity is akin to the cosine similarity between f and ˆf. We shall show that, for KRR, learnability gives a useful indication of how well a function (particularly a kernel eigenfunction) is learned. Results in this section are rigorous and exact; see Appendix H for proofs. + +We begin by stating several basic properties of learnability to build intuition for the quantity. + +Proposition 3.1. *The following properties of* L +(D), L, {ϕi}, and any f *such that* ||f|| = 1 *hold:* + +$$(a)\ {\mathcal{L}}(\phi_{i}),{\mathcal{L}}^{({\mathcal{D}})}(\phi_{i})\in[0,1].$$ +$$(b)\;\;W h e n\;n=0,\;{\mathcal{L}}^{({\mathcal{D}})}(f)={\mathcal{L}}(f)=0.$$ + +2To study noiseless targets instead, simply subtract ϵ 2from expressions for test MSE. +(c) Let D+ be D ∪ x, where x ∈ X, x /∈ D *is a new data point. 
Then

$${\mathcal{L}}^{({\mathcal{D}}_{+})}(\phi_{i})\geq{\mathcal{L}}^{({\mathcal{D}})}(\phi_{i}).$$

$$(d)\ \frac{\partial}{\partial\lambda_{i}}{\mathcal{L}}^{({\mathcal{D}})}(\phi_{i})\geq0,\ \ \frac{\partial}{\partial\lambda_{i}}{\mathcal{L}}^{({\mathcal{D}})}(\phi_{j})\leq0\ \text{for}\ j\neq i,\ \text{and}\ \frac{\partial}{\partial\delta}{\mathcal{L}}^{({\mathcal{D}})}(\phi_{i})\leq0.$$

$$(e)\ {\mathcal{E}}(f)\geq{\mathcal{B}}(f)\geq(1-{\mathcal{L}}(f))^{2}.$$

Properties (a-c) together give an intuitive picture of the learning process: the learnability of each eigenfunction monotonically increases from zero as the training set grows, attaining its maximum of one in the ridgeless, maximal-data limit. Property (d) shows that the kernel eigenmodes are in competition - increasing one eigenvalue while fixing all others can only improve the learnability of the corresponding eigenfunction, but can only harm the learnabilities of all others - and that regularization only harms eigenfunction learnability. Property (e) gives a lower bound on MSE in terms of learnability and will be useful when we discuss the parity problem.

We now state the conservation law obeyed by learnability. This rule follows from the view of KRR as a projection of f onto the n-dimensional subspace of the RKHS defined by the n samples and is closely related to the "dimension bound" for linear learning rules given by Hsu (2021).

Theorem 3.2 (Conservation of learnability). *For any complete basis of orthogonal functions* F*, when the ridge parameter* δ = 0,

$$\sum_{f\in\mathcal{F}}\mathcal{L}^{(\mathcal{D})}(f)=\sum_{f\in\mathcal{F}}\mathcal{L}(f)=n,$$

(4)

*and when* δ > 0,

$$\sum_{f\in\mathcal{F}}\mathcal{L}^{(\mathcal{D})}(f)<n.$$

(5)

In this sense, overfitting of noise is "overconfidence" on the part of the kernel that the target function lies in the top-n subspace, and "hedging" via a wider distribution of learnability (or sacrificing a portion of the learnability budget to the ridge parameter) lowers E0 and fixes this problem. This overconfidence occurs when the kernel eigenvalues drop sharply around index n and is the cause of double-descent peaks (Belkin et al., 2019), which other works have also found can be avoided with an appropriate ridge parameter (Canatar et al., 2021; Nakkiran et al., 2021).

Equations 10 and 11 show that, remarkably, the bias and variance can be expressed solely in terms of E(f) and E0. Since learnabilities strictly increase as n grows, the bias strictly decreases while the variance can be nonmonotone, as also noted by Canatar et al. (2021). Equation 12 states that train error is related to test error by the target-independent proportionality constant δ²/(n²κ²). Finally, Equations 13 and 14 give the mean and covariance of the predicted eigencoefficients. Different eigencoefficients are uncorrelated, and the variance of ˆvi is proportional to Li² but surprisingly independent of vi.

Each new unit of learnability (from each new training example) is distributed among the set of eigenmodes as

$${\frac{d{\mathcal{L}}_{i}}{d n}}=n^{-1}{\mathcal{E}}_{0}r_{i}={\frac{r_{i}}{\sum_{j}r_{j}+\delta/\kappa}},$$

where ri ≡ Li(1 − Li) is a quantity which represents the rate at which mode i *is being learned*.
As examples are added, each eigenmode's learnability grows in proportion to ri, with a fraction of the learnability budget proportional to δ/κ sacrificed to the ridge parameter as a hedge against overfitting. This learning rate is highest when Li ≈ +1 2 +, and thus it is the partially-learned eigenmodes which most benefit from the addition of new training examples. + +## 4.3 Mse At Low N + +A priori, one might expect a learning rule to obey the tenet that "more data never hurts". However, via the study of double-descent, it has recently become well-understood that this is not true: while risk typically initially decreases w.r.t. n, it can later increase (at least at fixed regularization strength) as n approaches the model's (effective) number of parameters. Defying even the double-descent picture, however, an additional tangle is presented by various experimental plots reported by Bordelon et al. (2020) and Misiakiewicz & Mei (2021) (Figures 3 and 1, respectively), which show the test MSE of KRR increasing w.r.t. n immediately (i.e. with no initial drop) *without* a subsequent peak. In these scenarios, having few samples is worse than having none. What causes this counterintuitive behavior? Here we explain this phenomenon, identifying a spectral condition under which an eigenmode's learning curve is nonmonotonic, and conclude that this is in fact a *different* class of nonmonotonic curve than one gets from double descent. Expanding Equation 9 about n = 0, we find that E(ϕi)|n=0 = 1 and + +$$\left(15\right)$$ +$$\left.\frac{d{\mathcal{E}}(\phi_{i})}{d n}\right|_{n=0}=\frac{1}{\sum_{j}\lambda_{j}+\delta}\left[\frac{\sum_{j}\lambda_{j}^{2}}{\sum_{j}\lambda_{j}+\delta}-2\lambda_{i}\right].$$ +. (16) +5See Mallinar et al. (2022) for a discussion of this interpretation. + +$$(16)$$ + +![7_image_0.png](7_image_0.png) + +$$(17)$$ + +Figure 2: **Predicted learnabilities and MSEs closely match experiment. (A-D)** Learnability of various eigenfunctions on synthetic domains and binary functions over image datasets. Theoretical predictions from Equation 7 (curves) are plotted against experimental values from trained finite networks (circles) and NTK regression (triangles) with varying dataset size n. Error bars show one standard deviation of variation. (E-H) Same as (A-D) for test MSE, with theoretical predictions from Equation 9. +This implies that, at small n, MSE *increases* as samples are added for all modes i such that + +$$\lambda_{i}<\frac{\sum_{j}\lambda_{j}^{2}}{2(\sum_{j}\lambda_{j}+\delta)}.$$ +. (17) +This worsening MSE is due to *overfitting*: confidently mistaking ϕi for more learnable modes. + +This phenomenon is distinct from double descent. Double descent requires a finite number of nonzero eigenvalues (or a spectrum with a sharp cutoff (Canatar et al., 2021)). By contrast, the phenomenon we have identified occurs with any spectrum when attempting to learn a mode with sufficiently small eigenvalue. + +We verify Equation 16 experimentally in Appendix E. + +## 5 Experiments + +Here we describe experiments confirming our main results. The targets are eigenfunctions on three synthetic domains - the unit circle discretized into M points, the (Boolean) hypercube {±1} +d, and the d-sphere — +as well as two-class subsets of MNIST and CIFAR-10. Unless otherwise stated, experiments use a fullyconnected four-hidden-layer ReLU architecture, and finite networks have width 500. The kernel eigenvalues on each synthetic domain group into degenerate sets which we index by k ∈ Z ++. 
Eigenvalues and eigencoefficients for image datasets are approximated numerically from a large sample of training data. Full experimental details can be found in Appendix B. + +Figure 1 illustrates Theorem 3.2 in a toy setting. Modewise D-learnabilities indeed sum to n. Figure 2 compares theoretical predictions for learnability and MSE with experiments on real and synthetic data, finding good agreement in all cases. Appendix C repeats one of these experiments with network widths varying from ∞ to 20, finding good agreement with theory even at narrow width. + +An important question is whether our KRR eigenframework remains predictive even in more realistic regimes with e.g. bigger datasets with structured kernels. We choose not to include in the present work a battery of experiments extending Figures 2 D and H which would establish this. The main reason is that it there is already appreciable evidence that this is true: for example, Wei et al. (2022) demonstrate that similar eigenequations (which one would expect to succeed or fail precisely when our approach succeeds or fails) +find excellent agreement with KRR using empirical ResNet NTKs on CIFAR-100. A secondary reason is computational cost: computing convolutional NTKs, or diagonalizing kernel matrices much bigger than those used in our experiment, is rather expensive. Nonetheless, we do think that testing the KRR eigenframework at scale would be a worthwhile inclusion in experimental followup work. + +## 6 Vignettes 6.1 Explaining The Deep Bootstrap + +The *deep bootstrap* (DB) is a phenomenon observed by Nakkiran et al. (2020) in which the performance of a neural network stopped after a given number of training steps is relatively insensitive to the size of the training set, unless the training set is so small that it has been interpolated. The DB has been studied theoretically on kernel gradient flow (KGF), which describes the training of wide neural networks, by Ghosh et al. (2022) in the toy case in which the data lies on a high-dimensional sphere. Here we give a convincing general explanation of this phenomenon using our framework and identify two regimes of NN fitting in the process. + +Ali et al. (2019) proved that KRR with finite ridge generalizes remarkably similarly to KGF with finite stopping time (a theoretical result confirmed empirically by Lee et al. (2020) for NTKs on image datasets). When considering a standard supervised learning setting, the effective training time corresponding to a ridge δ is τeff ≡ δ +−1n (see Appendix D for a discussion of this scaling). As a proxy for KGF, we shall study KRR +as τeff increases from 0 to ∞. + +In Equation 9, the ridge parameter affects MSE solely through the value of κ. Define κ0 ≡ κ|δ=0, the minimum effective regularization at a given dataset size. In Appendix D, we show that, for powerlaw eigenspectra (as are commonly found in practice), there are two regimes of fitting: +1. **Regularization-limited regime**: τeff ≪ κ +−1 0and κ ≈ τ +−1 eff . The generalization gap is small: +E(f)/Etr(f) ≈ 1. Regularization dominates generalization, and adding training samples does not affect generalization6. + +2. **Data-limited regime**: τeff ≫ κ +−1 0and κ ≈ κ0. The generalization gap is large: E(f)/Etr(f) ≫ 1. + +Data is interpolated, and decreasing regularization (i.e. increasing training time) does not affect generalization. + +We suggest that Nakkiran et al. 
(2020) observe overlapping error curves for different n at early times because, at these times, the model is in the regularization-limited regime. + +We now present an experiment confirming our interpretation. In Figure 3, we reproduce an experiment of Nakkiran et al. (2020) illustrating the DB using ResNets trained on CIFAR-10, juxtaposing it with our proposed model for this phenomenon - KRR with varying τeff - trained on binarized MNIST. The match is excellent: in particular, both plots share the DB phenomenon that error curves for all n overlap at early times, with test and train error peeling off the master curve at roughly the same time. We find that τeff ≈ κ +−1 0 is indeed when the transition between regimes occurs for KRR, matching our theoretical prediction. This experiment strongly suggests that both neural network and KRR fitting can be thought of as a transition between regularization-limited and data-limited regimes. See Appendix D for experimental details. + +## 6.2 The Hardness Of The Parity Problem For Rotation-Invariant Kernels + +The *parity problem* stands as a classic example of a function which is easy to write down but hard for common algorithms to learn. The parity problem was shown to be exponentially hard for Gaussian kernel methods by Bengio et al. (2006). Here we generalize this result to KRR with arbitrary rotation-invariant kernels. Our analysis is made trivial by the use of our framework and is a good illustration of the power of working in terms of learnabilities. + +The problem domain is the hypercube X = {−1, +1} +d, over which we define the subset-parity functions ϕS(x) = (−1) +Pi∈S +1[xi=1], where S ⊆ {1, ..., d} ≡ [d]. The objective is to learn ϕ[d]. + +6We mean here that adding training samples while holding τeff *constant* does not affect generalization. + +![9_image_0.png](9_image_0.png) + +$$(18)$$ + +Figure 3: **We reproduce and explain the deep bootstrap phenomenon in KRR. (A)** An experiment illustrating the deep bootstrap effect using a ResNet-18 on CIFAR-10. (B) An analogous experiment using KRR on binarized MNIST. Eigenlearning predictions closely match experimental curves, and τeff = κ +−1 0 +(vertical dashed lines) faithfully predicts the transition from regularization-limited to data-limited fitting for each n. +For any rotation-invariant kernel (such as the NTK of a fully-connected neural network), {ϕS}S are the eigenfunctions over this domain, with degenerate eigenvalues {λk} +d k=0 depending only on k = |S|. Yang +& Salman (2019) proved that, for any fully-connected kernel, the even and odd eigenvalues each obey a particular ordering in k. Letting d be odd for simplicity, this result and Equation 7 imply that L1 ≥ L3 ≥ +... ≥ Ld. Counting level degeneracies, this is a hierarchy of 2 d−1learnabilities of which Ld is the smallest. + +The conservation law of 3.2 then implies that Ld ≤n 2 d−1 , which, using Proposition 3.1(e), implies that + +$${\cal E}(\phi_{[d]})\geq\left(1-\frac{n}{2^{d-1}}\right)^{2}.$$ + +Obtaining an MSE below a desired threshold ϵ thus requires at least nmin = 2d−1(1−ϵ 1/2) samples, a sample complexity exponential in d. The parity problem is thus hard for all rotation-invariant kernels7. + +## 6.3 Mean-Squared Gradient And Adversarial Robustness + +While the omniscient risk estimate is the most important of the eigenlearning equations, we have also obtained estimators for arbitrary covariances of ˆf. Here we point out a first use for these covariances: studying the smoothness of ˆf and thus its adversarial robustness. 
We fashion an estimator for function smoothness, confirm its accuracy with KRR experiments, and identify a discrepancy between intuition and experiment ripe for further exploration. + +Consider the *mean squared gradient* (MSG) of ˆf defined by G( +ˆf) ≡ Ex h||∇x ˆf(x)||2i= ||∇ ˆf||22 +. This quantity is a measure of function smoothness8. Eigendecomposition yields that + +$$\mathbb{E}_{x}\left[\left\|\nabla_{x}\hat{f}(x)\right\|^{2}\right]=\sum_{ij}\mathbb{E}[\hat{\mathbf{v}}_{i}\hat{\mathbf{v}}_{j}]\,g_{ij}\quad\text{with}\quad g_{ij}\equiv\mathbb{E}_{x}[\nabla_{x}\phi_{i}(x)\cdot\nabla_{x}\phi_{j}(x)]\,.\tag{19}$$ + +The expectation E[ˆviˆvj ] = E[ˆvi] E[ˆvj ]+Cov[ˆvi, ˆvj ] is given by the eigenlearning equations, and the structure constants gij , which encode information about the domain, can be computed analytically for simple domains. + +On the d-sphere, for which the ϕi = ϕkℓ are spherical harmonics, these are g(kℓ),(k′ℓ +′) = k(k + d − 2)δkk′δℓℓ′ . + +Figure 4 shows the MSG of the function learned by KRR with a polynomial kernel trained on k = 1 modes on spheres of increasing dimension d, normalized by the MSG of the ground-truth target function. See Appendix F for experimental details and additional experiments in this vein. True MSG matches predicted MSG well in all settings, particularly at large n and d. + +7We note a number of recent approaches (Daniely & Malach, 2020; Kamath et al., 2020; Hsu, 2021) which give complexity lower bounds for learning parities using a simple degeneracy argument. However, these approaches do not leverage the spectral bias of the kernel and become vacuous when k = d. + +8See ? for further discussion of MSG as a proxy for robustness. + +The study of adversarial robustness currently suffers from a lack of theoretically tractable toy models, and we suggest our expression for MSG can help fill this gap. To illustrate this, we describe an insight that can be drawn from Figure 4. Vulnerability to gradient-based adversarial attacks can be viewed essentially as a phenomenon of surprisingly large gradients with respect to the input. If such vulnerability is an inevitable consequence of high dimension, a common heuristic belief (e.g. Gilmer et al. (2018)), one might expect that G( +ˆf) is generally much larger than G(f) at high dimension. Surprisingly, we see no such effect, a discrepancy which can be investigated further using our framework. + +## 6.4 A Quantum Mechanical Analogy And Universal Learnability Curves + +![10_image_0.png](10_image_0.png) + +Figure 4: Predicted function smoothness matches experiment. Predicted MSG of ˆf +(curves) and empirical MSG for kernel regression (triangles) for k = 1 modes on hyperspheres with varying dimension. +Here we describe a remarkably tight analogy between our picture of KRR generalization and the statistics of the free Fermi gas, a canonical model in statistical physics. This allows certain insights into the free Fermi gas to be ported over to KRR. This correspondence is also of fundamental interest: KRR and the free Fermi gas are both paradigmatic systems in their respective fields, and it is remarkable that their statistics are in fact the same. + +We defer the details of this correspondence to Appendix G and focus here on the takeaways. The free Fermi gas is defined by a scalar µ and a set of states with energies {εi}i, each of which may be occupied or not. 
We find that − ln κ is analogous to µ, the state energies εi are analogous to the negative log-eigenvalues − ln λi, and the states' occupation probabilities are precisely analogous to the eigenmodes' learnabilities.

In all prior work and in our work thus far, the constant κ has been defined only as the solution to an *implicit* equation. Having an explicit equation would be advantageous. Leveraging methods for the study of the free Fermi gas, we find the following *explicit* formula for κ in terms of the *elementary symmetric* polynomials mn when δ = 0:

$$\kappa=\frac{m_{n}(\lambda_{1},\lambda_{2},...)}{m_{n-1}(\lambda_{1},\lambda_{2},...)},\qquad\text{where}\qquad m_{n}(x_{1},x_{2},...)\equiv\sum_{1\leq j_{1}<...<j_{n}}x_{j_{1}}x_{j_{2}}\cdots x_{j_{n}}.$$

For a powerlaw spectrum λi ∝ i^(−α) with α > 1, Mallinar et al. (2022) show that κ0 ≡ κ|δ=0 → $\left(\alpha^{-1}\pi\csc(\alpha^{-1}\pi)\right)^{\alpha}n^{-\alpha}$ as n → ∞.

Extending their analysis to finite ridge, we find that, at large n, $n=n\left(\kappa_{0}/\kappa\right)^{1/\alpha}+\delta/\kappa$, or equivalently

$$1=\left({\frac{\kappa_{0}}{\kappa}}\right)^{\frac{1}{\alpha}}+{\frac{1}{\tau_{\mathrm{eff}}\kappa}}.$$

(22)

Inspection of Equation 22 reveals that when τeff ≪ 1/κ0, then κ ≈ 1/τeff, and when τeff ≫ 1/κ0, then κ ≈ κ0. We thus expect a crossover from the regularization-limited to the data-limited regime when τeff ≈ 1/κ0.

In Figure 9, we plot the theoretical omniscient risk estimate using powerlaw eigenvalues (λi ∼ i^(−α)) and powerlaw eigencoefficients (vi² ∼ i^(−α)) with α = 2 for various n. The close resemblance of these curves to the experimental KRR curves of Figure 3 confirms that our powerlaw toy model is good. If desired, Figures 3(A,B) and Figure 9 can be viewed as the distillation of the DB phenomenon in three stages of increasing artificiality.

It is worth noting that increasing n serves to both decrease κ and decrease E0. For powerlaw spectra, as n increases with fixed τeff, E0 tends to its lower bound of 1. It approaches 1 when the regularization-limited regime begins.

Similar ideas regarding the scaling of KRR are discussed by Cui et al. (2021). The notion of regimes limited by either dataset size or training time is explored using the language of scaling laws by Kaplan et al. (2020) and Bahri et al. (2021). We expect that this analysis could be extended to proper KGF using the analysis of Bordelon & Pehlevan (2021).

## D.1 Deep Bootstrap Experiments

In replicating the deep bootstrap phenomenon for neural networks, we use the same experimental setup as Nakkiran et al. (2020), a ResNet-18 architecture trained on CIFAR-10 with data augmentation (random horizontal flips and random crops). We optimize cross-entropy loss using SGD, with batchsize 128, momentum 0.9, and initial learning rate 0.1 with cosine decay. The test/train curves are the mean of 4 training runs; the curves have been Gaussian-smoothed to remove high-frequency noise artifacts. We perform kernel ridge regression (KRR) using the NTK of a fully-connected network with four hidden layers of width 500. We binarize MNIST into two classes: {0, 1, 2, 3, 4} and {5, 6, 7, 8, 9}. Although the test curves shift slightly depending on the binarization scheme, the theory curves all fall within one standard deviation of the empirical curves for all binarizations we tried. The KRR test/train curves are the mean of 10 training runs.

14More precisely, they study *linear* ridge regression and *linear* gradient flow, but their result also applies to the kernelized versions of these algorithms.
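The crossover described above is easy to see numerically. The sketch below (an illustrative snippet, not from the paper's code release) solves the implicit equation defining κ, in the form used in the analysis above, n = ∑i λi/(λi + κ) + δ/κ, by bisection for a hypothetical powerlaw spectrum λi = i^(−α) with α = 2, and prints κ for several values of τeff = n/δ. For τeff well below 1/κ0 one finds κ ≈ 1/τeff (regularization-limited), while for τeff well above 1/κ0, κ saturates at κ0 (data-limited).

```python
import numpy as np

def solve_kappa(lam, n, delta, iters=200):
    """Bisect (in log space) the implicit equation n = sum_i lam_i/(lam_i + kappa) + delta/kappa."""
    lo, hi = 1e-20, lam.sum() + delta + 1.0
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if (lam / (lam + mid)).sum() + delta / mid > n:
            lo = mid                                   # left-hand side too large: kappa must grow
        else:
            hi = mid
    return np.sqrt(lo * hi)

alpha, M, n = 2.0, 100_000, 1_000
lam = np.arange(1, M + 1, dtype=float) ** -alpha       # hypothetical powerlaw eigenvalues
kappa0 = solve_kappa(lam, n, delta=0.0)                # ridgeless effective regularization

for tau_eff in [1e2, 1e4, 1e6, 1e8]:                   # tau_eff = n / delta
    kappa = solve_kappa(lam, n, delta=n / tau_eff)
    print(f"tau_eff = {tau_eff:.0e}: kappa = {kappa:.2e}, 1/tau_eff = {1/tau_eff:.0e}, kappa0 = {kappa0:.2e}")
```

The printed values show κ tracking 1/τeff below the crossover and leveling off at κ0 above it, which is the regime structure illustrated in Figures 3 and 9.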
+ +![23_image_0.png](23_image_0.png) + +Figure 9: **Deep bootstrap theoretical test/train curves for a synthetic kernel** spectrum. Here we illustrate the deep bootstrap phenomenon using a hypothetical kernel giving powerlaw eigenvalues and eigencoefficients with exponent α = 2. In this setting, we note that finite-data test/train curves simultaneously split from the n → ∞ +"online learning curve" (black dot-dashed line). We see that τeff = κ +−1 0(vertical dashed lines) predicts the transition from regularization-limited to data-limited fitting for each n. (We choose α = 2 for a clean illustration of the phenomenon; empirically, CIFAR-10 and MNIST give spectra with exponents closer to 1.) + +## E Nonmonotonic Mse At Low N + +![23_image_1.png](23_image_1.png) + +Figure 10: For difficult eigenmodes, MSE *increases* with n **due to overfitting.** Predicted MSE (curves) and empirical MSE for trained networks (circles) and NTK regression (triangles) for four eigenmodes on three domains at small n. Dotted lines indicated dE/dn|n=0 as predicted by Equation 16. + +## F Additional Mean-Squared Gradient Experiments + +In the MSG experiment of 4, we run KRR with a polynomial kernel on data sampled from hyperspheres of varying dimension. The polynomial kernel is K(*x, x*′) = 1 + x +⊤x +′ + (x +⊤x +′) +2 + (x +⊤x +′) +3. We perform this experiment using PyTorch (Paszke et al., 2019) and compute ∇ ˆf numerically. + +In the MSG experiment of 11, we train finite-width FCNs on eigenmodes on the (discretized) unit circle. + +![24_image_0.png](24_image_0.png) + +Figure 11: **Predicted function smoothness matches experiment on the unit** circle. Theoretical MSG predictions (curves) and empirical values for finite networks +(circles) and kernel regression (triangles) for various eigenmodes on the discretized unit circle with M = 256, normalized by the ground-truth mean squared gradient of G(f) = +E +-|f +′(x)| 2= k 2. Because this is a discrete domain, smoothness is computed using a discretization as G( +ˆf) = Ej h| ˆf(xj ) − ˆf(xj+1)| 2i, where xj and xj+1 are neighboring points on the unit circle. + +## G Quantum Mechanical Analogy G.1 Explicit Equation For Κ + +Here we develop a tight analogy between our picture of KRR and the *free Fermi gas*. The free Fermi gas consists of a collection of single-particle orbitals with energies {εi}i connected to a bath of particles at chemical potential µ. In our analogy, we will identify orbital i with kernel eigenmode i. Each orbital will contain zero or one fermions (ni ∈ {0, 1}), with the occupation probability of orbital i given by the Fermi-Dirac distribution to be ⟨ni⟩ = (1 + e εi−µ) +−1. If we identify εi = − ln λi and µ = − ln κ, we find that +⟨ni⟩ = Li: the orbital occupation probability is precisely the eigenmode learnability. + +We can always choose κ so that eigenmode learnabilities sum to n. This is equivalent to the statement that we can always choose the chemical potential µ to ensure the system contains n fermions on average. An eigenmode with eigenvalue λi ≥ κ receives at least half a unit of learnability, and an orbital with energy εi ≤ µ is occupied with probability at least one half. + +In our system of noninteracting fermions, we have thus far identified + +$-\ln\lambda_{i}\Leftrightarrow\varepsilon_{i}$ (energy of orbital $i$) $-\ln\kappa\Leftrightarrow\mu$ (chemical potential) ${\cal L}_{i}\Leftrightarrow\langle n_{i}\rangle$ (expected occupancy of orbital $i$). 
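The dictionary above is an exact algebraic identity: with εi = − ln λi and µ = − ln κ, one has (1 + e^(εi−µ))^(−1) = λi/(λi + κ). The short numerical check below (illustrative only; the powerlaw spectrum and n are hypothetical) also shows that choosing κ so the learnabilities sum to n is the same operation as tuning the chemical potential to fix the mean particle number:

```python
import numpy as np

lam = np.arange(1, 501, dtype=float) ** -1.5       # hypothetical kernel spectrum
n = 40

# choose kappa (equivalently mu = -log kappa) so that the learnabilities / occupancies sum to n
lo, hi = 1e-12, lam.sum()
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if (lam / (lam + mid)).sum() > n else (lo, mid)
kappa = np.sqrt(lo * hi)

eps, mu = -np.log(lam), -np.log(kappa)
occupancy = 1 / (1 + np.exp(eps - mu))             # Fermi-Dirac occupation probabilities
learnability = lam / (lam + kappa)                 # eigenmode learnabilities
print(np.abs(occupancy - learnability).max())      # ~1e-16: the same numbers
print(occupancy.sum())                             # ~n: the "chemical potential" fixes the particle number
```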
What is the value of κ? Observe that

$$\langle n_{i}\rangle={\frac{1}{1+e^{\varepsilon_{i}-\mu}}}$$

(26)

gives the expected occupation number *in the grand canonical ensemble* where the total number of fermions is allowed to fluctuate. By the equivalence of thermodynamic ensembles, we expect to get the same answer for ⟨ni⟩ in the *canonical ensemble* where the total number of fermions does not fluctuate and is exactly n.

This suggests that we should attempt to compute ⟨ni⟩ in the canonical ensemble. By comparing the answer to the grand canonical expression, we can solve for µ and hence for κ.

We assume a total number of orbitals M ≫ n (which may be infinite). Note that the equivalence of ensembles holds only in the thermodynamic limit n ≫ 1, so we must take this limit at the end.

In the canonical ensemble, each microstate is labeled by a list of indices 1 ≤ j1 < ... < jn ≤ M corresponding to the occupied orbitals. Direct computation of the canonical partition function shows that

$$Z_{\mathrm{C}}=m_{n}(e^{-\varepsilon_{1}},...,e^{-\varepsilon_{M}}),$$

(27)

where mn is the so-called "elementary symmetric polynomial of order n":

$$m_{n}(x_{1},...,x_{M})=\sum_{1\leq j_{1}<...<j_{n}\leq M}x_{j_{1}}x_{j_{2}}\cdots x_{j_{n}}.$$

(28)

Comparing these equations, we find that the projectors are the same except that, in Equation 46, there is one additional dimension in the row-space and thus one new basis vector in the projector (provided ξ is orthonormal to the other columns of Φ; otherwise there are zero additional dimensions). In the case δ = 0, this new basis vector cannot decrease ei⊤T(D+)ei, and thus L(D+)(ϕi) ≥ L(D)(ϕi) in the ridgeless case. In the case δ > 0, a singular value decomposition of the projector confirms that the addition still cannot decrease ei⊤T(D+)ei. This shows the desired property. It follows as a corollary that increasing n → n + 1 cannot decrease L(f).

Property (d): ∂/∂λi L(D)(ϕi) ≥ 0, ∂/∂λi L(D)(ϕj) ≤ 0, and ∂/∂δ L(D)(ϕi) ≤ 0.

Proof. Differentiating T(D)jj with respect to a particular λi, we find that

$$\frac{\partial}{\partial\lambda_{i}}{\mathcal{L}}^{(\mathcal{D})}(\phi_{j})=\frac{\partial}{\partial\lambda_{i}}\,{\mathbf{T}}_{j j}^{(\mathcal{D})}=(\delta_{i j}-\lambda_{j}\phi_{j}^{\top}{\mathbf{K}}^{-1}\phi_{i})\phi_{i}^{\top}{\mathbf{K}}^{-1}\phi_{j},$$

(47)

where ϕi⊤ is the ith row of Φ and K = Φ⊤ΛΦ + δIn. Specializing to the case i = j, we note that ϕi⊤K−1ϕi ≥ 0 because K is positive definite, and λiϕi⊤K−1ϕi ≤ 1 because λiϕiϕi⊤ is one of the positive semidefinite summands in K = ∑k λkϕkϕk⊤ + δIn. The first clause of the property follows.

To prove the second clause, we instead specialize to the case i ̸= j, which yields that

$$\frac{\partial}{\partial\lambda_{i}}{\mathcal{L}}^{({\mathcal{D}})}(\phi_{j})=\frac{\partial}{\partial\lambda_{i}}{\mathbf{T}}_{j j}^{({\mathcal{D}})}=-\lambda_{j}\left(\phi_{j}^{\top}{\mathbf{K}}^{-1}\phi_{i}\right)^{2},$$

(48)

which is manifestly nonpositive because λj > 0. The second clause follows.

Differentiating Equation 56 w.r.t. δ yields that ∂/∂δ T(D) = −ΛΦK−2Φ⊤.
We then observe that +∂ ∂δL (D)(ϕi) = e ⊤ i ∂ ∂δ T (D)ei = −λie ⊤ i ΦK−2Φ⊤ei, (49) +which must be nonpositive because λi > 0 and ΦK−2Φ⊤ is manifestly positive definite. The desired property follows. + +Property (e): E(f) ≥ B(f) ≥ (1 − L(f))2. + +Noting that $\|\mathbf{v}\|=1/\|=1$, expected MSE is given by $$\mathcal{E}(f)=\mathbb{E}\big{[}(\mathbf{v}-\hat{\mathbf{v}})^{2}\big{]}=\|\mathbf{v}\|^{2}-2\mathbf{v}^{\top}\mathbb{E}[\hat{\mathbf{v}}]+\mathbb{E}\big{[}\hat{\mathbf{v}}^{2}\big{]}=\underbrace{1-2\mathbf{v}^{\top}\mathbb{E}[\hat{\mathbf{v}}]+\|\mathbb{E}[\hat{\mathbf{v}}]\|^{2}}_{\text{bias}\mathbb{B}(f)}+\underbrace{\text{Var}[\|\mathbf{v}\|]}_{\text{variance}V(f)}\ \.$$ +. (50) +It is apparent that E(f) ≥ B(f). Projecting any vector onto an arbitrary unit vector can only decrease its magnitude, and so + +$${\mathcal{B}}(f)\geq1-2\mathbf{v}^{\top}{\mathbb{E}}[{\hat{\mathbf{v}}}]+{\mathbb{E}}\left[{\hat{\mathbf{v}}}^{\top}\right]\mathbf{v}\mathbf{v}^{\top}{\mathbb{E}}[{\hat{\mathbf{v}}}]=\left(1-\mathbf{v}^{\top}{\mathbb{E}}[{\hat{\mathbf{v}}}]\right)^{2}=(1-{\mathcal{L}}(f))^{2}.\qed$$ +$$(48)$$ +$$(49)$$ +$$(50)$$ +$$(51)$$ + +## H.1 Proof Of Theorem 3.2 (Conservation Of Learnability) + +First, we note that, for any orthogonal basis F on X , + +$$\sum_{f\in{\mathcal{F}}}{\mathcal{L}}^{({\mathcal{D}})}(f)=\sum_{\mathbf{v}\in{\mathcal{V}}}{\frac{\mathbf{v}^{\top}\mathbf{T}^{({\mathcal{D}})}\mathbf{v}}{\mathbf{v}^{\top}\mathbf{v}}},$$ +v⊤v, (52) + +$$\left(52\right)$$ + +where V is an orthogonal set of vectors spanning RM. This is equivalent to Tr -T(D). This trace is given by + +$${\rm Tr}\left[{\bf T}^{({\cal D})}\right]={\rm Tr}\left[\mathbf{\Phi}^{\sf T}{\bf A}\mathbf{\Phi}(\mathbf{\Phi}^{\sf T}{\bf A}\mathbf{\Phi}+\delta{\bf I}_{n})^{-1}\right]={\rm Tr}\left[{\bf K}({\bf K}+\delta{\bf I}_{n})^{-1}\right].\tag{53}$$ + +When δ = 0, this trace simplifies to Tr[In] = n. When δ > 0, it is strictly less than n. This proves the theorem. + +Remark. Theorem 3.2 is a consequence of the fact that T(D)is simply a projector onto an n-dimensional space spanned by the embeddings of the n samples. + +## I Derivations: The Eigenlearning Equations + +In this appendix, we derive the eigenlearning equations for the test risk and covariance of the predicted function. Throughout this derivation, we shall prioritize clarity and interpretability over mathematical rigor, giving a derivation which can be understood without advanced mathematical tools. For a formal derivation using random matrix theory, see Jacot et al. (2020), and for a derivation using a replica calculation of a similar level of rigor as we use here, see Canatar et al. (2021). In the interest of clarity, we begin with a brief summary of the appendix. We begin by discussing the problem setting (Section I.1). We then define the learning transfer matrix formalism (I.2) and use it to set the ridge parameter and noise to zero (I.3). After stating our main approximation (I.4), we find the expectation of the learning transfer matrix (I.5, I.6), using our conservation law to fix κ (I.7, I.8). Taking well-placed derivatives, we bootstrap the expectation of the learning transfer matrix to its second order statistics (I.9) +and add back the ridge and noise (I.10). We conclude with various useful bounds on κ (I.11). + +## I.1 The Data Distribution + +We shall, in this derivation, consider a slightly more general setting than discussed in the main text. In the main text, we supposed the data were drawn i.i.d. 
from a continuous distribution p over R +d. Here, in addition to this case, we shall also consider a setting in which the data are sampled without replacement from a *discrete* set X ⊂ R +d with *|X |* = M. We will refer to this as the "discrete setting" and the setting with continuous distribution as the "continuous setting." As discussed in Appendix B, the discrete setting matches several of our experiments (namely those on the discretized unit circle and on the vertices of the hypercube). + +This discrete setting clearly converges to the continuous setting as M → ∞: when M ≫ n 2, the probability of sampling the same point twice is negligible (and so we can drop the "without replacement" and sample with replacement as in the continuous setting) and, by distributing X throughout R +d with point density proportional to p, we approach sampling from a continuous distribution. Alternatively, we could imagine X +consists of M i.i.d. samples from p, so that, when we later sample n points from X , they are themselves i.i.d. samples from p as M → ∞. It is worth noting that the data distribution in e.g. a computer vision task is in fact discrete because pixel values are discretized, and thus it is reasonable to work with a discrete measure. + +Kernel eigenmodes in the discrete setting are defined as M−1 Px′∈X K(*x, x*′)ϕi(x +′) = λiϕi(x). Note that, in this case, the number of eigenmodes is M, the same number as the cardinality of X . In the continuous setting, the number of eigenmodes is infinite (though there may be only finitely many with nonzero eigenvalues), +but, like Bordelon et al. (2020), we will find it useful to assume that we need only consider a finite (but very large) number M. This serves merely permit us to work with finite matrices (to which the standard tools of linear algebra apply) and to thereby save us the trouble of dealing with infinite and semi-infinite matrices, which require greater care. So long as M is sufficiently large, we do not lose anything in discarding the exceedingly small eigenvalue tail λi>M, as is typical in the study of kernel methods and as our final results will confirm. + +Our subsequent derivation will generally apply to both the discrete and continuous settings. + +## I.2 The Learning Transfer Matrix + +We begin by translating the KRR predictor into the kernel eigenbasis. Because K(*x, x*′) = +PM +i=1 λiϕi(x)ϕi(x +′), we can decompose the empirical kernel matrix as KDD = Φ⊤ΛΦ, where Λ ≡ +diag(λ1*, ..., λ*M) and Φ is the M × n "design matrix" given by Φij ≡ ϕi(xj ). + +The predicted function coefficients ˆv are given by + +$$\hat{\mathbf{v}}_{i}=\langle\phi_{i},\hat{f}\rangle=\lambda_{i}\phi_{i}(\mathbf{K}_{\mathcal{D}\mathcal{D}}+\delta\mathbf{I}_{n})^{-1}\Phi^{\top}\mathbf{v},$$ +$$(54)$$ + +−1Φ⊤v, (54) +where we have used the orthonormality of the eigenfunctions and defined [ϕi]j = ϕi(xj ) to be the i-th row of Φ. Stacking these coefficients into a matrix equation, we find + +$$\hat{\bf v}=\Lambda\Phi\left(\Phi^{\top}\Lambda\Phi+\delta{\bf I}_{n}\right)^{-1}\Phi^{\top}{\bf v}={\bf T}^{({\cal D})}{\bf v},\tag{10}$$ + +where the *learning transfer matrix* + +$$(55)$$ +$${\bf T}^{({\cal D})}\equiv\Lambda\Phi\left(\Phi^{\top}\Lambda\Phi+\delta{\bf I}_{n}\right)^{-1}\Phi^{\top},$$ +$$(56)$$ + +is an M × M matrix, independent of f, that fully describes the model's learning behavior on a training set D15. The learning transfer matrix is the same as the "reconstruction operator" of Jacot et al. (2020) viewed as a finite matrix instead of a linear operator. 
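As a concrete sanity check of Equations 54-56 (a toy sketch in the discrete setting of Section I.1, with a hypothetical kernel on the discretized unit circle), the snippet below computes the predicted eigencoefficients two ways: by running KRR in function space via Equation 1 and projecting the predictor onto the eigenbasis, and by applying the learning transfer matrix of Equation 56 directly to the target coefficients v. The two routes agree up to numerical error.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, delta = 200, 30, 1e-3
grid = 2 * np.pi * np.arange(M) / M
K = np.exp(np.cos(grid[:, None] - grid[None, :]))       # hypothetical kernel on the discretized circle
lam, U = np.linalg.eigh(K / M)                          # Mercer eigensystem under the uniform measure
lam = np.clip(lam, 0.0, None)                           # discard tiny negative roundoff eigenvalues
Phi_full = np.sqrt(M) * U.T                             # Phi_full[i, m] = phi_i(x_m)

idx = rng.choice(M, size=n, replace=False)              # the training set D
v = rng.standard_normal(M) * lam                        # some target eigencoefficients
f = Phi_full.T @ v                                      # the target function on the grid

# Route 1: KRR in function space (Equation 1), then project f_hat back onto the eigenbasis
f_hat = K[:, idx] @ np.linalg.solve(K[np.ix_(idx, idx)] + delta * np.eye(n), f[idx])
v_hat_1 = Phi_full @ f_hat / M                          # <phi_i, f_hat> under the uniform measure

# Route 2: the learning transfer matrix (Equation 56) applied to the target coefficients
Phi = Phi_full[:, idx]                                  # M x n design matrix, Phi[i, j] = phi_i(x_j)
T = (lam[:, None] * Phi) @ np.linalg.solve(Phi.T @ (lam[:, None] * Phi) + delta * np.eye(n), Phi.T)
v_hat_2 = T @ v

print(np.abs(v_hat_1 - v_hat_2).max())                  # tiny: the two routes agree up to numerical error
```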
Full understanding of the statistics of T(D) will give us the statistics of ˆv and thus of ˆf, and so our main objective will be to find the mean and the covariance of T(D). + +## I.3 Setting The Ridge And Noise To Zero + +Our setting includes both a nonzero ridge parameter and nonzero noise. However, it is by this point modern folklore that many small eigenvalues together act as an effective ridge parameter equal to their sum and that power in zero-eigenvalue modes is effectively noise (see e.g. Canatar et al. (2021) for a discussion of these). Inverting these equivalences, we should expect to be able to convert δ into a small increase to many eigenvalues and convert ϵ 2to power in a zero-eigenvalue mode, and thereby permit ourselves to consider neither ridge nor noise in our derivation and add them back at the end. Here we explain our method for doing so. + +The ridge parameter can be viewed as a uniform increase in all eigenvalues. We first observe that M−1Φ⊤Φ = +In in the discrete case. Letting T(D)(Λ; δ) denote the learning transfer matrix with eigenvalue matrix Λ and ridge parameter δ, it follows from Equation 56 and this fact that + +$${\bf T}^{({\cal D})}(\Lambda;\delta)=\Lambda\left(\Lambda+\frac{\delta}{M}{\bf I}_{M}\right)^{-1}{\bf T}^{({\cal D})}\left(\Lambda+\frac{\delta}{M}{\bf I}_{M}\;;\;0\right).$$ + +In the continuous case, M−1Φ⊤Φ → In as M → ∞ (since the columns of Φ are uncorrelated), and since M +is very large in the continuous setting, we are again free to use Equation 57. + +As for the noise, we simply set ϵ 2 = 0 for now and, once we have our final equations, add power ϵ 2to a hypothetical zero-eigenvalue mode. + +15We take this terminology from control theory in which, for a system under study, a "transfer function" maps inputs to outputs, or driving to response. + +$$\left(57\right)$$ + +## I.4 Assumption: The Universality Of Φ + +We wish to take averages over Φ in finding the statistics of T(D). The distribution of Φ is in fact highly structured (reflecting the eigenstructure of the kernel), and we know only that E[ΦijΦij′ ] = δjj′ . We neglect this structure, making the "universality" assumption that we may take Φ to be sampled from a simple Gaussian measure without substantially changing the statistics of T(D). We henceforth assume Φij iid∼ N (0, 1). + +This universality assumption is also made in comparable works studying KRR (implicitly by Bordelon et al. + +(2020); Canatar et al. (2021) and explicitly by Jacot et al. (2020)), and analogous works on linear regression +(e.g. (Hastie et al., 2022; Wu & Xu, 2020; Bartlett et al., 2021)) assume the features are either Gaussian or nearly so (e.g. uncorrelated and sub-Gaussian). See Appendix A for further references to relevant literature. The validity of this approximation is ultimately justified by the close match of ours and others' theories with experiment. Why should this rather strong assumption give good results for realistic cases? This is an active area of research and a satisfying answer has not yet emerged to our knowledge. However, this universality phenomenon has precedent in random matrix theory: many features of the spectra of random matrices depend only on the first and second moments of the matrix entries (within certain conditions), and so if one knows a matrix ensemble of interest obeys this property, then one might as well just study the simplest distribution with the same moments, which is often a Gaussian ensemble. These results are akin to central limit theorems for random matrices. 
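A quick way to build confidence in this assumption (a toy numerical sketch; the powerlaw spectrum, grid size, and cosine basis below are hypothetical choices) is to compare the Monte Carlo average of diag T(D), i.e. the mode learnabilities, computed with a genuinely structured design matrix against the same average computed with i.i.d. Gaussian entries. Here the structured design evaluates an orthonormal cosine basis on an M-point grid at n randomly chosen grid points; at these sizes the two ensembles should agree to within a few percent.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, trials, alpha = 256, 32, 300, 2.0
lam = np.arange(1, M + 1, dtype=float) ** -alpha        # hypothetical powerlaw eigenvalues

# an orthonormal cosine basis on an M-point grid: (1/M) sum_m B[i, m] B[j, m] = delta_ij
m = np.arange(M)
B = np.sqrt(2) * np.cos(np.pi * np.outer(np.arange(M), m + 0.5) / M)
B[0] = 1.0                                              # the k = 0 row is the constant function

def mean_T_diag(draw_design):
    acc = np.zeros(M)
    for _ in range(trials):
        Phi = draw_design()                             # M x n design matrix
        G = Phi.T @ (lam[:, None] * Phi)
        acc += np.diag((lam[:, None] * Phi) @ np.linalg.solve(G, Phi.T))
    return acc / trials

structured = mean_T_diag(lambda: B[:, rng.choice(M, n, replace=False)])  # true-eigenfunction design
gaussian = mean_T_diag(lambda: rng.standard_normal((M, n)))              # Gaussian surrogate
print(np.abs(structured - gaussian).max())              # small: the two design ensembles give similar learnabilities
```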
+ +It is worth noting that, while we assume Gaussian entries for simplicity, we do not actually require a condition quite this strong. We will only really use the facts that the distribution is symmetric and that certain scalar quantities concentrate (I.7). Wei et al. (2022) discuss a "local random matrix law" which we expect would be sufficient on its own. + +## I.5 Vanishing Off-Diagonals Of E-T(D) + +We next observe that + +$$\mathbb{E}_{\Phi}\left[\mathbf{A}\Phi\left(\Phi^{\top}\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\Phi\right)^{-1}\Phi^{\top}\right]=\mathbb{E}_{\Phi}\left[\mathbf{A}\mathbf{U}^{\top}\Phi\left(\Phi^{\top}\mathbf{A}\Phi\right)^{-1}\Phi^{\top}\mathbf{U}\right],\tag{1}$$ + +where U is any orthogonal M × M matrix. Defining U(m) as the matrix such that U +(m) +ab ≡ δab(1 − 2δam), +noting that U(m)ΛU(m) = Λ, and plugging U(m)in as U in Equation 58, we find that + +$$\mathbb{E}\Big{[}\mathbf{T}_{ab}^{(\mathcal{D})}\Big{]}=\left(\left(\mathbf{U}^{(m)}\right)^{\top}\mathbb{E}\Big{[}\mathbf{T}^{(\mathcal{D})}\Big{]}\,\mathbf{U}^{(m)}\right)_{ab}=(-1)^{\delta_{nm}+\delta_{nm}}\mathbb{E}\Big{[}\mathbf{T}_{ab}^{(\mathcal{D})}\Big{]}\,.\tag{59}$$ + +$$(58)$$ + +By choosing m = a, we conclude that E +hT +(D) +ab i= 0 if a ̸= b. + +## I.6 Fixing The Form Of E Ht (D) Ii I + +We now isolate a particular diagonal element of the mean learning transfer matrix. To do so, we write E + +hT +(D) +ii iin terms of λi (the ith eigenvalue), Λ(i) (Λ with its ith row and column removed), ϕ +⊤ +i(the ith row of Φ), and Φ(i) (Φ with its ith row removed). + +Using the Sherman-Morrison matrix inversion formula, we find that Φ⊤ΛΦ−1= Φ⊤ (i)Λ(i)Φ(i) + λiϕiϕ ⊤ i −1 = Φ⊤ (i)Λ(i)Φ(i) −1− λi Φ⊤ (i)Λ(i)Φ(i) −1ϕiϕ ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1 1 + λiϕ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1ϕi . +$$(60)$$ +Inserting this into the expectation of T(D), we find that + +E hT (D) ii i= EΦ(i),ϕi hλiϕ ⊤ i Φ⊤ΛΦ−1ϕi i = EΦ(i),ϕi λiϕ ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1ϕi − λ 2 i ϕ ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1ϕi 2 1 + λiϕ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1ϕi = EΦ(i),ϕi λi λi + ϕ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1ϕi −1 = EΦ(i),ϕi λi λi + κ (Φ(i),ϕi) , + +$$(61)$$ +where $\kappa^{(\mathbf{\Phi}_{(i)},\mathbf{\phi}_{i})}\equiv\kappa_{i}^{(\mathbf{\Phi})}\equiv\left[\mathbf{\phi}_{i}^{\top}\left(\mathbf{\Phi}_{(i)}^{\top}\mathbf{\Lambda}_{(i)}\mathbf{\Phi}_{(i)}\right)^{-1}\mathbf{\phi}_{i}\right]^{-1}$ is a nonnegative scalar. + +## I.7 Concentration And Mode-Independence Of Κ (Φ) I + +Important quantities in statistical mechanics systems are typically self-averaging (i.e. concentrating about their expectation) in the thermodynamic limit. Self-averaging quantities are the focus of random matrix theory, and generalization metrics in machine learning also tend to be self-averaging under most circumstances +(e.g. resampling the data and rerunning a training procedure will typically yield a similar generalization error). Here we argue that κ +(Φ) +iis self-averaging in the thermodynamic limit and can be replaced by its expectation. + +This could be shown rigorously by means of random matrix theory. Here we opt to simply observe that, if κ +(Φ) +i were not self-averaging, then for modes i such that λi ∼ κ +(Φ) +i, T +(D) +ii and thus L +(D)(ϕi) would not be self-averaging either. However, because L +(D)(ϕi) is a generalization metric like MSE, we should in general expect that it is self-averaging at large n. Our experimental results (Figure 2) confirm that fluctuations in L +(D)(ϕi) are indeed small in practice, especially at large n. We thus replace κ +(Φ) +i with its expectation κi ≡ EΦ +hκ +(Φ) i i. 
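This self-averaging claim is easy to probe numerically. The sketch below (illustrative only; the powerlaw spectrum and sizes are hypothetical, and the design is Gaussian as assumed above) samples κi(Φ) repeatedly and reports its relative fluctuation, which shrinks as n grows, consistent with replacing κi(Φ) by its mean at large n.

```python
import numpy as np

rng = np.random.default_rng(0)
M, i, draws = 500, 3, 100                        # number of modes, the mode we single out, Monte Carlo draws
lam = np.arange(1, M + 1, dtype=float) ** -2.0   # hypothetical powerlaw eigenvalues

def kappa_i(n):
    """One sample of kappa_i^(Phi) = [phi_i^T (Phi_(i)^T Lam_(i) Phi_(i))^(-1) phi_i]^(-1)."""
    Phi = rng.standard_normal((M, n))            # universality assumption: i.i.d. Gaussian design
    phi_i, Phi_rest = Phi[i], np.delete(Phi, i, axis=0)
    lam_rest = np.delete(lam, i)
    A = Phi_rest.T @ (lam_rest[:, None] * Phi_rest)
    return 1.0 / (phi_i @ np.linalg.solve(A, phi_i))

for n in [25, 100, 400]:
    samples = np.array([kappa_i(n) for _ in range(draws)])
    print(n, samples.std() / samples.mean())     # relative fluctuation of kappa_i^(Phi); decreases as n grows
```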
+ +We next argue that κiis approximately independent of i, so we can replace it with a mode-independent constant κ. This, too, could be argued rigorously by means of random matrix theory. We opt instead for an eigenmode-removal argument inspired by the cavity method of statistical physics (Del Ferraro et al., 2014). + +Observe that, in the thermodynamic limit, the addition or removal of a single eigenmode should have a negligible effect on any observable and thus on κi. We shall here show that, by inserting one eigenmode and removing another, we can transform κiinto κj for any i and j, implying that κi ≈ κj . + +Assume that the addition or removal of a single eigenmode negligibly affects κi. Concretely, assume that κi ≈ κ ++ +i +, where κ ++ +iis κi computed with the addition of one extra eigenmode of arbitrary eigenvalue. We choose the additional eigenmode to have eigenvalue λi, and we insert it at index i, effectively reinserting the missing mode i into Φ(i) and Λ(i). + +To clarify the random variables in play, we shall adopt a more explicit notation, writing out Φ(i)in terms of its row vectors as Φ(i) = [ϕ1, ..., ϕi−1, ϕi+1*, ...,* ϕM] +⊤. Using this notation, we find upon adding the new eigenmode that + +κi ≡ EΦ(i),ϕi "ϕ ⊤ i Φ⊤ (i)Λ(i)Φ(i) −1ϕi −1#(62) ≡ E{ϕk}M k=1 hϕ ⊤ i [ϕ1, ..., ϕi−1, ϕi+1, ..., ϕM]Λ(i)[ϕ1, ..., ϕi−1, ϕi+1, ..., ϕM] ⊤−1ϕi ≈ κ + i ≡ E{ϕk}M k=1,ϕ˜i hϕ ⊤ i [ϕ1, ..., ϕi−1, ϕ˜i, ϕi+1, ..., ϕM]Λ[ϕ1, ..., ϕi−1, ϕ˜i, ϕi+1, ..., ϕM] ⊤−1ϕi +i−1(63) +i−1, (64) +where Λ is the original eigenvalue matrix and ϕ˜⊤ +iis the design matrix row corresponding to the new mode. + +We can also perform the same manipulation with κj (j ̸= i), this time adding an additional eigenvalue λj at index j, yielding that + +κj ≡ EΦ(j),ϕj "ϕ ⊤ j Φ⊤ (j)Λ(j)Φ(j) −1ϕj −1#(65) ≡ E{ϕk}M k=1 hϕ ⊤ j [ϕ1, ..., ϕj−1, ϕj+1, ..., ϕM]Λ(j)[ϕ1, ..., ϕj−1, ϕj+1, ..., ϕM] ⊤−1ϕj ≈ κ + j ≡ E{ϕk}M k=1,ϕ˜j hϕ ⊤ j [ϕ1, ..., ϕj−1, ϕ˜j , ϕj+1, ..., ϕM]Λ[ϕ1, ..., ϕj−1, ϕ˜j , ϕj+1, ..., ϕM] ⊤−1ϕj +i−1(66) +i−1. +$$(62)$$ +(63) $\binom{64}{2}$ . +(65) $$\left[\begin{array}{c}\mbox{(66)}\\ \mbox{(67)}\end{array}\right].$$ +We now compare Equations 64 and 67. Each is an expectation over M+1 vectors from the isotropic measure . T he statistics of these M +1 vectors are symmetric under exchange, so we are free to relabel them. Equation 64 is identical to Equation 67 upon relabeling ϕi → ϕj , ϕ˜i → ϕi, and ϕj → ϕ˜j , so they are equivalent, and κ ++ +i = κ ++ j +. This in turn implies that κi ≈ κj . In light of this, we now replace all κi with a mode-independent +(but as-yet-unknown) constant κ and conclude that + +$$\boxed{\mathbb{E}\Big[\mathbf{T}_{i j}^{(\mathcal{D})}\Big]=\delta_{i j}\frac{\lambda_{i}}{\lambda_{i}+\kappa}.}$$ + +(68) $$\begin{array}{l}\small\left(\begin{array}{l}\small68\\ \small\end{array}\right)\end{array}$$ . +In summary, we have argued that κiis not significantly changed by the addition or removal of single eigenmodes, and two such changes permit us to transform κiinto κj , so they are therefore approximately equal. + +Our argument here is similar to the cavity method of statistical physics (Del Ferraro et al., 2014), which essentially compares the behavior of a weakly-interacting system with and without a single element. The cavity method is often used as a simpler and more intuitive alternative to the replica method, a role it reprises here (contrast our approach with the replica approach of Canatar et al. (2021)). 
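The boxed result can be checked directly by Monte Carlo simulation (a sketch under the Gaussian-design assumption of Section I.4, with a hypothetical powerlaw spectrum): average diag T(D) over many Gaussian designs and compare against λi/(λi + κ), with κ fixed by the ridgeless constraint ∑i λi/(λi + κ) = n used in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, trials = 200, 20, 500
lam = np.arange(1, M + 1, dtype=float) ** -1.5          # hypothetical eigenvalue spectrum

# fix kappa by the ridgeless constraint sum_i lam_i / (lam_i + kappa) = n (bisection in log space)
lo, hi = 1e-12, lam.sum()
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if (lam / (lam + mid)).sum() > n else (lo, mid)
kappa = np.sqrt(lo * hi)

# Monte Carlo average of diag T^(D) = diag[ Lam Phi (Phi^T Lam Phi)^(-1) Phi^T ] over Gaussian designs
diag = np.zeros(M)
for _ in range(trials):
    Phi = rng.standard_normal((M, n))
    G = Phi.T @ (lam[:, None] * Phi)
    diag += np.diag((lam[:, None] * Phi) @ np.linalg.solve(G, Phi.T)) / trials

print(np.abs(diag - lam / (lam + kappa)).max())         # small, and shrinks further as n grows
```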
## I.8 Determining $\kappa$

We can determine the value of $\kappa$ by observing that, using the ridgeless case of Theorem 3.2,

$$\sum_{i}\mathbb{E}\Big[\mathbf{T}_{ii}^{(\mathcal{D})}\Big]=\sum_{i}\frac{\lambda_{i}}{\lambda_{i}+\kappa}=n.\tag{69}$$

This is a much more straightforward method of fixing this constant than that used in comparable works. The ability to use the ridgeless version of Theorem 3.2 is the main motivation for setting $\delta = 0$ at the start of the derivation.

## I.9 Differentiating W.R.T. $\mathbf{\Lambda}$ To Obtain The Covariance Of $\mathbf{T}^{(\mathcal{D})}$

Here we obtain expressions for the covariance of $\mathbf{T}^{(\mathcal{D})}$ and thereby the covariance of $\hat{\mathbf{v}}$. Remarkably, we shall need no further approximations beyond those already made in approximating $\mathbb{E}\big[\mathbf{T}^{(\mathcal{D})}\big]$, which lends credence to our thesis that understanding modewise learnabilities is sufficient for understanding more interesting statistics of $\hat{f}$.

We begin with a calculation that will later be of use: differentiating both sides of the constraint on $\kappa$ with respect to a particular eigenvalue, we find that

$$\frac{\partial}{\partial\lambda_{i}}\sum_{j=1}^{M}\frac{\lambda_{j}}{\lambda_{j}+\kappa}=\sum_{j=1}^{M}\frac{-\lambda_{j}}{(\lambda_{j}+\kappa)^{2}}\frac{\partial\kappa}{\partial\lambda_{i}}+\frac{\kappa}{(\lambda_{i}+\kappa)^{2}}=0,\tag{70}$$

yielding

$$\frac{\partial\kappa}{\partial\lambda_{i}}=\frac{\kappa^{2}}{q(\lambda_{i}+\kappa)^{2}}\qquad\mathrm{where}\qquad q\equiv\sum_{j=1}^{M}\frac{\kappa\lambda_{j}}{(\lambda_{j}+\kappa)^{2}}.\tag{71}$$

We now factor $\mathbf{T}^{(\mathcal{D})}$ into two matrices as

$$\mathbf{T}^{(\mathcal{D})}=\mathbf{\Lambda}\mathbf{Z},\qquad\mathrm{where}\quad\mathbf{Z}\equiv\Phi\left(\Phi^{\top}\mathbf{\Lambda}\Phi\right)^{-1}\Phi^{\top}.\tag{72}$$

Unlike $\mathbf{T}^{(\mathcal{D})}$, the matrix $\mathbf{Z}$ has the advantage of being symmetric and containing only one factor of $\mathbf{\Lambda}$. We will find the second-order statistics of $\mathbf{Z}$, which will trivially give these statistics for $\mathbf{T}^{(\mathcal{D})}$. From Equation 68, we find that the expectation of $\mathbf{Z}$ is

$$\mathbb{E}[\mathbf{Z}]=(\mathbf{\Lambda}+\kappa\mathbf{I}_{M})^{-1}.\tag{73}$$

We also define a modified $\mathbf{Z}$-matrix $\mathbf{Z}^{(\mathbf{U})} \equiv \Phi\left(\Phi^{\top}\mathbf{U}^{\top}\mathbf{\Lambda}\mathbf{U}\Phi\right)^{-1}\Phi^{\top}$, where $\mathbf{U}$ is an orthogonal $M \times M$ matrix. Because the measure over which $\Phi$ is averaged is rotation-invariant, we can equivalently average over $\tilde{\Phi} \equiv \mathbf{U}\Phi$ with the same measure, giving

$$\mathbb{E}_{\Phi}\left[\mathbf{Z}^{(\mathbf{U})}\right]=\mathbb{E}_{\tilde{\Phi}}\left[\mathbf{U}^{\top}\tilde{\Phi}\left(\tilde{\Phi}^{\top}\mathbf{\Lambda}\tilde{\Phi}\right)^{-1}\tilde{\Phi}^{\top}\mathbf{U}\right]=\mathbb{E}_{\Phi}\left[\mathbf{U}^{\top}\mathbf{Z}\mathbf{U}\right]=\mathbf{U}^{\top}(\mathbf{\Lambda}+\kappa\mathbf{I}_{M})^{-1}\mathbf{U}.\tag{74}$$

It is similarly the case that

$$\mathbb{E}_{\Phi}\Big[(\mathbf{Z}^{(\mathbf{U})})_{ij}(\mathbf{Z}^{(\mathbf{U})})_{k\ell}\Big]=\mathbb{E}_{\Phi}\Big[\big(\mathbf{U}^{\top}\mathbf{Z}\mathbf{U}\big)_{ij}\big(\mathbf{U}^{\top}\mathbf{Z}\mathbf{U}\big)_{k\ell}\Big]\,.\tag{75}$$

Our aim will be to calculate expectations of the form $\mathbb{E}_{\Phi}[\mathbf{Z}_{ij}\mathbf{Z}_{k\ell}]$. With a clever choice of $\mathbf{U}$, a symmetry argument quickly shows that most choices of the four indices make this expression zero. We define $\mathbf{U}^{(m)}_{ab} \equiv \delta_{ab}(1-2\delta_{am})$ and observe that, because $\mathbf{\Lambda}$ is diagonal, $(\mathbf{U}^{(m)})^{\top}\mathbf{\Lambda}\mathbf{U}^{(m)} = \mathbf{\Lambda}$ and thus $\mathbf{Z}^{(\mathbf{U}^{(m)})} = \mathbf{Z}$.
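Before using this symmetry, we pause for a quick numerical sanity check of Equations 69 and 73. The sketch below (illustrative assumptions only: a power-law spectrum, Gaussian design, and modest $M$ and $n$; separate from the experiments in the main text) solves the constraint of Equation 69 for $\kappa$ by bisection and compares a Monte Carlo estimate of $\mathbb{E}[\mathbf{Z}]$ with $(\mathbf{\Lambda}+\kappa\mathbf{I}_M)^{-1}$.

```python
import numpy as np

# Sketch under assumed settings: solve Eq. 69 for kappa by bisection, then
# compare a Monte Carlo estimate of E[Z] with the prediction (Lambda + kappa I)^{-1}.
rng = np.random.default_rng(2)
M, n, trials = 300, 40, 300
lam = np.arange(1, M + 1) ** -2.0          # illustrative power-law eigenvalues

lo, hi = 1e-12, lam.sum()                  # sum_i lam_i/(lam_i + kappa) is monotone in kappa
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if np.sum(lam / (lam + mid)) > n else (lo, mid)
kappa = 0.5 * (lo + hi)

Z_mean = np.zeros((M, M))
for _ in range(trials):
    Phi = rng.standard_normal((M, n))      # rows are phi_k^T
    Z_mean += Phi @ np.linalg.solve(Phi.T @ (lam[:, None] * Phi), Phi.T) / trials

diag_rel_err = np.abs(np.diag(Z_mean) * (lam + kappa) - 1.0)          # vs. Eq. 73 diagonal
off_diag = Z_mean - np.diag(np.diag(Z_mean))
print(f"kappa = {kappa:.4g}")
print(f"median relative error on diagonal of E[Z]: {np.median(diag_rel_err):.3f}")
print(f"mean |off-diagonal| / mean diagonal: {np.abs(off_diag).mean() / np.diag(Z_mean).mean():.3f}")
```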
Equation 75 then yields

$$\mathbb{E}_{\Phi}[\mathbf{Z}_{ij}\mathbf{Z}_{k\ell}]=(-1)^{\delta_{im}+\delta_{jm}+\delta_{km}+\delta_{\ell m}}\,\mathbb{E}_{\Phi}\big[\mathbf{Z}_{ij}\mathbf{Z}_{k\ell}\big]\,,\tag{76}$$

from which it follows that $\mathbb{E}_{\Phi}[\mathbf{Z}_{ij}\mathbf{Z}_{k\ell}] = 0$ if any index value appears an odd number of times. In light of the fact that $\mathbf{Z}_{ij} = \mathbf{Z}_{ji}$, there are only three distinct nontrivial cases to consider:

1. $\mathbb{E}_{\Phi}[\mathbf{Z}_{ii}\mathbf{Z}_{ii}]$,
2. $\mathbb{E}_{\Phi}[\mathbf{Z}_{ij}\mathbf{Z}_{ij}]$ with $i \neq j$, and
3. $\mathbb{E}_{\Phi}[\mathbf{Z}_{ii}\mathbf{Z}_{jj}]$ with $i \neq j$.

We note that we are not using the Einstein convention of summation over repeated indices.

**Cases 1 and 2.** We now consider differentiating $\mathbf{Z}$ with respect to a particular element of the matrix $\mathbf{\Lambda}$. This yields

$$\frac{\partial\mathbf{Z}_{i\ell}}{\partial\mathbf{\Lambda}_{jk}}=-\phi_{i}^{\top}\left(\Phi^{\top}\mathbf{\Lambda}\Phi\right)^{-1}\phi_{j}\phi_{k}^{\top}\left(\Phi^{\top}\mathbf{\Lambda}\Phi\right)^{-1}\phi_{\ell}=-\mathbf{Z}_{ij}\mathbf{Z}_{k\ell},\tag{77}$$

where as before $\phi_i$ is the $i$-th row of $\Phi$. This gives us the useful expression

$$\mathbb{E}[\mathbf{Z}_{ij}\mathbf{Z}_{k\ell}]=-\frac{\partial}{\partial\mathbf{\Lambda}_{jk}}\mathbb{E}[\mathbf{Z}_{i\ell}]\,.\tag{78}$$

We now set $\ell = i$, $k = j$ and evaluate this expression using Equation 73, concluding that

$$\mathbb{E}[\mathbf{Z}_{ij}\mathbf{Z}_{ij}]=\mathbb{E}[\mathbf{Z}_{ij}\mathbf{Z}_{ji}]=-\frac{\partial}{\partial\lambda_{j}}\left(\frac{1}{\lambda_{i}+\kappa}\right)=\frac{1}{(\lambda_{i}+\kappa)^{2}}\left(\delta_{ij}+\frac{\partial\kappa}{\partial\lambda_{j}}\right)\tag{79}$$

and thus

$$\mathrm{Cov}[\mathbf{Z}_{ij},\mathbf{Z}_{ij}]=\mathrm{Cov}[\mathbf{Z}_{ij},\mathbf{Z}_{ji}]=\frac{1}{(\lambda_{i}+\kappa)^{2}}\frac{\partial\kappa}{\partial\lambda_{j}}=\frac{\kappa^{2}}{q(\lambda_{i}+\kappa)^{2}(\lambda_{j}+\kappa)^{2}}.\tag{80}$$

We did not require that $i \neq j$, and so Equation 79 holds for Case 1 as well as Case 2.

**Case 3.** We now aim to calculate $\mathbb{E}[\mathbf{Z}_{ii}\mathbf{Z}_{jj}]$ for $i \neq j$. We might hope to use Equation 78 in calculating $\mathbb{E}[\mathbf{Z}_{ii}\mathbf{Z}_{jj}]$, but this approach is stymied by the fact that we would need to take a derivative with respect to $\mathbf{\Lambda}_{ij}$, whereas we only have an approximation for $\mathbf{Z}$ for diagonal $\mathbf{\Lambda}$. We can circumvent this by means of $\mathbf{Z}^{(\mathbf{U})}$, using an approach which is equivalent to rediagonalizing $\mathbf{\Lambda}$ after a symmetric perturbation. From the definition of $\mathbf{Z}^{(\mathbf{U})}$, we find that

$$\left(\frac{\partial}{\partial\mathbf{U}_{ij}}-\frac{\partial}{\partial\mathbf{U}_{ji}}\right)\mathbf{Z}_{ij}^{(\mathbf{U})}\Bigg|_{\mathbf{U}=\mathbf{I}_{M}}=-\phi_{i}^{\top}\left(\Phi^{\top}\mathbf{\Lambda}\Phi\right)^{-1}\left[\phi_{j}\lambda_{i}\phi_{i}^{\top}-\phi_{i}\lambda_{j}\phi_{j}^{\top}+\phi_{i}\lambda_{i}\phi_{j}^{\top}-\phi_{j}\lambda_{j}\phi_{i}^{\top}\right]\left(\Phi^{\top}\mathbf{\Lambda}\Phi\right)^{-1}\phi_{j}=\left(\lambda_{j}-\lambda_{i}\right)\left(\mathbf{Z}_{ij}^{2}+\mathbf{Z}_{ii}\mathbf{Z}_{jj}\right).\tag{81}$$

Differentiating with respect to both $\mathbf{U}_{ij}$ and $\mathbf{U}_{ji}$ with opposite signs ensures that the derivative is taken within the manifold of orthogonal matrices.
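The covariance formula of Equation 80 can be probed the same way as the mean. The sketch below (same illustrative assumptions as before: a power-law spectrum, Gaussian design, and an arbitrary index pair) compares the empirical variance of a single off-diagonal entry $\mathbf{Z}_{ij}$ against the prediction $\kappa^2/\big(q(\lambda_i+\kappa)^2(\lambda_j+\kappa)^2\big)$.

```python
import numpy as np

# Sketch under assumed settings: Monte Carlo check of Eq. 80 for one index pair.
rng = np.random.default_rng(3)
M, n, trials, i, j = 300, 40, 2000, 3, 7
lam = np.arange(1, M + 1) ** -2.0

lo, hi = 1e-12, lam.sum()                      # solve sum_i lam_i/(lam_i + kappa) = n (Eq. 69)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if np.sum(lam / (lam + mid)) > n else (lo, mid)
kappa = 0.5 * (lo + hi)
q = np.sum(kappa * lam / (lam + kappa) ** 2)   # Eq. 71

z_ij = []
for _ in range(trials):
    Phi = rng.standard_normal((M, n))          # rows are phi_k^T
    A = Phi.T @ (lam[:, None] * Phi)           # Phi^T Lambda Phi
    z_ij.append(Phi[i] @ np.linalg.solve(A, Phi[j]))   # the single entry Z_ij

pred = kappa**2 / (q * (lam[i] + kappa) ** 2 * (lam[j] + kappa) ** 2)
print(f"empirical Var[Z_ij] = {np.var(z_ij):.3g},  predicted (Eq. 80) = {pred:.3g}")
```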
Now, using Equation 74, we find that

$$\left(\frac{\partial}{\partial\mathbf{U}_{ij}}-\frac{\partial}{\partial\mathbf{U}_{ji}}\right)\mathbb{E}\left[\mathbf{Z}_{ij}^{(\mathbf{U})}\right]\Bigg|_{\mathbf{U}=\mathbf{I}_{M}}=\left(\frac{\partial}{\partial\mathbf{U}_{ij}}-\frac{\partial}{\partial\mathbf{U}_{ji}}\right)\left[\mathbf{U}^{\top}(\mathbf{\Lambda}+\kappa\mathbf{I}_{M})^{-1}\mathbf{U}\right]_{ij}\Bigg|_{\mathbf{U}=\mathbf{I}_{M}}=\frac{1}{\lambda_{i}+\kappa}-\frac{1}{\lambda_{j}+\kappa}.\tag{82}$$

Taking the expectation of Equation 81, plugging in Equation 79 for $\mathbb{E}\big[\mathbf{Z}_{ij}^{2}\big]$, comparing to Equation 82, and performing some algebra, we conclude that

$$\mathbb{E}[\mathbf{Z}_{ii}\mathbf{Z}_{jj}]=\frac{1}{(\lambda_{i}+\kappa)(\lambda_{j}+\kappa)}-\frac{\kappa^{2}}{q(\lambda_{i}+\kappa)^{2}(\lambda_{j}+\kappa)^{2}}\tag{83}$$

and thus that $\mathbf{Z}_{ii}$, $\mathbf{Z}_{jj}$ are anticorrelated with covariance

$$\mathrm{Cov}[\mathbf{Z}_{ii},\mathbf{Z}_{jj}]=-\frac{\kappa^{2}}{q(\lambda_{i}+\kappa)^{2}(\lambda_{j}+\kappa)^{2}}.\tag{84}$$

Cases 1-3 can be summarized as

$$\mathrm{Cov}[\mathbf{Z}_{ij},\mathbf{Z}_{k\ell}]=\frac{\kappa^{2}\left(\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk}-\delta_{ij}\delta_{k\ell}\right)}{q(\lambda_{i}+\kappa)(\lambda_{j}+\kappa)(\lambda_{k}+\kappa)(\lambda_{\ell}+\kappa)}.\tag{85}$$

Using the fact that $\mathbf{T}_{ij}^{(\mathcal{D})} = \lambda_i\mathbf{Z}_{ij}$, defining $\mathcal{L}_i \equiv \lambda_i(\lambda_i+\kappa)^{-1}$, and noting that $q = \sum_i \mathcal{L}_i(1-\mathcal{L}_i) = n - \sum_m \mathcal{L}_m^2$, we find that

$$\boxed{\mathrm{Cov}\left[\mathbf{T}_{ij}^{(\mathcal{D})},\mathbf{T}_{k\ell}^{(\mathcal{D})}\right]=\frac{\mathcal{L}_{i}(1-\mathcal{L}_{j})\mathcal{L}_{k}(1-\mathcal{L}_{\ell})}{n-\sum_{m=1}^{M}\mathcal{L}_{m}^{2}}\left(\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk}-\delta_{ij}\delta_{k\ell}\right).}\tag{86}$$

Noting that $\mathcal{E}(f) = \mathbb{E}\big[|\mathbf{v} - \hat{\mathbf{v}}|^2\big]$, recalling that $\hat{\mathbf{v}} = \mathbf{T}^{(\mathcal{D})}\mathbf{v}$, and using Equation 86 to evaluate a sum over eigenmodes, we find that expected MSE is given by

$$\mathcal{E}(f)=\frac{n}{n-\sum_{m}\mathcal{L}_{m}^{2}}\sum_{i}(1-\mathcal{L}_{i})^{2}\,\mathbf{v}_{i}^{2}.\tag{87}$$

Taking a sum over indices of $\mathbf{v}$, we find that the covariance of the predicted function can be written simply in terms of MSE as

$$\mathrm{Cov}[\hat{\mathbf{v}}_{i},\hat{\mathbf{v}}_{j}]=\frac{\mathcal{L}_{i}^{2}\,\mathcal{E}(f)}{n}\,\delta_{ij}.\tag{88}$$

## I.10 Adding Back The Ridge And Noise

We have thus far assumed $\delta = 0$. We can now add the ridge parameter back using Equation 57. To add a ridge parameter $\delta$, we need merely replace $\lambda_i \rightarrow \lambda_i + \frac{\delta}{M}$ and then change $\mathbf{T}_{ij}^{(\mathcal{D})} \rightarrow \lambda_i\big(\lambda_i + \frac{\delta}{M}\big)^{-1}\mathbf{T}_{ij}^{(\mathcal{D})}$.
This yields

$$\mathbb{E}\Big[\mathbf{T}_{ij}^{(\mathcal{D})}\Big]=\frac{\delta_{ij}\,\lambda_{i}}{\lambda_{i}+\frac{\delta}{M}+\kappa}\tag{89}$$

and

$$\mathrm{Cov}\left[\mathbf{T}_{ij}^{(\mathcal{D})},\mathbf{T}_{k\ell}^{(\mathcal{D})}\right]=\frac{\lambda_{i}\lambda_{k}\,\kappa\left(\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk}-\delta_{ij}\delta_{k\ell}\right)}{q\left(\lambda_{i}+\frac{\delta}{M}+\kappa\right)\left(\lambda_{j}+\frac{\delta}{M}+\kappa\right)\left(\lambda_{k}+\frac{\delta}{M}+\kappa\right)\left(\lambda_{\ell}+\frac{\delta}{M}+\kappa\right)},\tag{90}$$

where $q\equiv\sum_{j=1}^{M}\frac{\lambda_{j}+\frac{\delta}{M}}{\left(\lambda_{j}+\frac{\delta}{M}+\kappa\right)^{2}}$ and $\kappa\geq0$ satisfies

$$\sum_{i}\frac{\lambda_{i}+\frac{\delta}{M}}{\lambda_{i}+\frac{\delta}{M}+\kappa}=n.\tag{91}$$

Taking either $\delta = 0$ or $M \rightarrow \infty$, we find that

$$\boxed{n=\sum_{i}\frac{\lambda_{i}}{\lambda_{i}+\kappa}+\frac{\delta}{\kappa}}\tag{92}$$

and the mean and covariance of $\mathbf{T}^{(\mathcal{D})}$ are again given by Equations 68 and 86.

To summarize this simplification: in the continuous setting ($M \rightarrow \infty$), we recover the results of prior work. In the discrete setting with zero ridge, these expressions apply unmodified. In the discrete setting with positive ridge, these expressions contain corrections with perturbative parameter $\frac{\delta}{M}$. In the main text, we report the expressions with $\frac{\delta}{M} = 0$, and our experiments obey this condition.

Modifying $\mathbf{v}$ so as to place power $\epsilon^2$ into an eigenmode with zero eigenvalue, Equation 87 for MSE becomes

$$\mathcal{E}(f)=\mathcal{E}_{0}\left(\sum_{i}(1-\mathcal{L}_{i})^{2}\mathbf{v}_{i}^{2}+\epsilon^{2}\right).\tag{93}$$

## I.11 Properties Of $\kappa$

In experimental settings, $\kappa$ is in general easy to find numerically, but for theoretical study, we anticipate it being useful to have some analytical bounds on $\kappa$ in order to, for example, prove that certain eigenmodes are or are not asymptotically learned for particular spectra. To that end, the following lemma gives some properties of $\kappa$.

**Lemma I.1.** *For $\kappa \geq 0$ solving $\sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i+\kappa} + \frac{\delta}{\kappa} = n$, with positive eigenvalues $\{\lambda_i\}_{i=1}^{M}$ ordered from greatest to least, the following properties hold:*

(a) *$\kappa = \infty$ when $n = 0$, and $\kappa = 0$ when $n \rightarrow M$ and $\delta = 0$.*
(b) *$\kappa$ is strictly decreasing with $n$.*
(c) *$\kappa \leq \frac{1}{n-\ell}\left(\delta + \sum_{i=\ell+1}^{M}\lambda_i\right)$ for all $\ell \in \{0, ..., n-1\}$.*
(d) *$\kappa \geq \lambda_\ell\left(\frac{\ell}{n} - 1\right)$ for all $\ell \in \{n, ..., M\}$.*

*Proof of property (a):* Because $\sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i+\kappa} + \frac{\delta}{\kappa}$ is strictly decreasing with $\kappa$ for $\kappa \geq 0$, there can only be one solution for a given $n$. The first statement follows by inspection, and the second follows by inspection and our assumption that all eigenvalues are strictly positive.

*Proof of property (b):* Differentiating the constraint on $\kappa$ with respect to $n$ yields

$$\left[\sum_{i=1}^{M}\frac{-\lambda_{i}}{(\lambda_{i}+\kappa)^{2}}-\frac{\delta}{\kappa^{2}}\right]\frac{d\kappa}{dn}=1,\quad\text{which implies that}\quad\frac{d\kappa}{dn}=-\left[\sum_{i=1}^{M}\frac{\lambda_{i}}{(\lambda_{i}+\kappa)^{2}}+\frac{\delta}{\kappa^{2}}\right]^{-1}<0.\tag{94}$$

*Proof of property (c):* We observe that $n = \sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i+\kappa} + \frac{\delta}{\kappa} \leq \ell + \sum_{i=\ell+1}^{M}\frac{\lambda_i}{\kappa} + \frac{\delta}{\kappa}$. The desired property follows.

*Proof of property (d):* We set $\delta = 0$ and consider replacing $\lambda_i$ with $\lambda_\ell$ if $i \leq \ell$ and with $0$ if $i > \ell$. Noting that this does not increase any term in the sum, we find that $n = \sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i+\kappa} \geq \sum_{i=1}^{\ell}\frac{\lambda_\ell}{\lambda_\ell+\kappa} = \frac{\ell\lambda_\ell}{\lambda_\ell+\kappa}$. The desired property in the ridgeless case follows. A positive ridge parameter only increases $\kappa$, so the property holds in general. We note that a positive ridge parameter can be incorporated into the bound, giving

$$\kappa\geq\frac{1}{2n}\left[(\ell-n)\lambda_{\ell}+\delta+\sqrt{\left((\ell-n)\lambda_{\ell}+\delta\right)^{2}+4n\delta\lambda_{\ell}}\,\right].\qquad\square\tag{95}$$

We also note that, as observed by Jacot et al. (2020) and Spigler et al. (2020), the asymptotic scaling of $\kappa$ can be fixed if the kernel eigenvalues follow a power law spectrum. Specifically, if $\lambda_i \sim i^{-\alpha}$ for some $\alpha > 1$, then Jacot et al. (2020)16 show that

$$\kappa=\Theta\!\left(\delta\,n^{-1}+n^{-\alpha}\right).\tag{96}$$
16 Jacot et al. (2020) scale the ridge parameter proportionally to n in their definition of kernel ridge regression; our reproduction of their result is translated into our scaling convention.
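Both Lemma I.1 and the power-law scaling above are straightforward to check numerically. The sketch below is an illustrative stand-alone check with an assumed spectrum $\lambda_i = i^{-\alpha}$, $\alpha = 2$, and $\delta = 0$ (separate from the experiments in the main text); it solves the constraint for $\kappa$ at several values of $n$ and prints the bound of property (c) alongside the $n^{-\alpha}$ scaling of Equation 96.

```python
import numpy as np

# Illustrative check of Lemma I.1(c) and Eq. 96 under an assumed power-law
# spectrum lambda_i = i^(-alpha) with alpha = 2 and delta = 0.
M, alpha, delta = 5000, 2.0, 0.0
lam = np.arange(1, M + 1) ** -alpha

def solve_kappa(n):
    # Bisect the monotone constraint sum_i lam_i/(lam_i + kappa) + delta/kappa = n.
    lo, hi = 1e-15, lam.sum() + delta + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        val = np.sum(lam / (lam + mid)) + delta / mid
        lo, hi = (mid, hi) if val > n else (lo, mid)
    return 0.5 * (lo + hi)

for n in (10, 30, 100, 300):
    kappa = solve_kappa(n)
    ell = n // 2                               # any ell in {0, ..., n-1} works in property (c)
    bound_c = (delta + lam[ell:].sum()) / (n - ell)
    print(f"n = {n:4d}: kappa = {kappa:.3e}, n^(-alpha) = {n**-alpha:.3e}, bound (c) = {bound_c:.3e}")
```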