# Deep Operator Learning Lessens The Curse Of Dimensionality For PDEs

Ke Chen (kechen@umd.edu), Department of Mathematics, University of Maryland, College Park

Chunmei Wang (chunmei.wang@ufl.edu), Department of Mathematics, University of Florida

Haizhao Yang (hzyang@umd.edu), Department of Mathematics, University of Maryland, College Park

Reviewed on OpenReview: *https://openreview.net/forum?id=zmBFzuT2DN*

## Abstract
Deep neural networks (DNNs) have achieved remarkable success in numerous domains, and their application to PDE-related problems has been rapidly advancing. This paper provides an estimate for the generalization error of learning Lipschitz operators over Banach spaces using DNNs, with applications to various PDE solution operators. The goal is to specify the DNN width, depth, and number of training samples needed to guarantee a certain testing error. Under mild assumptions on data distributions or operator structures, our analysis shows that deep operator learning can have a relaxed dependence on the discretization resolution of PDEs and, hence, lessen the curse of dimensionality in many PDE-related problems, including elliptic equations, parabolic equations, and Burgers equations. Our results also give insights into discretization invariance in operator learning.

## 1 Introduction
Nonlinear operator learning aims to learn a mapping from a parametric function space to the solution space of specific partial differential equation (PDE) problems. It has gained significant importance in various fields, including order reduction Peherstorfer & Willcox (2016), parametric PDEs Lu et al. (2021b); Li et al. (2021), inverse problems Khoo & Ying (2019), and imaging problems Deng et al. (2020); Qiao et al. (2021); Tian et al. (2020). Deep neural networks (DNNs) have emerged as state-of-the-art models in numerous machine learning tasks Graves et al. (2013); Miotto et al. (2018); Krizhevsky et al. (2017), attracting attention for their applications to engineering problems where PDEs have long been the dominant model. Consequently, deep operator learning has emerged as a powerful tool for nonlinear PDE operator learning Lanthaler et al. (2022); Li et al. (2021); Nelsen & Stuart (2021); Khoo & Ying (2019). The typical approach involves discretizing the computational domain and representing functions as vectors that tabulate function values on the discretization mesh. A DNN is then employed to learn the map between finite-dimensional spaces. While this method has been successful in various applications Lin et al. (2021); Cai et al. (2021), its computational cost is high due to its dependence on the mesh. This implies that retraining of the DNN is necessary when using a different domain discretization. To address this issue, methods such as Li et al. (2021); Lu et al. (2022); Ong et al. (2022) have been proposed for problems with sparsity structures and discretization-invariance properties.

Another line of work for learning PDE operators uses generative models, including generative adversarial networks (GANs) and their variants Rahman et al. (2022); Botelho et al. (2020); Kadeethum et al. (2021) and diffusion models Wang et al. (2023). These methods can handle discontinuous features, whereas neural-network-based methods are mainly applied to operators with continuous input and output. However, most generative models for PDE operator learning are limited to empirical study, and theoretical foundations are lacking.

Despite the empirical success of deep operator learning in numerous applications, its statistical learning theory is still limited, particularly when dealing with infinite-dimensional ambient spaces. The learning theory generally comprises three components: approximation theory, optimization theory, and generalization theory. Approximation theory quantifies the expressibility of various DNNs as surrogates for a class of operators. The universal approximation theory for certain classes of functions Cybenko (1989); Hornik (1991) forms the basis of the approximation theory for DNNs. It has been extended to other function classes, such as continuous functions Shen et al. (2019); Yarotsky (2021); Shen et al. (2021), certain smooth functions Yarotsky (2018); Lu et al. (2021a); Suzuki (2018); Adcock et al. (2022), and functions with integral representations Barron (1993); Ma et al. (2022). However, compared to the abundance of theoretical works on approximation theory for high-dimensional functions, the approximation theory for operators, especially between infinite-dimensional spaces, is quite limited. Seminal quantitative results have been presented in Kovachki et al. (2021); Lanthaler et al. (2022). In contrast to approximation theory, generalization theory aims to address the following question:

*How many training samples are required to achieve a certain testing error?*

This question has been addressed by numerous statistical learning theory works for function regression using neural network structures Bauer & Kohler (2019); Chen et al. (2022); Farrell et al. (2021); Kohler & Krzyżak (2005); Liu et al. (2021); Nakada & Imaizumi (2020); Schmidt-Hieber (2020). In a $d$-dimensional learning problem, the typical error decay rate is on the order of $n^{-O(1/d)}$ as the number of samples $n$ increases. The fact that the exponent is very small for large dimensionality $d$ is known as the *curse of dimensionality* (CoD) Stone (1982). Recent studies have demonstrated that DNNs can achieve faster decay rates when dealing with target functions or function domains that possess low-dimensional structures Chen et al. (2019; 2022); Cloninger & Klock (2020); Nakada & Imaizumi (2020); Schmidt-Hieber (2019); Shen et al. (2019). In such cases, the decay rate becomes independent of the domain discretization, thereby lessening the CoD Bauer & Kohler (2019); Chkifa et al. (2015); Suzuki (2018). However, it is worth noting that most existing works primarily focus on functions between finite-dimensional spaces. To the best of our knowledge, previous results de Hoop et al. (2021); Lanthaler et al. (2022); Lu et al. (2021b); Liu et al. (2022) provide the only generalization analysis for infinite-dimensional functions. Our work extends the findings of Liu et al. (2022) by generalizing them to Banach spaces and conducting new analyses within the context of PDE problems.
The removal of the inner-product assumption is crucial in our research, enabling us to apply the estimates to various PDE problems where previous results do not apply. This is mainly because the suitable spaces for the functions involved in most practical PDE examples are Banach spaces, where an inner product is not well-defined. Examples include the conductivity media function in the parametric elliptic equation, the drift force field in the transport equation, and the solution to the viscous Burgers equation that models continuum fluid. See more details in Section 3.

## 1.1 Our Contributions
The main objective of this study is to investigate the reasons behind the reduction of the CoD in PDE-related problems achieved by deep operator learning. We observe that many PDE operators exhibit a composition structure consisting of linear transformations and element-wise nonlinear transformations with a small number of inputs. DNNs are particularly effective in learning such structures due to their ability to evaluate networks point-wise. We provide an analysis of the approximation and generalization errors and apply it to various PDE problems to determine the extent to which the CoD can be mitigated. Our contributions can be summarized as follows:

◦ Our work provides a theoretical explanation of why the CoD is lessened in PDE operator learning. We extend the generalization theory in Liu et al. (2022) from Hilbert spaces to Banach spaces and apply it to several PDE examples. Such an extension holds great significance as it overcomes a limitation in previous works, which primarily focused on Hilbert spaces and therefore lacked applicability in machine learning for practical PDE problems. Compared to Liu et al. (2022), our estimate circumvents the inner-product structure at the price of a non-decaying noise estimate. This is a tradeoff of accuracy for generalization to Banach spaces. Our work tackles a broader range of PDE operators that are defined on Banach spaces. In particular, five PDE examples are given in Section 3 whose solution spaces are not Hilbert spaces.

◦ Unlike existing works such as Lanthaler et al. (2022), which only offer posterior analysis, we provide an a priori estimate for PDE operator learning. Our estimate does not make any assumptions about the trained neural network and explicitly quantifies the required number of data samples and network sizes based on a given testing error criterion. Furthermore, we identify two key structures that are commonly present in PDE operators: low-dimensional structures and low-complexity structures (described in Assumption 5 and Assumption 6, respectively). We demonstrate that both structures exhibit a sample complexity that depends on the essential dimension of the PDE itself and only weakly on the PDE discretization size. This finding provides insights into why deep operator learning effectively mitigates the CoD.

◦ Most operator learning theories consider fixed-size neural networks. However, it is important to account for neural networks with discretization-invariance properties, allowing training and evaluation on PDE data of various resolutions. Our theory is flexible and can be applied to derive error estimates for discretization-invariant neural networks.

## 1.2 Organization
In Section 2, we introduce the neural network structures and outline the assumptions made on the PDE operator. Furthermore, we present the main results for generic PDE operators and for PDE operators that have a low-dimensional or low-complexity structure. At the end of the section, we show that the main results are also valid for discretization-invariant neural networks. In Section 3, we show that the assumptions are satisfied and provide explicit estimates for five different PDEs. Finally, in Section 4, we discuss the limitations of our current work.

## 2 Problem Setup And Main Results
Notations. In a general Banach space $\mathcal{X}$, we represent its associated norm as $\|\cdot\|_{\mathcal{X}}$. Additionally, we denote by $E_{\mathcal{X}}^{n}$ the encoder mapping from the Banach space $\mathcal{X}$ to a Euclidean space $\mathbb{R}^{d_{\mathcal{X}}}$, where $d_{\mathcal{X}}$ denotes the encoding dimension. Similarly, we denote the decoder for $\mathcal{X}$ as $D_{\mathcal{X}}^{n}:\mathbb{R}^{d_{\mathcal{X}}}\to\mathcal{X}$. The $\Omega$ notation for neural network parameters in the main results (Section 2.2) denotes a lower bound estimate; that is, $x=\Omega(y)$ means there exists a constant $C>0$ such that $x\geq Cy$. The $O$ notation denotes an upper bound estimate; that is, $x=O(y)$ means there exists a constant $C>0$ such that $x\leq Cy$.
## 2.1 Operator Learning And Loss Functions

We consider a general nonlinear PDE operator $\Phi:\mathcal{X}\ni u\mapsto v\in\mathcal{Y}$ over Banach spaces $\mathcal{X}$ and $\mathcal{Y}$. In this context, the input variable $u$ typically represents the initial condition, the boundary condition, or a source of a specific PDE, while the output variable $v$ corresponds to the PDE solution or partial measurements of the solution. Our objective is to train a DNN denoted as $\phi(u;\theta)$ to approximate the target nonlinear operator $\Phi$ using a given data set $S=\{(u_i,v_i),\ v_i=\Phi(u_i)+\varepsilon_i,\ i=1,\ldots,2n\}$. The data set $S$ is divided into $S_1^n=\{(u_i,v_i),\ v_i=\Phi(u_i)+\varepsilon_i,\ i=1,\ldots,n\}$, which is used to train the encoders and decoders, and a training data set $S_2^n=\{(u_i,v_i),\ v_i=\Phi(u_i)+\varepsilon_i,\ i=n+1,\ldots,2n\}$. Both $S_1^n$ and $S_2^n$ are generated independently and identically distributed (i.i.d.) from a random measure $\gamma$ over $\mathcal{X}$, with $\varepsilon_i$ representing random noise.

In practical implementations, DNNs operate on finite-dimensional spaces. Therefore, we utilize empirical encoder-decoder pairs, namely $E_{\mathcal{X}}^{n}:\mathcal{X}\to\mathbb{R}^{d_{\mathcal{X}}}$ and $D_{\mathcal{X}}^{n}:\mathbb{R}^{d_{\mathcal{X}}}\to\mathcal{X}$, to discretize $u\in\mathcal{X}$. Similarly, we employ empirical encoder-decoder pairs $E_{\mathcal{Y}}^{n}:\mathcal{Y}\to\mathbb{R}^{d_{\mathcal{Y}}}$ and $D_{\mathcal{Y}}^{n}:\mathbb{R}^{d_{\mathcal{Y}}}\to\mathcal{Y}$ for $v\in\mathcal{Y}$. These encoder-decoder pairs are trained using the available data set $S_1^n$ or manually designed such that $D_{\mathcal{X}}^{n}\circ E_{\mathcal{X}}^{n}\approx\mathrm{Id}_{\mathcal{X}}$ and $D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}\approx\mathrm{Id}_{\mathcal{Y}}$. A common example of empirical encoders and decoders is the discretization operator, which

![3_image_0.png](3_image_0.png)

Figure 1: The target nonlinear operator $\Phi:u\mapsto v$ is approximated by compositions of an encoder $E_{\mathcal{X}}^{n}$, a DNN function $\Gamma$, and a decoder $D_{\mathcal{Y}}^{n}$. The finite-dimensional operator $\Gamma$ is learned via the optimization problem equation 1.

maps a function to a vector representing function values at discrete mesh points. Other examples include finite element projections and spectral methods, which map functions to coefficients of corresponding basis functions. Our goal is to approximate the encoded PDE operator using a finite-dimensional operator $\Gamma$ so that $\Phi\approx D_{\mathcal{Y}}^{n}\circ\Gamma\circ E_{\mathcal{X}}^{n}$. Refer to Figure 1 for an illustration. This approximation is achieved by solving the following optimization problem:

$$\Gamma_{\rm NN}\in\mathop{\rm argmin}_{\Gamma\in{\cal F}_{\rm NN}}\frac{1}{n}\sum_{i=1}^{n}\|\Gamma\circ E_{\cal X}^{n}(u_{i})-E_{\cal Y}^{n}(v_{i})\|_{2}^{2}.\tag{1}$$
Here the function space $\mathcal{F}_{\rm NN}$ represents a collection of rectified linear unit (ReLU) feedforward DNNs $f(x)$, which are defined as follows:

$$f(x)=W_{L}\phi_{L-1}\circ\phi_{L-2}\circ\cdots\circ\phi_{1}(x)+b_{L}\,,\quad\phi_{i}(x):=\sigma(W_{i}x+b_{i})\,,\ i=1,\ldots,L-1\,,\tag{2}$$

where $\sigma$ is the ReLU activation function $\sigma(x)=\max\{x,0\}$, and $W_i$ and $b_i$ represent weight matrices and bias vectors, respectively. The ReLU function is evaluated pointwise on all entries of the input vector. In practice, the functional space $\mathcal{F}_{\rm NN}$ is selected as a compact set comprising all ReLU feedforward DNNs. This work investigates two distinct architectures within $\mathcal{F}_{\rm NN}$. The first architecture is defined as follows:

$$\mathcal{F}_{\rm NN}(d,L,p,K,\kappa,M)=\Big\{\Gamma=[f_{1},f_{2},\ldots,f_{d}]^{\top}:\ \text{for each }k=1,\ldots,d,\ f_{k}(x)\ \text{is in the form of (2)}$$
$$\text{with }L\ \text{layers, width bounded by }p,\ \|f_{k}\|_{\infty}\leq M,\ \|W_{l}\|_{\infty,\infty}\leq\kappa,\ \|b_{l}\|_{\infty}\leq\kappa,\ \sum_{l=1}^{L}\|W_{l}\|_{0}+\|b_{l}\|_{0}\leq K\Big\},\tag{3}$$

where $\|f\|_{\infty}=\max_{x}|f(x)|$, $\|W\|_{\infty,\infty}=\max_{i,j}|W_{i,j}|$, and $\|b\|_{\infty}=\max_{i}|b_{i}|$ for any function $f$, matrix $W$, and vector $b$, with $\|\cdot\|_{0}$ denoting the number of nonzero elements of its argument. The functions in this architecture satisfy parameter bounds with limited cardinalities. The second architecture relaxes some of the constraints of the first architecture; i.e.,

$$\mathcal{F}_{\rm NN}(d,L,p,M)=\Big\{\Gamma=[f_{1},f_{2},\ldots,f_{d}]^{\top}:\ \text{for each }k=1,\ldots,d,\ f_{k}(x)\ \text{is in the form of (2)}$$
$$\text{with }L\ \text{layers, width bounded by }p,\ \|f_{k}\|_{\infty}\leq M\Big\}.\tag{4}$$

When there is no ambiguity, we use the notation $\mathcal{F}_{\rm NN}$ and omit its associated parameters.
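To make the pipeline concrete, the following is a minimal sketch of the optimization problem equation 1 in PyTorch, assuming point-evaluation encoders so that the encoded samples are plain vectors; the dimensions, data, and optimizer settings are illustrative placeholders rather than choices prescribed by the analysis.

```python
import torch
import torch.nn as nn

d_x, d_y = 64, 64      # encoding dimensions d_X and d_Y (illustrative)
depth, width = 4, 256  # L and p for the ReLU network Gamma

# Gamma in F_NN(d_Y, L, p, M): a plain ReLU feedforward network as in equation (2).
layers = [nn.Linear(d_x, width), nn.ReLU()]
for _ in range(depth - 2):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, d_y)]
gamma = nn.Sequential(*layers)

# U and V stand for the already-encoded samples E^n_X(u_i) and E^n_Y(v_i);
# random placeholders are used here instead of real PDE data.
n = 512
U = torch.randn(n, d_x)
V = torch.randn(n, d_y)

def empirical_risk(net, enc_u, enc_v):
    # (1/n) * sum_i || Gamma(E^n_X(u_i)) - E^n_Y(v_i) ||_2^2, as in equation (1)
    return ((net(enc_u) - enc_v) ** 2).sum(dim=1).mean()

opt = torch.optim.Adam(gamma.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = empirical_risk(gamma, U, V)
    loss.backward()
    opt.step()
```

Note that the magnitude and sparsity constraints of equation 3 are not enforced in this sketch; it corresponds to the looser class in equation 4.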
We consider the following assumptions on the target PDE map $\Phi$, the encoders $E_{\mathcal{X}}^{n},E_{\mathcal{Y}}^{n}$, the decoders $D_{\mathcal{X}}^{n},D_{\mathcal{Y}}^{n}$, and the data set $S$ in our theoretical framework.

Assumption 1 (Compactly supported measure). The probability measure $\gamma$ is supported on a compact set $\Omega_{\mathcal{X}}\subset\mathcal{X}$. For any $u\in\Omega_{\mathcal{X}}$, there exists $R_{\mathcal{X}}>0$ such that $\|u\|_{\mathcal{X}}\leq R_{\mathcal{X}}$. Here, $\|\cdot\|_{\mathcal{X}}$ denotes the associated norm of the space $\mathcal{X}$.

Assumption 2 (Lipschitz operator). There exists $L_{\Phi}>0$ such that for any $u_1,u_2\in\Omega_{\mathcal{X}}$,

$$\|\Phi(u_{1})-\Phi(u_{2})\|_{\mathcal{Y}}\leq L_{\Phi}\|u_{1}-u_{2}\|_{\mathcal{X}}.$$

Here, $\|\cdot\|_{\mathcal{Y}}$ denotes the associated norm of the space $\mathcal{Y}$.

Remark 1. Assumption 1 and Assumption 2 imply that the images $v=\Phi(u)$ are bounded by $R_{\mathcal{Y}}:=L_{\Phi}R_{\mathcal{X}}$ for all $u\in\Omega_{\mathcal{X}}$. The Lipschitz constant $L_{\Phi}$ will be explicitly computed in Section 3 for different PDE operators.

Assumption 3 (Lipschitz encoders and decoders). The empirical encoders and decoders $E_{\mathcal{X}}^{n},D_{\mathcal{X}}^{n},E_{\mathcal{Y}}^{n},D_{\mathcal{Y}}^{n}$ satisfy the following properties:

$$E_{\mathcal X}^{n}(0_{\mathcal X})=\mathbf{0}\,,\quad D_{\mathcal X}^{n}(\mathbf{0})=0_{\mathcal X}\,,\quad E_{\mathcal Y}^{n}(0_{\mathcal Y})=\mathbf{0}\,,\quad D_{\mathcal Y}^{n}(\mathbf{0})=0_{\mathcal Y}\,,$$

where $\mathbf{0}$ denotes the zero vector and $0_{\mathcal{X}},0_{\mathcal{Y}}$ denote the zero functions in $\mathcal{X}$ and $\mathcal{Y}$, respectively. Moreover, we assume all empirical encoders are Lipschitz operators such that

$$\|E_{\mathcal P}^{n}u_{1}-E_{\mathcal P}^{n}u_{2}\|_{2}\leq L_{E_{\mathcal P}^{n}}\|u_{1}-u_{2}\|_{\mathcal P}\,,\quad\mathcal P=\mathcal X,\mathcal Y\,,$$

where $\|\cdot\|_{2}$ denotes the Euclidean $L^2$ norm, $\|\cdot\|_{\mathcal{P}}$ denotes the associated norm of the Banach space $\mathcal{P}$, and $L_{E_{\mathcal{P}}^{n}}$ is the Lipschitz constant of the encoder $E_{\mathcal{P}}^{n}$. Similarly, we also assume that the decoders $D_{\mathcal{P}}^{n}$, $\mathcal{P}=\mathcal{X},\mathcal{Y}$, are Lipschitz with constants $L_{D_{\mathcal{P}}^{n}}$.

Assumption 4 (Noise). For $i=1,\ldots,2n$, the noise $\varepsilon_i$ satisfies: 1. $\varepsilon_i$ is independent of $u_i$; 2. $\mathbb{E}[\varepsilon_i]=0$; 3. there exists $\sigma>0$ such that $\|\varepsilon_i\|_{\mathcal{Y}}\leq\sigma$.

Remark 2. The above assumptions on the noise and Lipschitz encoders imply that $\|E_{\mathcal{Y}}^{n}(\Phi(u_i)+\varepsilon_i)-E_{\mathcal{Y}}^{n}(\Phi(u_i))\|_{2}\leq L_{E_{\mathcal{Y}}^{n}}\sigma$.
## 2.2 Main Results

For a trained neural network $\Gamma_{\rm NN}$ over the data set $S$, we denote its generalization error as

$$\mathcal{E}_{gen}(\Gamma_{\rm NN}):=\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\rm NN}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\right].$$

Note that we omit its dependence on $S$ in the notation. We also define the following quantity,

$$\mathcal{E}_{noise,proj}:=L_{\Phi}^{2}\,\mathbb{E}_{S}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\mathbb{E}_{S}\mathbb{E}_{w\sim\Phi_{\#}\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(w)-w\|_{\mathcal{Y}}^{2}\right]+\sigma^{2}+n^{-1},$$

where $\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}:=D_{\mathcal{X}}^{n}\circ E_{\mathcal{X}}^{n}$ and $\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}:=D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}$ denote the encoder-decoder projections on $\mathcal{X}$ and $\mathcal{Y}$, respectively. Here the first term shows that the encoder/decoder projection error $\mathbb{E}_{S}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]$ for $\mathcal{X}$ is amplified by the Lipschitz constant; the second term is the encoder/decoder projection error for $\mathcal{Y}$; the third term stands for the noise; and the last term is the small quantity $n^{-1}$. This quantity will appear frequently in our main results.
Theorem 1. Suppose Assumptions 1-4 hold. Let $\Gamma_{\rm NN}$ be the minimizer of the optimization problem equation 1, with the network architecture $\mathcal{F}_{\rm NN}(d_{\mathcal{Y}},L,p,K,\kappa,M)$ defined in equation 3 with parameters

$$L=\Omega\left(\ln\Big(\frac{n}{d_{\mathcal{Y}}}\Big)\right)\,,\quad p=\Omega\left(d_{\mathcal{Y}}^{\frac{2-d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{\frac{d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}\right)\,,\quad K=\Omega(pL)\,,\quad\kappa=\Omega(M^{2})\,,\quad M\geq\sqrt{d_{\mathcal{Y}}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}\,,$$

where the notation $\Omega$ contains constants that solely depend on $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $R_{\mathcal{X}}$ and $d_{\mathcal{X}}$. Then there holds

$$\mathcal{E}_{gen}(\Gamma_{\rm NN})\lesssim d_{\mathcal{Y}}^{\frac{6+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}\big(1+L_{\Phi}^{2-d_{\mathcal{X}}}\big)\left(\ln^{3}\frac{n}{d_{\mathcal{Y}}}+\ln^{2}n\right)+\mathcal{E}_{noise,proj}\,.$$

Here $\lesssim$ contains constants that solely depend on $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $R_{\mathcal{X}}$ and $d_{\mathcal{X}}$.

Theorem 2. Suppose Assumptions 1-4 hold true. Let $\Gamma_{\rm NN}$ be the minimizer of the optimization problem equation 1 with the network architecture $\mathcal{F}_{\rm NN}(d_{\mathcal{Y}},L,p,M)$ defined in equation 4 with parameters

$$M\geq\sqrt{d_{\mathcal{Y}}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}\,,\quad\text{and}\quad Lp\geq\left\lceil d_{\mathcal{Y}}^{\frac{4-d_{\mathcal{X}}}{4+2d_{\mathcal{X}}}}n^{\frac{d_{\mathcal{X}}}{4+2d_{\mathcal{X}}}}\right\rceil\,.\tag{5}$$

Then we have

$$\mathcal{E}_{gen}(\Gamma_{\rm NN})\lesssim L_{\Phi}^{2}\log(L_{\Phi})\,d_{\mathcal{Y}}^{\frac{8+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}\,n^{-\frac{2}{2+d_{\mathcal{X}}}}\log n+\mathcal{E}_{noise,proj}\,,\tag{6}$$

where $\lesssim$ contains constants that depend on $d_{\mathcal{X}}$, $L_{E_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$ and $R_{\mathcal{X}}$.
+
Remark 3. The aforementioned results demonstrate that by selecting an appropriate width and depth for the DNN, the generalization error can be broken down into three components: the generalization error of learning the finite-dimensional operator Γ, the projection error of the encoders/decoders, and the noise. Comparing to previous results Liu et al. (2022) under the Hilbert space setting, our estimates show that the noise term in the generalization bound is non-decaying without the inner-product structure in the Banach space setting.
|
206 |
+
|
207 |
+
This is mainly caused by circumventing the inner-product structure via triangle inequalities in the proof. As the number of samples n increases, the generalization error decreases exponentially. Although the presence of dX in the exponent of the sample complexity n initially appears pessimistic, we will demonstrate that it can be eliminated when the input data ΩX of the target operator exhibits a low-dimensional data structure or when the target operator itself has a low-complexity structure. These assumptions are often satisfied for specific PDE operators with appropriate encoders. These results also imply that when dX is large, the neural network width p does not need to increase as the output dimension dY increases. The main difference between Theorem 1 and Theorem 2 lies in the different neural network architectures FNN(dY *, L, p, K, κ, M*)
|
208 |
+
and FNN(dY *, L, p, M*). As a consequence, Theorem 2 has a smaller asymptotic lower bound Ω(n 1/2) of the neural network width p in the large dX regime, whereas the asymptotic lower bound is Ω(n) in Theorem 1.
|
209 |
+
|
210 |
+
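As a quick illustration of how these prescriptions translate into concrete sizes, the helper below evaluates the depth and width scalings of Theorem 1 with all $\Omega(\cdot)$ constants set to one; the constants and the chosen inputs are placeholders, not values prescribed by the theory.

```python
import math

def suggested_architecture(n, d_x, d_y):
    # L = Omega(ln(n / d_Y)) and p = Omega(d_Y^{(2-d_X)/(2+d_X)} n^{d_X/(2+d_X)}),
    # with the hidden constants set to 1 purely for illustration.
    depth = max(2, math.ceil(math.log(n / d_y)))
    width = math.ceil(d_y ** ((2 - d_x) / (2 + d_x)) * n ** (d_x / (2 + d_x)))
    return depth, width

print(suggested_architecture(n=10_000, d_x=16, d_y=16))
```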
## Estimates With Special Data And Operator Structures

The generalization error estimates presented in Theorems 1-2 are effective when the input dimension $d_{\mathcal{X}}$ is relatively small. However, in practical scenarios, many bases are often required to reduce the encoder/decoder projection error, resulting in a large value of $d_{\mathcal{X}}$. Consequently, the decay rate of the generalization error indicated in Theorems 1-2 becomes stagnant due to its exponential dependence on $d_{\mathcal{X}}$.

Nevertheless, by the famous "manifold hypothesis", high-dimensional data are often assumed to lie in the vicinity of a low-dimensional manifold. Specifically, we assume that the encoded vectors lie on a $d_0$-dimensional manifold with $d_0\ll d_{\mathcal{X}}$. Such a data distribution has been observed in many applications, including PDE solution sets, manifold learning, and image recognition. This assumption is formulated as follows.
Assumption 5. Let $d_0<d_{\mathcal{X}}\in\mathbb{N}$. Suppose there exists an encoder $E_{\mathcal{X}}:\mathcal{X}\to\mathbb{R}^{d_{\mathcal{X}}}$ such that $\{E_{\mathcal{X}}(u)\,|\,u\in\Omega_{\mathcal{X}}\}$ lies in a smooth $d_0$-dimensional Riemannian manifold $\mathcal{M}$ that is isometrically embedded in $\mathbb{R}^{d_{\mathcal{X}}}$. The reach Niyogi et al. (2008) of $\mathcal{M}$ is $\tau>0$.

Under Assumption 5, the input data set exhibits a low intrinsic dimensionality. However, this may not hold for the output data set, which is perturbed by noise. The reach of a manifold is the smallest osculating circle radius on the manifold; a manifold with large reach avoids rapid changes and may be easier to learn by neural networks. In the following, we aim to demonstrate that the DNN naturally adapts to the low-dimensional characteristics of the data set. As a result, the estimation error of the network depends solely on the intrinsic dimension $d_0$, rather than the larger ambient dimension $d_{\mathcal{X}}$. We present the following result to support this claim.
Theorem 3. Suppose Assumptions 1-4 and Assumption 5 hold. Let $\Gamma_{\rm NN}$ be the minimizer of the optimization problem equation 1 with the network architecture $\mathcal{F}_{\rm NN}(d_{\mathcal{Y}},L,p,M)$ defined in equation 4 with parameters

$$L=\Omega(\tilde{L}\log\tilde{L})\,,\quad p=\Omega(d_{\mathcal{X}}d_{\mathcal{Y}}\tilde{p}\log\tilde{p})\,,\quad M\geq\sqrt{d_{\mathcal{Y}}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}\,,\tag{7}$$

where $\tilde{L},\tilde{p}>0$ are integers such that $\tilde{L}\tilde{p}\geq\left\lceil d_{\mathcal{Y}}^{\frac{-3d_{0}}{2+2d_{0}}}n^{\frac{d_{0}}{2+2d_{0}}}\right\rceil$. Then we have

$$\mathcal{E}_{gen}(\Gamma_{\rm NN})\lesssim L_{\Phi}^{2}\log(L_{\Phi})\,d_{\mathcal{Y}}^{\frac{6+d_{0}}{2+2d_{0}}}\,n^{-\frac{2}{2+d_{0}}}\log^{6}n+\mathcal{E}_{noise,proj}\,,\tag{8}$$

where the constants in $\lesssim$ and $\Omega(\cdot)$ solely depend on $d_0$, $\log d_{\mathcal{X}}$, $R_{\mathcal{X}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $\tau$, and the surface area of $\mathcal{M}$.
It is important to note that the estimate equation 8 depends at most polynomially on $d_{\mathcal{X}}$ and $d_{\mathcal{Y}}$. The rate of decay with respect to the sample size is no longer influenced by the ambient input dimension $d_{\mathcal{X}}$. Thus, our findings indicate that the CoD can be mitigated through the utilization of the "manifold hypothesis." To effectively capture the low-dimensional manifold structure of the data, the width of the DNN should be on the order of $O(d_{\mathcal{X}})$. Additionally, another characteristic often observed in PDE problems is the low complexity of the target operator. This holds true when the target operator is composed of several alternating sequences of a few linear and nonlinear transformations with only a small number of inputs. We quantify the notion of low-complexity operators in the following context.
Assumption 6. Let $0<d_0\leq d_{\mathcal{X}}$. Assume there exist $E_{\mathcal{X}},D_{\mathcal{X}},E_{\mathcal{Y}},D_{\mathcal{Y}}$ such that for any $u\in\Omega_{\mathcal{X}}$, we have

$$\Pi_{\mathcal{Y},d_{\mathcal{Y}}}\circ\Phi(u)=D_{\mathcal{Y}}\circ g\circ E_{\mathcal{X}}(u),$$

where $g:\mathbb{R}^{d_{\mathcal{X}}}\to\mathbb{R}^{d_{\mathcal{Y}}}$ is defined as

$$g(a)=\left[g_{1}(V_{1}^{\top}a),\cdots,g_{d_{\mathcal{Y}}}(V_{d_{\mathcal{Y}}}^{\top}a)\right]\,,$$

where $V_{k}\in\mathbb{R}^{d_{\mathcal{X}}\times d_{0}}$ is a matrix and $g_{k}:\mathbb{R}^{d_{0}}\to\mathbb{R}$ is a real-valued function for $k=1,\ldots,d_{\mathcal{Y}}$. See an illustration in (44).

In Assumption 6, when $d_0=1$ and $g_1=\cdots=g_{d_{\mathcal{Y}}}$, $g(a)$ is the composition of a pointwise nonlinear transform and a linear transform of $a$. In particular, Assumption 6 holds for any linear map.
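The shape of Assumption 6 is easy to state in code; the short sketch below builds a toy operator with exactly this structure, using random projection matrices $V_k$ and a generic small-input nonlinearity standing in for the $g_k$ (all dimensions are illustrative).

```python
import numpy as np

d_X, d_Y, d_0 = 64, 64, 2
rng = np.random.default_rng(0)
V = rng.standard_normal((d_Y, d_X, d_0))       # V_k in R^{d_X x d_0}, stacked over k

def g_k(z):
    # a generic nonlinearity with only d_0 inputs; the identity recovers a linear map
    return np.tanh(z).sum()

def low_complexity_operator(a):
    # g(a) = [g_1(V_1^T a), ..., g_{d_Y}(V_{d_Y}^T a)]
    return np.array([g_k(V[k].T @ a) for k in range(d_Y)])

a = rng.standard_normal(d_X)
print(low_complexity_operator(a).shape)        # (d_Y,)
```

Each output entry depends on the encoded input only through a $d_0$-dimensional projection, which is the feature the DNN has to learn.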
Theorem 4. Suppose Assumptions 1-4 and Assumption 6 hold. Let $\Gamma_{\rm NN}$ be the minimizer of the optimization problem (1) with the network architecture $\mathcal{F}_{\rm NN}(d_{\mathcal{Y}},L,p,M)$ defined in (4) with parameters

$$Lp=\Omega\left(d_{\mathcal{Y}}^{\frac{4-d_{0}}{4+2d_{0}}}n^{\frac{d_{0}}{4+2d_{0}}}\right)\,,\quad M\geq\sqrt{d_{\mathcal{Y}}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}\,.$$

Then we have

$$\mathcal{E}_{gen}(\Gamma_{\rm NN})\lesssim L_{\Phi}^{2}\log(L_{\Phi})\,d_{\mathcal{Y}}^{\frac{8+d_{0}}{2+d_{0}}}\,n^{-\frac{2}{2+d_{0}}}\log n+\mathcal{E}_{noise,proj}\,,\tag{9}$$

where the constants in $\lesssim$ and $\Omega(\cdot)$ solely depend on $d_0$, $R_{\mathcal{X}}$, $R_{\mathcal{Y}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$.
Remark 4. Under Assumption 6, our result indicates that the CoD can be mitigated to a cost $O(n^{-\frac{2}{2+d_{0}}})$ because the main task of the DNN is to learn the nonlinear transforms $g_1,\cdots,g_{d_{\mathcal{Y}}}$, which are functions over $\mathbb{R}^{d_{0}}$.

In practice, a PDE operator might be the repeated composition of operators in Assumption 6. This motivates a more general low-complexity assumption below.
+
Assumption 7. Let 0 < d1*, . . . , d*k ≤ dX and 0 < ℓ0*, . . . , ℓ*k ≤ min{dX , dY } with ℓ0 = dX and ℓk = dY . Assume there exists EX , DX , EY , DY such that for any u ∈ ΩX , we have
|
280 |
+
|
281 |
+
$$\Pi_{\mathcal{Y},d_{\mathcal{Y}}}\circ\Phi(u)=D_{\mathcal{Y}}\circ G^{k}\circ\cdots\circ G^{1}\circ E_{\mathcal{X}}(u),$$
|
282 |
+
|
283 |
+
where Gi: R
|
284 |
+
ℓi−1 → R
|
285 |
+
ℓiis defined as
|
286 |
+
|
287 |
+
$$G^{i}(a)=\left[g_{1}^{i}((V_{1}^{i})^{\top}a),\cdots,g_{\ell_{i}}^{i}((V_{\ell_{i}}^{i})^{\top}a)\right]\,,$$
|
288 |
+
|
289 |
+
where the matrix is V
|
290 |
+
i j ∈ R
|
291 |
+
di×ℓi−1 and the real valued function is g i j
|
292 |
+
: R
|
293 |
+
di → R for j = 1*, . . . , ℓ*i, i = 1*, . . . , k*.
|
294 |
+
|
295 |
+
See an illustration in (45).
|
296 |
+
|
297 |
+
Theorem 5. Suppose Assumptions 1-4, and Assumption 7 hold. Let ΓNN be the minimizer of the optimization (1) with the network architecture FNN(dY , kL, p, M) *defined in equation 4 with parameters*
|
298 |
+
|
299 |
+
$$L p=\Omega\left(d_{\mathcal{Y}}^{\frac{4-d_{m a x}}{4+2d_{m a x}}}n^{\frac{d_{m a x}}{4+2d_{m a x}}}\right)\,,M\geq\sqrt{\ell_{m a x}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}\,,$$
|
300 |
+
_where $d_{max}=\max\{d_{i}\}_{i=1}^{k}$ and $\ell_{max}=\max\{\ell_{i}\}_{i=1}^{k}$. Then we have
|
301 |
+
$$\mathcal{E}_{g e n}(\Gamma_{\mathrm{NN}})\lesssim L_{\Phi}^{2}\log(L_{\Phi})\ell_{m a x}^{\frac{8+d_{m a x}}{2+d_{m a x}}}n^{-\frac{2}{2+d_{m a x}}}\log n+\mathcal{E}_{n o i s e,p r o j}\,,$$
|
302 |
+
|
303 |
+
where the constants in ≲ and Ω(·) solely depend on k, dmax, ℓmax, RX , LEnX
|
304 |
+
, LEn Y
|
305 |
+
|
306 |
+
$$,L_{D_{\chi}^{n}},L_{D_{\cal{\cal{\cal{D}}}}^{n}}.$$
|
307 |
+
|
308 |
+
## Discretization Invariant Neural Networks

In this subsection, we demonstrate that our main results also apply to neural networks with the discretization-invariance property. A neural network is considered discretization invariant if it can be trained and evaluated on data that are discretized in various formats. For example, the input data $u_i$, $i=1,\ldots,n$ may consist of images with different resolutions, or $\mathbf{u}_i=[u_i(x_1),\ldots,u_i(x_{s_i})]$ may represent the values of $u_i$ sampled at different locations. Neural networks inherently have fixed input and output sizes, making them incompatible with direct training on a data set $\{(\mathbf{u}_i,\mathbf{v}_i),\ i=1,\ldots,n\}$ where the data pairs $(\mathbf{u}_i\in\mathbb{R}^{d_i},\mathbf{v}_i\in\mathbb{R}^{d_i})$ have different resolutions $d_i$, $i=1,\ldots,n$. Modifications of the encoders are required to map inputs of varying resolutions to a uniform Euclidean space. This can be achieved through linear interpolation or data-driven methods such as nonlinear integral transforms Ong et al. (2022).

Our previous analysis assumes that the data $(u_i,v_i)\in\mathcal{X}\times\mathcal{Y}$ are mapped to discretized data $(\mathbf{u}_i,\mathbf{v}_i)\in\mathbb{R}^{d_{\mathcal{X}}}\times\mathbb{R}^{d_{\mathcal{Y}}}$ using the encoders $E_{\mathcal{X}}^{n}$ and $E_{\mathcal{Y}}^{n}$. Now, let us consider the case where the new discretized data $(\mathbf{u}_i,\mathbf{v}_i)\in\mathbb{R}^{s_i}\times\mathbb{R}^{s_i}$ are vectors tabulating function values as follows:

$${\bf u}_{i}=\left[u_{i}(x_{1}^{i})\quad u_{i}(x_{2}^{i})\quad\ldots\quad u_{i}(x_{s_{i}}^{i})\right]\,,\quad{\bf v}_{i}=\left[v_{i}(x_{1}^{i})\quad v_{i}(x_{2}^{i})\quad\ldots\quad v_{i}(x_{s_{i}}^{i})\right]\,.\tag{10}$$

The sampling locations $\mathbf{x}^{i}:=[x_{1}^{i},\ldots,x_{s_i}^{i}]$ are allowed to be different for each data pair $(\mathbf{u}_i,\mathbf{v}_i)$. We can now define the sampling operator at the locations $\mathbf{x}^{i}$ as $P_{\mathbf{x}^{i}}:u\mapsto u(\mathbf{x}^{i})$, where $u(\mathbf{x}^{i}):=\left[u(x_{1}^{i})\ u(x_{2}^{i})\ \ldots\ u(x_{s_i}^{i})\right]$. For the sake of simplicity, we assume that the sampling locations are equally spaced grid points, with $s_i=(r_i+1)^{d}$, where $r_i+1$ represents the number of grid points in each dimension. To achieve discretization invariance, we consider the interpolation operator $I_{\mathbf{x}^{i}}:u(\mathbf{x}^{i})\mapsto\tilde{u}$, where $\tilde{u}$ represents the multivariate Lagrangian polynomial (refer to Leaf & Kaper (1974) for more details). Subsequently, we map the Lagrangian polynomials to their discretization on a uniformly spaced grid mesh $\hat{\mathbf{x}}\in\mathbb{R}^{d_{\mathcal{X}}}$ using the sampling operator $P_{\hat{\mathbf{x}}}$. Here $d_{\mathcal{X}}=(r+1)^{d}$ and $r$ is the highest degree among the Lagrangian polynomials $\tilde{u}$. We further assume that the grid points $\mathbf{x}^{i}$ of all given discretized data are subsets of $\hat{\mathbf{x}}$. We can then construct a discretization-invariant encoder as follows:

$$E_{\mathcal{X}}^{i}=P_{\hat{\mathbf{x}}}\circ I_{\mathbf{x}^{i}}\circ P_{\mathbf{x}^{i}}\,.$$

We can define the encoder $E_{\mathcal{Y}}^{i}$ in a similar manner. The aforementioned discussion can be summarized in the following proposition:
+
Proposition 1. Suppose that the discretized data {(ui, vi), i = 1, . . . , n} *defined in equation 10 are images* of a sampling operator Px i applied to smooth functions (ui, vi)*, and the sampling locations* x i are equally spaced grid points with grid size h*. Let* xˆ ∈ R
|
341 |
+
dX *represent equally spaced grid points that are denser than all* x i, i = 1, . . . , n *with* dX = (r+1)d*. Define the encoder* EiX = EiY = Pˆx ◦Ix i ◦Px i *, and decoder* DiX = DiY = Iˆx.
|
342 |
+
|
343 |
+
Then the encoding error can be bounded as the following:
|
344 |
+
|
345 |
+
$$\mathbb{E}_{u}\left[\|\Pi^{i}_{X,d_{X}}(u)-u\|_{\infty}^{2}\right]\leq Ch^{2r}\|u\|_{\mathbb{C}^{r+1}}^{2}\,,\quad\mathbb{E}_{v}\left[\|\Pi^{i}_{Y,d_{Y}}(v)-v\|_{\infty}^{2}\right]\leq Ch^{2r}\|v\|_{\mathbb{C}^{r+1}}^{2}\,,\tag{11}$$
|
346 |
+
_where $C>0$ is an absolute constant, $\Pi^{i}_{X,d_{X}}:=D^{i}_{X}\circ E^{i}_{X}$ and $\Pi^{i}_{Y,d_{Y}}:=D^{i}_{Y}\circ E^{i}_{Y}$._
|
347 |
+
Proof. This result follows directly from the principles of Multivariate Lagrangian interpolation and Theorem 3.2 in Leaf & Kaper (1974).
|
348 |
+
|
349 |
+
Remark 5. To simplify the analysis, we focus on the L∞ norm in equation 11. However, it is worth noting that L
|
350 |
+
pestimates can be easily derived by utilizing L
|
351 |
+
pspace embedding techniques. Furthermore, C
|
352 |
+
r estimates can be obtained through the proof of Theorem 3.2 in Leaf & Kaper (1974). By observing that the discretization invariant encoder and decoder in Proposition 1 satisfy Assumption 2 and Assumption 4, we can conclude that our main results are applicable to discretization invariant neural networks. In this section, we have solely considered polynomial interpolation encoders, which require the input data to possess a sufficient degree of smoothness and for all training data to be discretized on a finer mesh than the encoding space R
|
353 |
+
|
354 |
+
dX . The analysis of more sophisticated nonlinear encoders and discretization invariant neural networks is a topic for future research. In the subsequent sections, we will observe that numerous operators encountered in PDE problems can be expressed as compositions of low-complexity operators, as stated in Assumption 6 or Assumption 7.
|
355 |
+
|
356 |
+
Consequently, deep operator learning provides means to alleviate the curse of dimensionality, as confirmed by Theorem 4 or its more general form, as presented in Theorem 5.
|
357 |
+
|
358 |
+
## 3 Explicit Complexity Bounds For Various PDE Operator Learning

In practical scenarios, enforcing the uniform bound constraint in architecture (3) is often inconvenient. As a result, architecture (4) is the preferred implementation choice, and in this section we focus solely on architecture (4). We provide five examples of PDEs where the input space $\mathcal{X}$ and output space $\mathcal{Y}$ are not Hilbert spaces. For simplicity, we assume that the computational domain for all PDEs is $\Omega=[-1,1]^{d}$. Additionally, we assume that the input space $\mathcal{X}$ exhibits Hölder regularity; in other words, all inputs possess a bounded Hölder norm $\|\cdot\|_{C^{s}}$ with $s>0$. The Hölder norm is defined as $\|f\|_{C^{s}}=\|f\|_{C^{k}}+\max_{|\beta|=k}|D^{\beta}f|_{C^{0,\alpha}}$, where $s=k+\alpha$, $k$ is an integer, $0<\alpha<1$, and $|\cdot|_{C^{0,\alpha}}$ represents the $\alpha$-Hölder semi-norm $|f|_{C^{0,\alpha}}=\sup_{x\neq y}\frac{|f(x)-f(y)|}{\|x-y\|^{\alpha}}$. It can be shown that the output space $\mathcal{Y}$ also admits Hölder regularity for all examples considered in this section. Similar results can be derived when both the input space and output space have Sobolev regularity. Consequently, we can employ the standard spectral method as the encoder/decoder for both the input and output spaces. Specifically, the encoder $E_{\mathcal{X}}^{n}$ maps $u\in\mathcal{X}$ to the space $\mathcal{P}_{d}^{r}$, the product of univariate polynomials of degree less than $r$; the input dimension is thus $d_{\mathcal{X}}=\dim\mathcal{P}_{d}^{r}=r^{d}$. We then assign the $L^{p}$-norm ($p>1$) to both the input space $\mathcal{X}$ and the output space $\mathcal{Y}$. The encoder/decoder projection error for both $\mathcal{X}$ and $\mathcal{Y}$ can be derived using the following lemma from Schultz (1969).

Lemma 1 (Theorem 4.3 (ii) of Schultz (1969)). Let an integer $k\geq0$ and $0<\alpha<1$. For any $f\in C^{s}([-1,1]^{d})$ with $s=k+\alpha$, denote by $\tilde{f}$ its spectral approximation in $\mathcal{P}_{d}^{r}$. Then there holds

$$\|f-{\tilde{f}}\|_{\infty}\leq C_{d}\|f\|_{C^{s}}r^{-s}.$$

We can then bound the projection error

$$\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}u-u\|_{L^{p}([-1,1]^{d})}^{p}=\int_{[-1,1]^{d}}|u-\tilde{u}|^{p}dx\leq C_{d}^{p}2^{d}\|u\|_{C^{s}}^{p}r^{-ps}\leq C_{d}^{p}2^{d}\|u\|_{C^{s}}^{p}d_{\mathcal{X}}^{-\frac{ps}{d}}.$$

Therefore,

$$\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}u-u\|_{L^{p}([-1,1]^{d})}^{2}\leq C_{d}^{2}2^{2d/p}\|u\|_{C^{s}}^{2}d_{\mathcal{X}}^{-\frac{2s}{d}}.\tag{12}$$

Similarly, we can also derive that

$$\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(w)-w\|_{L^{p}([-1,1]^{d})}^{2}\leq C_{d}^{2}2^{2d/p}\|u\|_{C^{t}}^{2}d_{\mathcal{Y}}^{-\frac{2t}{d}}L_{\Phi}^{2},\tag{13}$$

given that the output $w=\Phi(u)$ is in $C^{t}$ for some $t>0$.

In the following, we present several examples of PDEs that satisfy different assumptions, including the low-dimensional Assumption 5, the low-complexity Assumption 6, and Assumption 7. In particular, the solution operators of the Poisson equation, the parabolic equation, and the transport equation are linear operators, implying that Assumption 6 is satisfied with the $g_i$'s being identity functions and $d_0=1$. The solution operator of the Burgers equation is the composition of multiple numerical integrations, the pointwise evaluation of an exponential function $g_{j}^{1}(\cdot)=\exp(\cdot)$, and the pointwise division $g_{j}^{2}(a,b)=a/b$; it thus satisfies Assumption 7 with $d_1=1$ and $d_2=2$. In parametric equations, we consider the forward operator that maps a media function $a(x)$ to the solution $u$. In most applications of such forward maps, the media function $a(x)$ represents natural images, such as CT scans for breast cancer diagnosis; therefore, it is often assumed that Assumption 5 holds.
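A spectral encoder/decoder of this kind can be sketched in a few lines. The 1D example below maps a function to its first $r$ Chebyshev coefficients (so $d_{\mathcal{X}}=r$) and decodes by evaluating the truncated expansion; the test function and quadrature choices are illustrative, and the observed error decay mirrors the $r^{-s}$ rate in Lemma 1.

```python
import numpy as np

def spectral_encode(u, r, quad_pts=256):
    x = np.cos(np.pi * (np.arange(quad_pts) + 0.5) / quad_pts)   # Chebyshev nodes
    return np.polynomial.chebyshev.chebfit(x, u(x), deg=r - 1)   # d_X = r coefficients

def spectral_decode(coeffs, x):
    return np.polynomial.chebyshev.chebval(x, coeffs)

u = lambda x: np.abs(x) ** 2.5                 # a function of limited smoothness
x_test = np.linspace(-1, 1, 1000)
for r in (4, 8, 16, 32):
    err = np.max(np.abs(spectral_decode(spectral_encode(u, r), x_test) - u(x_test)))
    print(r, err)                              # projection error decreases as r grows
```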
## 3.1 Poisson Equation

Consider the Poisson equation, which seeks $u$ such that

$$\Delta u=f,\tag{14}$$

where $x\in\mathbb{R}^{d}$ and $|u(x)|\to0$ as $|x|\to\infty$. The fundamental solution of equation 14 is given by

$$\Psi(x)=\left\{\begin{array}{ll}{{\frac{1}{2\pi}\ln|x|\,,}}&{{\mathrm{~for~}d=2,}}\\ {{\frac{-1}{w_{d}}|x|^{2-d}\,,}}&{{\mathrm{~for~}d\geq3,}}\end{array}\right.$$

where $w_{d}$ is the surface area of a unit ball in $\mathbb{R}^{d}$. Assume that the source $f(x)$ is a smooth function compactly supported in $\mathbb{R}^{d}$. There exists a unique solution to equation 14 given by the convolution with the fundamental solution, $u(x)=\Psi*f$. To show that the solution operator is Lipschitz, we assume the sources $f,g\in C^{k}(\mathbb{R}^{d})$ have compact support and apply Young's inequality to get

$$\|u-v\|_{C^{k}(\mathbb{R}^{d})}=\|D^{k}(u-v)\|_{L^{\infty}(\mathbb{R}^{d})}=\|\Psi*D^{k}(f-g)\|_{L^{\infty}(\mathbb{R}^{d})}\leq\|\Psi\|_{L^{p}(\mathbb{R}^{d})}\|f-g\|_{C^{k}(\Omega)}|\Omega|^{1/q},\tag{15}$$

where $p,q\geq1$ are such that $1/p+1/q=1$. Here $\Omega$ is the support of $f$ and $g$.

For the Poisson equation (14) on an unbounded domain, the computation is often implemented over a truncated finite domain $\Omega$. For simplicity, we assume the source condition $f$ is randomly generated in the space $C^{k}(\Omega)$ from a random measure $\gamma$. Since the solution $u$ is a convolution of the source $f$ with a smooth kernel, both $f$ and $u$ are in $C^{k}(\Omega)$.

We then choose the encoder and decoder to be the spectral method. Applying equation 12, the encoder and decoder error of the input space can be calculated as follows:

$$\mathbb{E}_{f}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(f)-f\|_{L^{p}(\Omega)}^{2}\right]\leq C_{d,p}d_{\mathcal{X}}^{-\frac{2k}{d}}\mathbb{E}_{f}\left[\|f\|_{C^{k}(\Omega)}^{2}\right].$$

Similarly, applying Lemma 1 and equation 15, the encoder and decoder error of the output space is

$$\mathbb{E}_{S}\mathbb{E}_{f\sim\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(u)-u\|_{L^{p}(\Omega)}^{2}\right]\leq C_{d,p}d_{\mathcal{Y}}^{-\frac{2k}{d}}\mathbb{E}_{f}\left[\|\Psi*f\|_{C^{k}(\Omega)}^{2}\right]\leq C_{d,p,\Omega}d_{\mathcal{Y}}^{-\frac{2k}{d}}\mathbb{E}_{f}\left[\|f\|_{C^{k}(\Omega)}^{2}\right].$$

Notice that the solution $u(y)=\int_{\mathbb{R}^{d}}\Psi(y-x)f(x)dx$ is a linear integral transform of $f$, and that all linear maps are special cases of Assumption 6 with $g$ being the identity map. In particular, Assumption 6 holds by setting the column vector $V_{k}$ as the numerical integration weights of $\Psi(x-y_{k})$ and setting the $g_{k}$'s as the identity map with $d_0=1$ for $k=1,\cdots,d_{\mathcal{Y}}$. By applying Theorem 4, we obtain

$$\mathbb{E}_{S}\mathbb{E}_{f}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\text{NN}}\circ E_{\mathcal{X}}^{n}(f)-\Phi(f)\|_{L^{p}(\Omega)}^{2}\lesssim r^{3d}n^{-2/3}\log n+(\sigma^{2}+n^{-1})+r^{-2k}\mathbb{E}_{f}\left[\|f\|_{C^{k}(\Omega)}^{2}\right],\tag{16}$$

where the input dimension is $d_{\mathcal{X}}=d_{\mathcal{Y}}=r^{d}$ and $\lesssim$ contains constants that depend on $d_{\mathcal{X}}$, $d$, $p$ and $|\Omega|$.

Remark 6. The above result equation 16 suggests that the generalization error is small if we have a large number of samples, a small noise, and good regularity of the input samples. Importantly, the decay rate with respect to the number of samples is independent of the encoding dimension $d_{\mathcal{X}}$ or $d_{\mathcal{Y}}$.
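As an illustration of how training pairs $(f,u)$ for this operator can be produced, the sketch below solves $\Delta u=f$ on a periodic box with an FFT solve, a simple surrogate for the free-space convolution $u=\Psi*f$ described above; the grid size, source family, and box are illustrative choices.

```python
import numpy as np

def poisson_periodic_solve(f, box_length=2.0):
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lap = -(kx ** 2 + ky ** 2)
    lap[0, 0] = 1.0                       # avoid division by zero for the mean mode
    u_hat = np.fft.fft2(f) / lap
    u_hat[0, 0] = 0.0                     # fix the free constant (zero-mean solution)
    return np.real(np.fft.ifft2(u_hat))

n_grid = 64
x = np.linspace(-1, 1, n_grid, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
rng = np.random.default_rng(0)
c = rng.uniform(-0.5, 0.5, size=2)                       # random source location
f = np.exp(-50 * ((X - c[0]) ** 2 + (Y - c[1]) ** 2))
f -= f.mean()                                            # compatibility on the torus
u = poisson_periodic_solve(f)                            # one (f, u) training pair
```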
## 3.2 Parabolic Equation

We consider the following parabolic equation that seeks $u(x,t)$ such that

$$\left\{\begin{array}{cc}{{u_{t}-\Delta u=0}}&{{\mathrm{in~}\mathbb{R}^{d}\times(0,\infty),}}\\ {{u=g}}&{{\mathrm{on~}\mathbb{R}^{d}\times\{t=0\}.}}\end{array}\right.\tag{17}$$

The fundamental solution to equation 17 is given by $\Lambda(x,t)=(4\pi t)^{-d/2}e^{-\frac{|x|^{2}}{4t}}$ for $x\in\mathbb{R}^{d}$, $t>0$. The solution map $g(\cdot)\mapsto u(\cdot,T)$ can be expressed as a convolution with the fundamental solution, $u(\cdot,T)=\Lambda(\cdot,T)*g$, where $T$ is the terminal time. Applying Young's inequality, the Lipschitz constant is $\|\Lambda(\cdot,T)\|_{p}$, where $1\leq p\leq\infty$. As an example, in 3D this number can be computed explicitly as $\|\Lambda(\cdot,T)\|_{p}=(4\pi T)^{-\frac{3}{2}\left(1-\frac{1}{p}\right)}p^{-\frac{3}{2p}}$. For the parabolic equation (17), we consider a truncated finite computational domain $\Omega\times[0,T]$ and assume an initial condition $g\in C^{k}(\Omega)$. Due to the convolution structure of the solution map, analogous to that of the Poisson equation, we can obtain a similar result by applying Theorem 4:

$$\mathbb{E}_{S}\mathbb{E}_{g}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\text{NN}}\circ E_{\mathcal{X}}^{n}(g)-\Phi(g)\|_{L^{p}(\Omega)}^{2}\lesssim r^{2d}n^{-2/3}\log n+(\sigma^{2}+n^{-1})+r^{-2k}\mathbb{E}_{g}\left[\|g\|_{C^{k}(\Omega)}^{2}\right]\,,\tag{18}$$

where the encoding dimension is $d_{\mathcal{X}}=d_{\mathcal{Y}}=r^{d}$ and $\lesssim$ contains constants that depend on $d_{\mathcal{X}}$, $d$, $p$, and $|\Omega|$. The reduction of the CoD in the parabolic equation follows in the same manner as for the Poisson equation.
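The 3D Lipschitz constant quoted above can be checked numerically with a radial quadrature of the heat kernel; the quadrature parameters below are arbitrary, and the closed form is the one stated in the text.

```python
import numpy as np

def heat_kernel_lp_norm(T, p, r_max=20.0, n_pts=200_000):
    # || Lambda(., T) ||_{L^p(R^3)} via a simple radial Riemann sum
    r = np.linspace(0.0, r_max, n_pts)
    dr = r[1] - r[0]
    kernel = (4.0 * np.pi * T) ** (-1.5) * np.exp(-r ** 2 / (4.0 * T))
    integrand = 4.0 * np.pi * r ** 2 * kernel ** p
    return (np.sum(integrand) * dr) ** (1.0 / p)

T, p = 0.5, 2.0
closed_form = (4.0 * np.pi * T) ** (-1.5 * (1.0 - 1.0 / p)) * p ** (-3.0 / (2.0 * p))
print(heat_kernel_lp_norm(T, p), closed_form)   # the two values should agree
```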
## 3.3 Transport Equation

We consider the following transport equation that seeks $u$ such that

$$\begin{cases}u_{t}+a(x)\cdot\nabla u=0&\text{in}\quad(0,\infty)\times\mathbb{R}^{d},\\ u(0,x)=u_{0}(x)&\text{in}\quad\mathbb{R}^{d}\,,\end{cases}\tag{19}$$

where $a(x)$ is the drift force field and $u_{0}(x)$ is the initial data. For convenience, we assume that the drift force field satisfies $a\in C^{2}(\mathbb{R}^{d})\cap W^{1,\infty}(\mathbb{R}^{d})$. By the classical theory of ordinary differential equations (ODEs), the initial value problem $\frac{dx(t)}{dt}=a(x(t))$, $x(0)=x$, admits a unique solution $t\mapsto x(t)=\varphi_{t}(x)\in C^{1}(\mathbb{R};\mathbb{R}^{d})$ for any $x\in\mathbb{R}^{d}$. Applying the method of characteristics, the solution of equation 19 is given by $u(t,x):=u_{0}(\varphi_{t}^{-1}(x))$. If we further assume that $u_{0}$ is randomly sampled with bounded $H^{s}$ norm, $s>\frac{3d}{2}$, then by Theorem 5 of Section 7.3 of Evans (2010), we have $u\in C^{1}([0,\infty);\mathbb{R}^{d})$. More specifically, we have

$$\|u(T,\cdot)\|_{C^{1}(\mathbb{R}^{d})}\leq\|u_{0}\|_{H^{s}(\mathbb{R}^{d})}C_{a,T,\Omega},$$

where $C_{a,T,\Omega}>0$ is a constant that depends on the media $a$, the terminal time $T$, and the support $\Omega$ of the initial data. Since the initial data has $C^{1}$ regularity, by equation 12 the encoder/decoder projection error of the input space is controlled via

$$\mathbb{E}_{u_{0}}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u_{0})-u_{0}\|_{L^{p}(\Omega)}^{2}\right]\leq C_{d,p,\Omega}d_{\mathcal{X}}^{-\frac{2}{d}}\mathbb{E}_{u_{0}}\left[\|u_{0}\|_{C^{1}(\Omega)}^{2}\right].$$

Similarly, for the projection error of the output space, we have

$$\mathbb{E}_{S}\mathbb{E}_{u\sim\Phi_{\#}\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(u)-u\|_{L^{p}(\Omega)}^{2}\right]\leq C_{d,p,\Omega}d_{\mathcal{Y}}^{-\frac{2}{d}}\mathbb{E}_{u_{0}}\left[\|u(T,\cdot)\|_{C^{1}(\Omega)}^{2}\right]\leq C_{d,p,a,T,\Omega}d_{\mathcal{Y}}^{-\frac{2}{d}}\mathbb{E}_{u_{0}}\left[\|u_{0}\|_{H^{s}(\Omega)}^{2}\right].$$

We again use the spectral encoder/decoder, so $d_{\mathcal{X}}=d_{\mathcal{Y}}=r^{d}$. Notice that the solution $u(T,x)=u_{0}(\varphi_{T}^{-1}(x))$ is a translation of the initial data $u_{0}$ by $\varphi_{T}^{-1}$, which is a linear transform. Let $V\in\mathbb{R}^{d_{\mathcal{X}}\times d_{\mathcal{Y}}}$ be the corresponding permutation matrix that characterizes the translation by $\varphi_{T}^{-1}$; then $V_{k}^{\top}$ is the $k$-th row of $V$. By setting the $g_{k}$'s as the identity map, Assumption 6 holds with $d_0=1$. Applying Theorem 4, we derive that

$$\mathbb{E}_{S}\mathbb{E}_{u}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\rm NN}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{L^{p}(\Omega)}^{2}\lesssim r^{3d}n^{-2/3}\log n+(\sigma^{2}+n^{-1})+r^{-2}\mathbb{E}_{u_{0}}\left[\|u_{0}\|_{C^{1}(\Omega)}^{2}+\|u_{0}\|_{H^{s}(\Omega)}^{2}\right],\tag{20}$$

where $\lesssim$ contains constants that depend on $d$, $p$, $a$, $r$, $T$ and $\Omega$. The CoD in the transport equation is lessened according to equation 20 in the same manner as in the Poisson and parabolic equations.
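The characteristic representation $u(T,x)=u_{0}(\varphi_{T}^{-1}(x))$ is straightforward to evaluate numerically; the 1D sketch below integrates the characteristic ODE backward in time with explicit Euler, with an arbitrary smooth drift and initial datum chosen purely for illustration.

```python
import numpy as np

a = lambda x: 0.5 + 0.25 * np.sin(x)       # drift field (smooth and bounded)
u0 = lambda x: np.exp(-x ** 2)             # initial data

def transport_solution(x, T, n_steps=1000):
    dt = T / n_steps
    y = np.array(x, dtype=float)
    for _ in range(n_steps):               # flow backward: y <- y - dt * a(y)
        y = y - dt * a(y)
    return u0(y)                           # u(T, x) = u_0(phi_T^{-1}(x))

x_grid = np.linspace(-3, 3, 200)
u_T = transport_solution(x_grid, T=1.0)    # (u0, u_T) is one training pair
```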
## 3.4 Burgers Equation
|
507 |
+
|
508 |
+
We consider the 1D Burgers equation with periodic boundary conditions:
|
509 |
+
|
510 |
+
$$\begin{cases}u_{t}+uu_{x}=\kappa u_{xx}\,,&\text{in}\mathbb{R}\times(0,\infty),\\ u(x,0)=u_{0}(x)\,,&\\ u(-\pi,t)=u(\pi,t)\,,&\end{cases}\tag{21}$$
|
511 |
+
|
512 |
+
where κ > 0 is the viscosity constant. and we consider the solution map u0(·) 7→ u(T, ·). This solution map can be explicitly written using the Cole-Hopf transformation u =
|
513 |
+
−2κvx v where the function v is the solution to the following diffusion equation
|
514 |
+
|
515 |
+
$$\begin{cases}v_{t}=\kappa v_{x x}\\ v(x,0)=v_{0}(x)=\exp\left(-\frac{1}{2\kappa}\int_{-\pi}^{x}u_{0}(s)d s\right)\,.\end{cases}$$
|
516 |
+
|
517 |
+
The solution to the above diffusion equation is given by
|
518 |
+
|
519 |
+
$$v(x,T)=-2\kappa\frac{\int_{\mathbb{R}}\partial_{x}\mathcal{K}(x,y,T)v_{0}(y)dy}{\int_{\mathbb{R}}\mathcal{K}(x,y,T)v_{0}(y)dy}\,,\tag{22}$$
|
520 |
+
|
521 |
+
where the integration kernel K is defined as K(*x, y, t*) = √
|
522 |
+
1 4πκt exp −(x−y)
|
523 |
+
2 4πt . Although there will be no shock formed in the solution of viscous Burger equation, the solution may form a large gradient in finite time for certain initial data, which makes it extremely hard to be approximated by a NN. We assume that the terminal time T is small enough so a large gradient is not formed yet. In fact, it is shown in Heywood
|
524 |
+
& Xie (1997) (Theorem 1) that if T ≤ C∥u0∥
|
525 |
+
−4 H1 , then ∥u(·, T)∥H1 ≤ C∥u0∥H1 . We then assume an initial data u0 is randomly sampled with a uniform bounded H1 norm. By Sobolev embedding, we have
|
526 |
+
|
527 |
+
$$\|u_{0}\|_{C^{0,1/2}}\leq C\|u_{0}\|_{H^{1}}\,,\quad\|u(\cdot,T)\|_{C^{0,1/2}}\leq C\|u_{0}\|_{H^{1}}\,,$$
|
528 |
+
|
529 |
+
By 12, we can control the encoder/decoder projection error for the initial data
|
530 |
+
|
531 |
+
$$\mathbb{E}_{u_{0}}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u_{0})-u_{0}\|_{L^{p}(\Omega)}^{2}\right]\leq C_{d,p}d_{\mathcal{X}}^{-1}\mathbb{E}_{u_{0}}\left[\|u_{0}\|_{H^{1}}^{2}\right]\;.$$
|
532 |
+
|
533 |
+
Since the terminal solution u(·, T) has same regularity as the initial solution, by 13 we also have
|
534 |
+
|
535 |
+
$$\mathbb{E}_{u_{0}}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(u(\cdot,T))-u(\cdot,T)\|_{L^{p}(\Omega)}^{2}\right]\leq C_{d,p}d_{\mathcal{Y}}^{-1}\mathbb{E}_{u_{0}}\left[\|u(\cdot,T)\|_{H^{1}}^{2}\right]\leq C_{d,p}d_{\mathcal{Y}}^{-1}\mathbb{E}_{u_{0}}\left[\|u_{0}\|_{H^{1}}^{2}\right].$$

Similarly, we can choose $d_{\mathcal{X}}=d_{\mathcal{Y}}=r$. The solution map is a composition of three mappings: $u_0\mapsto v_0$, $v_0\mapsto v(\cdot,T)$, and $v(\cdot,T)\mapsto u(\cdot,T)$. More specifically, $v_0(x)=\exp\left(-\frac{1}{2\kappa}\int_{-\pi}^{x}u_0(s)\,ds\right)$, so we can set $V^{1}_{k}\in\mathbb{R}^{d_{\mathcal{X}}\times1}$ as the numerical integration vector on $[-\pi,x_k]$ and $g^{1}_{k}(x)=\exp\left(-\frac{x}{2\kappa}\right)$ for all $k=1,\dots,d_{\mathcal{Y}}$. For the second mapping $v_0\mapsto v(\cdot,T)$ (c.f. equation 22), we set $V^{2}_{k}\in\mathbb{R}^{d_{\mathcal{X}}\times2}$, where the first row is the numerical integration with kernel $\partial_x\mathcal{K}$ and the second row is the numerical integration with kernel $\mathcal{K}$, and we let $g^{2}_{k}(x,y)=-2\kappa\frac{x}{y}$ for all $k=1,\dots,d_{\mathcal{X}}$. For the third mapping $u=-2\kappa v_x/v$, we can set $V^{3}_{k}\in\mathbb{R}^{d_{\mathcal{X}}\times2}$, where the first row is the $k$-th row of the numerical differentiation matrix and the second row is the Dirac-delta vector at $x_k$, and we let $g^{3}_{k}(x,y)=-2\kappa\frac{x}{y}$ for all $k=1,\dots,d_{\mathcal{Y}}$. Therefore, Assumption 7 holds with $d_{\max}=2$ and $l_{\max}=d_{\mathcal{X}}=d_{\mathcal{Y}}=r$.
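
To make this three-mapping decomposition concrete, the following is a minimal numerical sketch (not from the paper) of the composed map $u_0\mapsto v_0\mapsto$ (kernel integrals) $\mapsto u(\cdot,T)$ on a uniform grid. The grid size, viscosity κ, terminal time T, the quadrature rules, and the truncation of the real-line integrals to $[-\pi,\pi]$ are illustrative assumptions only.

```python
import numpy as np

kappa, T, r = 0.1, 0.1, 128
x = np.linspace(-np.pi, np.pi, r, endpoint=False)   # sensor/query points
dx = x[1] - x[0]
u0 = np.sin(x)                                        # example initial datum

# First mapping: v0(x) = exp(-(1/(2*kappa)) * int_{-pi}^{x} u0(s) ds), crude left-endpoint rule.
v0 = np.exp(-np.cumsum(u0) * dx / (2.0 * kappa))

# Second/third mappings: quadrature against the heat kernel K and its x-derivative (cf. eq. 22),
# with the integrals truncated to [-pi, pi] for illustration.
diff = x[:, None] - x[None, :]
K = np.exp(-diff**2 / (4.0 * kappa * T)) / np.sqrt(4.0 * np.pi * kappa * T)
dK = K * (-diff / (2.0 * kappa * T))                  # d/dx of the heat kernel

num = dK @ v0 * dx                                    # int dK(x,y,T) v0(y) dy
den = K @ v0 * dx                                     # int  K(x,y,T) v0(y) dy
uT = -2.0 * kappa * num / den                         # u(x, T) via the Cole-Hopf formula
```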
We then apply Theorem 5 to derive that

$$\mathbb{E}_{S}\mathbb{E}_{u}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u_{0})-u(\cdot,T)\|_{L^{p}(\Omega)}^{2}\lesssim r^{3/2}n^{-1/2}\log n+\left(\sigma^{2}+n^{-1}\right)+r^{-1}\mathbb{E}_{g}\left[\|u_{0}\|_{H^{s}(\Omega)}^{2}\right]\,,\tag{23}$$

where ≲ contains constants that depend on *p, r* and T. The CoD in the Burgers equation is lessened according to equation 23, as in all other PDE examples.

## 3.5 Parametric Elliptic Equation
We consider the 2D elliptic equation with heterogeneous media in this subsection.
$$\begin{cases}-\operatorname{div}(a(x)\nabla_{x}u(x))=0\,,&\text{in }\Omega\subset\mathbb{R}^{2},\\ u=f\,,&\text{on }\partial\Omega.\end{cases}\tag{24}$$
The media coefficient a(x) satisfies α ≤ a(x) ≤ β for all x ∈ Ω, where α and β are positive constants. We further assume that $a(x)\in C^{1}(\Omega)$. We are interested in NN approximation of the forward map Φ : a ↦ u with a fixed boundary condition f, which has wide applications in inverse problems. The forward map is Lipschitz; see Appendix A.2. We apply Sobolev embedding and derive that $u\in C^{0,1/2}(\Omega)$. Since the parameter a has $C^{1}$ regularity, the encoder/decoder projection error of the input space is controlled as

$$\mathbb{E}_{a}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(a)-a\|_{L^{p}(\Omega)}^{2}\right]\leq C_{p}d_{\mathcal{X}}^{-1}\mathbb{E}_{a}\left[\|a\|_{C^{1}(\Omega)}^{2}\right].$$
The solution has 1/2-Hölder regularity, so we have

$$\mathbb{E}_{S}\mathbb{E}_{u\sim\Phi_{\#}\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(u)-u\|_{L^{p}(\Omega)}^{2}\right]=\mathbb{E}_{a}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(u)-u\|_{L^{p}(\Omega)}^{2}\right]\leq C_{p}d_{\mathcal{Y}}^{-\frac{1}{2}}\mathbb{E}_{a}\left[\|u\|_{C^{0,1/2}(\Omega)}^{2}\right]\leq C_{p,\alpha,\beta,f}\,d_{\mathcal{Y}}^{-\frac{1}{2}}.$$
We use the spectral encoder/decoder and choose $d_{\mathcal{X}}=d_{\mathcal{Y}}=r^{2}$. We further assume that the media functions a(x) are randomly sampled on a smooth $d_0$-dimensional manifold. Applying Theorem 3, the generalization error is thus bounded by

$$\mathbb{E}_{S}\mathbb{E}_{u}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{L^{p}(\Omega)}^{2}\lesssim d_{\mathcal{Y}}^{\frac{8+d_{0}}{2+d_{0}}}\,n^{-\frac{2}{2+d_{0}}}\log^{6}n+(\sigma^{2}+n^{-1})+r^{-2}\mathbb{E}_{a}\left[\|a\|_{C^{1}(\Omega)}^{2}\right]+r^{-1}\,,$$
where ≲ contains constants that depend on p, Ω, α, β and f. Here $d_0$ is a constant that characterizes the manifold dimension of the data set of media functions a(x). For instance, the 2D Shepp-Logan phantom Gach et al. (2008) contains multiple ellipsoids with different intensities, so the images in this data set lie on a manifold with a small $d_0$. The decay rate in terms of the number of samples n solely depends on $d_0$; therefore, the CoD of the parametric elliptic equation is mitigated.
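
As a concrete, hypothetical realization of the forward map Φ : a ↦ u in equation 24 (not part of the paper's analysis), the following finite-difference sketch applies Gauss–Seidel sweeps to the conservative 5-point scheme on the unit square; the grid size, sweep count, and the particular a and f are illustrative assumptions, and a production solver would use a sparse direct or multigrid method.

```python
import numpy as np

def solve_forward(a, f_boundary, n_sweeps=500):
    """Gauss-Seidel sweeps for -div(a grad u) = 0 with Dirichlet data f on a uniform grid."""
    m = a.shape[0]
    xs = np.linspace(0.0, 1.0, m)
    u = np.zeros((m, m))
    # Dirichlet boundary condition u = f on the four edges.
    u[0, :]  = f_boundary(xs[0],  xs)
    u[-1, :] = f_boundary(xs[-1], xs)
    u[:, 0]  = f_boundary(xs, xs[0])
    u[:, -1] = f_boundary(xs, xs[-1])
    for _ in range(n_sweeps):                 # convergence is not checked in this sketch
        for i in range(1, m - 1):
            for j in range(1, m - 1):
                # arithmetic face averages of the coefficient a
                aE = 0.5 * (a[i + 1, j] + a[i, j]); aW = 0.5 * (a[i - 1, j] + a[i, j])
                aN = 0.5 * (a[i, j + 1] + a[i, j]); aS = 0.5 * (a[i, j - 1] + a[i, j])
                u[i, j] = (aE * u[i + 1, j] + aW * u[i - 1, j]
                           + aN * u[i, j + 1] + aS * u[i, j - 1]) / (aE + aW + aN + aS)
    return u

# Example: a smooth coefficient with 0 < alpha <= a <= beta and boundary data f(x, y) = x + y.
m = 33
X, Y = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m), indexing="ij")
a = 1.0 + 0.5 * np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # values in [0.5, 1.5]
u = solve_forward(a, lambda x, y: x + y)
```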
## 4 Limitations And Discussions
Our work focuses on exploring the efficacy of fully connected DNNs as surrogate models for solving general PDE problems. We provide an explicit estimation of the training sample complexity for generalization error. Notably, when the PDE solution lies in a low-dimensional manifold or the solution space exhibits low complexity, our estimate demonstrates a logarithmic dependence on the problem resolution, thereby reducing the CoD. Our findings offer a theoretical explanation for the improved performance of deep operator learning in PDE applications.
However, our work relies on the assumption of Lipschitz continuity for the target PDE operator. Consequently, our estimates may not be satisfactory if the Lipschitz constant is large. This limitation hampers the application of our theory to operator learning in PDE inverse problems, which focus on the solution-to-parameter map. Although the solution-to-parameter map is Lipschitz in many applications (e.g., electric impedance tomography, optical tomography, and inverse scattering), certain scenarios may feature an exponentially large Lipschitz constant, rendering our estimates less practical. Therefore, our results cannot fully explain the empirical success of PDE operator learning in such cases.

While our primary focus is on neural network approximation, determining suitable encoders and decoders with small encoding dimensions (dX and dY ) remains a challenging task that we did not emphasize in this work. In Section 2.2, we analyze the naive interpolation as a discretization invariant encoder using a fully connected neural network architecture. However, this analysis is limited to cases where the training data is sampled on an equally spaced mesh and may not be applicable to more complex neural network architectures or situations where the data is not uniformly sampled. Investigating the discretization invariant properties of other neural networks, such as IAE-net Ong et al. (2022), FNO Li et al. (2021), and DeepONet, would be an interesting avenue for future research.
## Acknowledgements
K. C. and H. Y. were partially supported by the US National Science Foundation under awards DMS-2244988, DMS-2206333, and the Office of Naval Research Award N00014-23-1-2007. C. W. was partially supported by the National Science Foundation under awards DMS-2136380 and DMS-2206332.
## References
Ben Adcock, Simone Brugiapaglia, Nick Dexter, and Sebastian Moraga. Near-optimal learning of banach-valued, high-dimensional functions via deep neural networks. *arXiv preprint arXiv:2211.12633*, 2022.

Martin Anthony and Peter L Bartlett. *Neural network learning: Theoretical foundations*, volume 9. Cambridge University Press, Cambridge, 1999.

Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. *IEEE Transactions on Information Theory*, 39(3):930–945, 1993.

Peter L Bartlett, Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks. *The Journal of Machine Learning Research*, 20(1):2285–2301, 2019.

Benedikt Bauer and Michael Kohler. On deep learning as a remedy for the curse of dimensionality in nonparametric regression. *The Annals of Statistics*, 47(4):2261–2285, 2019.

Sergio Botelho, Ameya Joshi, Biswajit Khara, Vinay Rao, Soumik Sarkar, Chinmay Hegde, Santi Adavani, and Baskar Ganapathysubramanian. Deep generative models that solve pdes: Distributed computing for training large data-free models. In *2020 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC) and Workshop on Artificial Intelligence and Machine Learning for Scientific Applications (AI4S)*, pp. 50–63. IEEE, 2020.

Shengze Cai, Zhicheng Wang, Lu Lu, Tamer A Zaki, and George Em Karniadakis. Deepm&mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks. *Journal of Computational Physics*, 436:110296, 2021.

Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao. Efficient approximation of deep relu networks for functions on low dimensional manifolds. *Advances in neural information processing systems*, 32, 2019.

Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao. Nonparametric regression on low-dimensional manifolds using deep relu networks: Function approximation and statistical recovery. *Information and Inference: A Journal of the IMA*, 11(4):1203–1253, 2022.

Abdellah Chkifa, Albert Cohen, and Christoph Schwab. Breaking the curse of dimensionality in sparse polynomial approximation of parametric pdes. *Journal de Mathématiques Pures et Appliquées*, 103(2):400–428, 2015.

Alexander Cloninger and Timo Klock. Relu nets adapt to intrinsic dimensionality beyond the target domain. *arXiv preprint arXiv:2008.02545*, 2020.

George Cybenko. Approximation by superpositions of a sigmoidal function. *Mathematics of control, signals and systems*, 2(4):303–314, 1989.

Maarten V de Hoop, Nikola B Kovachki, Nicholas H Nelsen, and Andrew M Stuart. Convergence rates for learning linear operators from noisy data. *arXiv preprint arXiv:2108.12515*, 2021.

Mo Deng, Shuai Li, Alexandre Goy, Iksung Kang, and George Barbastathis. Learning to synthesize: robust phase retrieval at low photon counts. *Light: Science & Applications*, 9(1):1–16, 2020.

Lawrence C Evans. *Partial differential equations*, volume 19. American Mathematical Soc., 2010.

Max H Farrell, Tengyuan Liang, and Sanjog Misra. Deep neural networks for estimation and inference. *Econometrica*, 89(1):181–213, 2021.

H Michael Gach, Costin Tanase, and Fernando Boada. 2d & 3d shepp-logan phantom standards for mri. In *2008 19th International Conference on Systems Engineering*, pp. 521–526. IEEE, 2008.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In *2013 IEEE international conference on acoustics, speech and signal processing*, pp. 6645–6649. IEEE, 2013.

John G Heywood and Wenzheng Xie. Smooth solutions of the vector burgers equation in nonsmooth domains. *Differential and Integral Equations*, 10(5):961–974, 1997.

Kurt Hornik. Approximation capabilities of multilayer feedforward networks. *Neural networks*, 4(2):251–257, 1991.

Teeratorn Kadeethum, Daniel O'Malley, Jan Niklas Fuhg, Youngsoo Choi, Jonghyun Lee, Hari S Viswanathan, and Nikolaos Bouklas. A framework for data-driven solution and parameter estimation of pdes using conditional generative adversarial networks. *Nature Computational Science*, 1(12):819–829, 2021.

Yuehaw Khoo and Lexing Ying. Switchnet: a neural network model for forward and inverse scattering problems. *SIAM Journal on Scientific Computing*, 41(5):A3182–A3201, 2019.

Michael Kohler and Adam Krzyżak. Adaptive regression estimation with multilayer feedforward neural networks. *Nonparametric Statistics*, 17(8):891–913, 2005.

Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error bounds for fourier neural operators. *Journal of Machine Learning Research*, 22:Art–No, 2021.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Communications of the ACM*, 60(6):84–90, 2017.

Samuel Lanthaler, Siddhartha Mishra, and George E Karniadakis. Error estimates for deeponets: A deep learning framework in infinite dimensions. *Transactions of Mathematics and Its Applications*, 6(1):tnac001, 2022.

Gary K Leaf and Hans G Kaper. l∞-error bounds for multivariate lagrange approximation. *SIAM Journal on Numerical Analysis*, 11(2):363–381, 1974.

Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=c8P9NQVtmnO.

Chensen Lin, Zhen Li, Lu Lu, Shengze Cai, Martin Maxey, and George Em Karniadakis. Operator learning for predicting multiscale bubble growth dynamics. *The Journal of Chemical Physics*, 154(10):104118, 2021.

Hao Liu, Minshuo Chen, Tuo Zhao, and Wenjing Liao. Besov function approximation and binary classification on low-dimensional manifolds using convolutional residual networks. In *International Conference on Machine Learning*, pp. 6770–6780. PMLR, 2021.

Hao Liu, Haizhao Yang, Minshuo Chen, Tuo Zhao, and Wenjing Liao. Deep nonparametric estimation of operators between infinite dimensional spaces. *arXiv preprint arXiv:2201.00217*, 2022.

Jianfeng Lu, Zuowei Shen, Haizhao Yang, and Shijun Zhang. Deep network approximation for smooth functions. *SIAM Journal on Mathematical Analysis*, 53(5):5465–5506, 2021a.

Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. *Nature Machine Intelligence*, 3(3):218–229, 2021b.

Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on fair data. *Computer Methods in Applied Mechanics and Engineering*, 393:114778, 2022.

Chao Ma, Lei Wu, et al. The barron space and the flow-induced function spaces for neural network models. *Constructive Approximation*, 55(1):369–406, 2022.

Riccardo Miotto, Fei Wang, Shuang Wang, Xiaoqian Jiang, and Joel T Dudley. Deep learning for healthcare: review, opportunities and challenges. *Briefings in bioinformatics*, 19(6):1236–1246, 2018.

Ryumei Nakada and Masaaki Imaizumi. Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. *J. Mach. Learn. Res.*, 21(174):1–38, 2020.

Nicholas H Nelsen and Andrew M Stuart. The random feature model for input-output maps between banach spaces. *SIAM Journal on Scientific Computing*, 43(5):A3212–A3243, 2021.

Partha Niyogi, Stephen Smale, and Shmuel Weinberger. Finding the homology of submanifolds with high confidence from random samples. *Discrete & Computational Geometry*, 39(1):419–441, 2008.

Yong Zheng Ong, Zuowei Shen, and Haizhao Yang. Integral autoencoder network for discretization-invariant learning. *The Journal of Machine Learning Research*, 23(1):12996–13040, 2022.

Benjamin Peherstorfer and Karen Willcox. Data-driven operator inference for nonintrusive projection-based model reduction. *Computer Methods in Applied Mechanics and Engineering*, 306:196–215, 2016.

Chang Qiao, Di Li, Yuting Guo, Chong Liu, Tao Jiang, Qionghai Dai, and Dong Li. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. *Nature Methods*, 18(2):194–202, 2021.

Md Ashiqur Rahman, Manuel A Florez, Anima Anandkumar, Zachary E Ross, and Kamyar Azizzadenesheli. Generative adversarial neural operators. *arXiv preprint arXiv:2205.03017*, 2022.

Johannes Schmidt-Hieber. Deep relu network approximation of functions on a manifold. *arXiv preprint arXiv:1908.00695*, 2019.

Johannes Schmidt-Hieber. Nonparametric regression using deep neural networks with relu activation function. *The Annals of Statistics*, 48(4):1875–1897, 2020.

Martin H Schultz. l∞-multivariate approximation theory. *SIAM Journal on Numerical Analysis*, 6(2):161–183, 1969.

Zuowei Shen, Haizhao Yang, and Shijun Zhang. Deep network approximation characterized by number of neurons. *arXiv preprint arXiv:1906.05497*, 2019.

Zuowei Shen, Haizhao Yang, and Shijun Zhang. Deep network with approximation error being reciprocal of width to power of square root of depth. *Neural Computation*, 33(4):1005–1036, 2021.

Charles J Stone. Optimal global rates of convergence for nonparametric regression. *The annals of statistics*, pp. 1040–1053, 1982.

Taiji Suzuki. Adaptivity of deep relu network for learning in besov and mixed smooth besov spaces: optimal rate and curse of dimensionality. *arXiv preprint arXiv:1810.08033*, 2018.

Chunwei Tian, Lunke Fei, Wenxian Zheng, Yong Xu, Wangmeng Zuo, and Chia-Wen Lin. Deep learning on image denoising: An overview. *Neural Networks*, 131:251–275, 2020.

Ting Wang, Petr Plechac, and Jaroslaw Knap. Generative diffusion learning for parametric partial differential equations. *arXiv preprint arXiv:2305.14703*, 2023.

Dmitry Yarotsky. Error bounds for approximations with deep relu networks. *Neural Networks*, 94:103–114, 2017.

Dmitry Yarotsky. Optimal approximation of continuous functions by very deep relu networks. In *Conference on learning theory*, pp. 639–649. PMLR, 2018.

Dmitry Yarotsky. Elementary superexpressive activations. In *International Conference on Machine Learning*, pp. 11932–11940. PMLR, 2021.

## A Appendix

## A.1 Proofs Of The Main Theorems

Proof of Theorem 1. The $L^{2}$ squared error can be decomposed as

$$\begin{array}{l}\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\right]\\ \quad\leq2\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{\mathcal{Y}}^{2}\right]+2\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}\circ\Phi(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\right],\end{array}\tag{25}$$
where the first term $\mathrm{I}=2\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{\mathcal{Y}}^{2}\right]$ is the network estimation error in the $\mathcal{Y}$ space, and the second term $\mathrm{II}=2\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}\circ\Phi(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\right]$ is the empirical projection error, which can be rewritten as
$$\mathrm{II}=2\mathbb{E}_{\mathcal{S}}\mathbb{E}_{w\sim\Phi_{\#}\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(w)-w\|_{\mathcal{Y}}^{2}\right].$$
We aim to derive an upper bound of the first term I. First, note that the decoder $D_{\mathcal{Y}}^{n}$ is Lipschitz (Assumption 3). We have

$$\begin{array}{l}{{\mathrm{I}=2\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-D_{\mathcal{Y}}^{n}\circ E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{\mathcal{Y}}^{2}\right]}}\\ {{\mathrm{}\leq2L_{D_{\mathcal{Y}}^{n}}^{2}\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right].}}\end{array}$$
Conditioned on the data set S1, we can obtain

$$\begin{array}{l}\mathbb{E}_{S_{2}}\mathbb{E}_{u\sim\gamma}\left[\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]\\ \quad=2\mathbb{E}_{S_{2}}\left[\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}\circ\Phi(u_{i})\|_{2}^{2}\right]\\ \qquad+\mathbb{E}_{S_{2}}\mathbb{E}_{u\sim\gamma}\left[\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]-2\mathbb{E}_{S_{2}}\left[\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}\circ\Phi(u_{i})\|_{2}^{2}\right]\\ \quad=T_{1}+T_{2},\end{array}\tag{26}$$
where the first term $T_{1}=2\mathbb{E}_{S_{2}}\left[\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}\circ\Phi(u_{i})\|_{2}^{2}\right]$ includes the DNN approximation error and the projection error in the $\mathcal{X}$ space, and the second term $T_{2}=\mathbb{E}_{S_{2}}\mathbb{E}_{u\sim\gamma}\left[\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]-T_{1}$ captures the variance.
To obtain an upper bound of T1, we apply the triangle inequality to separate the noise from T1:

$$T_{1}\leq2\mathbb{E}_{\mathcal{S}_{2}}\left[\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}(v_{i})\|_{2}^{2}\right]+2\mathbb{E}_{\mathcal{S}_{2}}\left[\frac{1}{n}\sum_{i=n+1}^{2n}\|E_{\mathcal{Y}}^{n}\circ\Phi(u_{i})-E_{\mathcal{Y}}^{n}(v_{i})\|_{2}^{2}\right].$$
Using the definition of ΓNN, we have

$$T_{1}\leq2\mathbb{E}_{S_{2}}\left[\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}(v_{i})\|_{2}^{2}\right]+2L_{E_{\mathcal{Y}}^{n}}^{2}\mathbb{E}_{S_{2}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\varepsilon_{i}\|_{\mathcal{Y}}^{2}.$$
Using Fatou's lemma, we have

$$\begin{array}{l}T_{1}\leq4\mathbb{E}_{S_{2}}\left[\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}\circ\Phi(u_{i})\|_{2}^{2}\right]+6L_{E_{\mathcal{Y}}^{n}}^{2}\mathbb{E}_{S_{2}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\varepsilon_{i}\|_{\mathcal{Y}}^{2}\\ \quad\leq4\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\mathbb{E}_{S_{2}}\left[\frac{1}{n}\sum_{i=n+1}^{2n}\|\Gamma\circ E_{\mathcal{X}}^{n}(u_{i})-E_{\mathcal{Y}}^{n}\circ\Phi(u_{i})\|_{2}^{2}\right]+6L_{E_{\mathcal{Y}}^{n}}^{2}\mathbb{E}_{S_{2}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\varepsilon_{i}\|_{\mathcal{Y}}^{2}\\ \quad=4\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\mathbb{E}_{u}\left[\|\Gamma\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]+6L_{E_{\mathcal{Y}}^{n}}^{2}\mathbb{E}_{S_{2}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\varepsilon_{i}\|_{\mathcal{Y}}^{2}\,.\end{array}\tag{27}$$
To bound the first term on the last line of equation 27, we consider the discrete transform $\Gamma_{d}^{n}:=E_{\mathcal{Y}}^{n}\circ\Phi\circ D_{\mathcal{X}}^{n}$. Note that it is a vector field that maps $\mathbb{R}^{d_{\mathcal{X}}}$ to $\mathbb{R}^{d_{\mathcal{Y}}}$, and by Assumptions 1, 2, and 3 each component $h(\cdot)$ is a function supported on $[-B,B]^{d_{\mathcal{X}}}$ with Lipschitz constant $M:=L_{D_{\mathcal{X}}^{n}}L_{\Phi}L_{E_{\mathcal{Y}}^{n}}$, where $B=R_{\mathcal{X}}L_{E_{\mathcal{X}}^{n}}$. This implies that each component $h$ has an infinity bound $\|h\|_{\infty}\leq R:=BM=L_{D_{\mathcal{X}}^{n}}L_{\Phi}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{X}}L_{E_{\mathcal{X}}^{n}}$.
We now apply the following lemma to the component functions of $\Gamma_{d}^{n}$.
Lemma 2. *For any function* $f\in W^{n,\infty}([-1,1]^{d})$ *and* $\epsilon\in(0,1)$, *assume that* $\|f\|_{W^{n,\infty}}\leq1$. *There exists a function* $\tilde{f}\in\mathcal{F}_{\mathrm{NN}}(1,L,p,K,\kappa,M)$ *such that*
$$\|{\tilde{f}}-f\|_{\infty}<\epsilon,$$
where the parameters of FNN *are chosen as*
$$\begin{array}{l}L=\Omega((n+d)\ln\epsilon^{-1}+n^{2}\ln d+d^{2})\,,\quad p=\Omega(d^{d+n}\epsilon^{-\frac{d}{n}}n^{-d}2d^{d^{2}/n})\,,\\ K=\Omega(n^{2-d}d^{d+n+2}2^{\frac{d^{2}}{n}}\epsilon^{-\frac{d}{n}}\ln\epsilon)\,,\quad\kappa=\Omega(M^{2})\,,\quad M=\Omega(d+n).\end{array}\tag{28}$$
Here all constants hidden in Ω(·) do not depend on any parameters.
Proof. This is a direct consequence of the proof of Theorem 1 in Yarotsky (2017) for $F_{n,d}$.
Let $h_{i}:\mathbb{R}^{d_{\mathcal{X}}}\to\mathbb{R}$, $i=1,\dots,d_{\mathcal{Y}}$, be the components of $\Gamma_{d}^{n}$, then apply Lemma 2 to the rescaled component $\frac{1}{R}h_{i}(B\,\cdot)$ with n = 1. It can be derived that there exists $\tilde{h}_{i}\in\mathcal{F}_{\mathrm{NN}}(1,L,\tilde{p},K,\kappa,M)$ such that
$$\operatorname*{max}_{x\in[-1,1]^{d_{\mathcal{X}}}}\left|\frac{1}{R}h_{i}(Bx)-\tilde{h}_{i}(x)\right|\leq\tilde{\varepsilon}_{1}\;,$$
with parameters chosen as in equation 28, with n = 1, $d=d_{\mathcal{X}}$, and $\epsilon=\tilde{\varepsilon}_{1}$. Using a change of variables, we obtain that
$$\operatorname*{max}_{x\in[-B,B]^{d_{X}}}|h_{i}(x)-R\tilde{h}_{i}({\frac{x}{B}})|\leq R\tilde{\varepsilon}_{1}\,.$$
Assembling the neural networks $R\tilde{h}_{i}(\tfrac{\cdot}{B})$ together, we obtain a neural network $\tilde{\Gamma}_{d}^{n}\in\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,K,\kappa,M)$ with $p=d_{\mathcal{Y}}\tilde{p}$, such that
$$\|\tilde{\Gamma}_{d}^{n}-\Gamma_{d}^{n}\|_{\infty}\leq\varepsilon_{1},\tag{29}$$
Here the parameters of $\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,K,\kappa,M)$ are chosen as
$$\begin{array}{l}L=\Omega(d_{\mathcal{X}}\ln\varepsilon_{1}^{-1})\,,\quad p=\Omega(d_{\mathcal{Y}}\varepsilon_{1}^{-d_{\mathcal{X}}}L_{\Phi}^{-d_{\mathcal{X}}}2^{d_{\mathcal{X}}^{2}})\,,\\ K=\Omega(pL)\,,\quad\kappa=\Omega(M^{2})\,,\quad M\geq\sqrt{d_{\mathcal{Y}}}L_{E_{\mathcal{X}}^{n}}R_{\mathcal{Y}}\,.\end{array}\tag{30}$$
Here the constants in Ω may depend on $L_{D_{\mathcal{X}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{E_{\mathcal{Y}}^{n}}$ and $R_{\mathcal{X}}$. Then we can develop an estimate of T1 as follows:

$$\begin{array}{l}\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\mathbb{E}_{u}\left[\|\Gamma\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]\leq\mathbb{E}_{u}\left[\|\tilde{\Gamma}_{d}^{n}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]\\ \quad\leq2\mathbb{E}_{u}\left[\|\tilde{\Gamma}_{d}^{n}\circ E_{\mathcal{X}}^{n}(u)-\Gamma_{d}^{n}\circ E_{\mathcal{X}}^{n}(u)\|_{2}^{2}\right]+2\mathbb{E}_{u}\left[\|\Gamma_{d}^{n}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]\\ \quad\leq2d_{\mathcal{Y}}\varepsilon_{1}^{2}+2\mathbb{E}_{u}\left[\|\Gamma_{d}^{n}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right],\end{array}\tag{31}$$

where we used the definition of infimum in the first inequality, the triangle inequality in the second inequality, and the approximation equation 29 in the third inequality. Using the definition of Φ, we obtain
$$\begin{array}{l}\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\mathbb{E}_{u}\left[\|\Gamma\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]\leq2d_{\mathcal{Y}}\varepsilon_{1}^{2}+2\mathbb{E}_{u}\left[\|E_{\mathcal{Y}}^{n}\circ\Phi\circ D_{\mathcal{X}}^{n}\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]\\ \quad\leq2d_{\mathcal{Y}}\varepsilon_{1}^{2}+2L_{E_{\mathcal{Y}}^{n}}^{2}L_{\Phi}^{2}\mathbb{E}_{u}\left[\|D_{\mathcal{X}}^{n}\circ E_{\mathcal{X}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]\\ \quad=2d_{\mathcal{Y}}\varepsilon_{1}^{2}+2L_{E_{\mathcal{Y}}^{n}}^{2}L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right],\end{array}\tag{32}$$
where we used the Lipschitz continuity of Φ and $E_{\mathcal{Y}}^{n}$ in the inequality above. Combining equation 32 and equation 27, and applying Assumption 4, we have
$$T_{1}\leq8d_{\mathcal{Y}}\varepsilon_{1}^{2}+8L_{E_{\mathcal{Y}}^{n}}^{2}L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+6L_{E_{\mathcal{Y}}^{n}}^{2}\sigma^{2}.\tag{33}$$
To deal with the term T2, we shall use the covering number estimate of $\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,K,\kappa,M)$, which has been done in Lemma 6 and Lemma 7 in Liu et al. (2022). A direct consequence of these two lemmas is
$$\begin{array}{l}T_{2}\leq\frac{35d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}^{2}R_{\mathcal{Y}}^{2}}{n}\log\mathcal{N}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},\mathcal{F}_{\mathrm{NN}},\|\cdot\|_{\infty}\right)+6\delta\\ \quad\lesssim\frac{d_{\mathcal{Y}}^{2}KL_{\Phi}^{2}}{n}\left(\ln\delta^{-1}+\ln L+\ln(pB)+L\ln\kappa+L\ln p\right)+\delta\\ \quad\lesssim\frac{d_{\mathcal{Y}}^{2}KL_{\Phi}^{2}}{n}\left(\ln\delta^{-1}+\ln B+L\ln\kappa+L\ln p\right)+\delta,\end{array}$$
where we used Lemmas 6 and 7 from Liu et al. (2022) for the second inequality. The constant in ≲ depends on $L_{E_{\mathcal{Y}}^{n}}$ and $R_{\mathcal{X}}$. Substituting the parameters K, B, κ from equation 30, the above estimate gives
$$\begin{array}{l}T_{2}\lesssim L_{\Phi}^{2}d_{\mathcal{Y}}^{2}n^{-1}pL\left(\ln\delta^{-1}+L\ln B+L\ln R+L\ln p\right)+\delta\\ \quad\lesssim L_{\Phi}^{2}d_{\mathcal{Y}}^{2}n^{-1}pL(\ln\delta^{-1}+L^{2})+\delta\\ \quad\lesssim L_{\Phi}^{2}d_{\mathcal{Y}}^{2}n^{-1}p\left(L^{3}+(\ln\delta^{-1})^{2}\right)+\delta,\end{array}$$
where we used the fact $\ln\delta^{-1}\lesssim L$ with the choice in equation 34. The constant in ≲ depends on $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $R_{\mathcal{X}}$ and $d_{\mathcal{X}}$. We further substitute the values of p and L from equation 30 into the above estimate:
$$T_{2}\lesssim L_{\Phi}^{2-d_{X}}d_{\mathcal{Y}}^{3}n^{-1}\varepsilon_{1}^{-d_{X}}\left(d_{X}^{3}(\ln\varepsilon_{1}^{-1})^{3}+(\ln\delta^{-1})^{2}\right)+\delta$$
Combining the T2 estimate above and the T1 estimate in equation 33 yields that

$$\begin{array}{c}{{T_{1}+T_{2}\lesssim\!\!d_{\mathcal{Y}}\varepsilon_{1}^{2}+L_{\Phi}^{2-d_{X}}d_{\mathcal{Y}}^{3}n^{-1}\varepsilon_{1}^{-d_{X}}\left((\ln\varepsilon_{1}^{-1})^{3}+(\ln\delta^{-1})^{2}\right)}}\\ {{+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{X}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\sigma^{2}+\delta.}}\end{array}$$
In order to balance the above error, we choose
$$\delta=n^{-1}\,,\quad\varepsilon_{1}=d_{\mathcal{Y}}^{\frac{2}{2+d_{\mathcal{X}}}}\,n^{-\frac{1}{2+d_{\mathcal{X}}}}\,.\tag{34}$$
Therefore,

$$\begin{array}{c}T_{1}+T_{2}\lesssim d_{\mathcal{Y}}^{\frac{6+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}(1+L_{\Phi}^{2-d_{\mathcal{X}}})\left((\ln\tfrac{n}{d_{\mathcal{Y}}})^{3}+(\ln n)^{2}\right)\\ +L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\sigma^{2}+n^{-1},\end{array}\tag{35}$$

where we combine the choice in equation 34 and equation 30 as

$L=\Omega(\ln(\frac{n}{d_{\cal Y}}))\,,\quad p=\Omega(d_{\cal Y}^{\frac{2-d_{\cal X}}{2+d_{\cal X}}}\,n^{\frac{d_{\cal X}}{2+d_{\cal X}}})\,,$ $K=\Omega(pL)\,,\quad\kappa=\Omega(M^{2})\,,\quad M\geq\sqrt{d_{\cal Y}}L_{E_{\cal X}^{n}}R_{\cal Y}\,.$
Here the notation Ω contains constants that depend on $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $R_{\mathcal{X}}$ and $d_{\mathcal{X}}$.

Combining equation 25 and equation 35, we have
$$\begin{array}{l}\mathbb{E}_{S}\mathbb{E}_{u\sim\gamma}\left[\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\right]\lesssim d_{\mathcal{Y}}^{\frac{6+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}(1+L_{\Phi}^{2-d_{\mathcal{X}}})\left((\ln\tfrac{n}{d_{\mathcal{Y}}})^{3}+(\ln n)^{2}\right)\\ \quad+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\mathbb{E}_{S}\mathbb{E}_{w\sim\Phi_{\#}\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(w)-w\|_{\mathcal{Y}}^{2}\right]+\sigma^{2}+n^{-1}.\end{array}$$

Proof of Theorem 2. Similarly to the proof of Theorem 1, we have
$$\mathbb{E}_{S}\mathbb{E}_{u}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\leq\mathrm{I}+\mathrm{II},$$
and

$$\mathrm{I}\leq2L_{D_{\mathcal{Y}}^{n}}^{2}\left(T_{1}+T_{2}\right),$$

where T1 and T2 are defined in equation 26. Following the same procedure in equation 27, we have
$$T_{1}\leq4\operatorname*{inf}_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\mathbb{E}_{u}\left[\|\Gamma\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]+6\mathbb{E}_{\mathcal{S}_{2}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\varepsilon_{i}\|_{\mathcal{Y}}^{2}\,.$$
To obtain an approximation of the discretized target map $\Gamma_{d}^{n}:=E_{\mathcal{Y}}^{n}\circ\Phi\circ D_{\mathcal{X}}^{n}$, we apply the following lemma for each component function of $\Gamma_{d}^{n}$.
Lemma 3 (Theorem 1.1 in Shen et al. (2019)). *Given* $f\in C([0,1]^{d})$, *for any* $L\in\mathbb{N}^{+}$, $p\in\mathbb{N}^{+}$, *there exists a function* ϕ *implemented by a ReLU FNN with width* $3^{d+3}\max\{d\lfloor p^{1/d}\rfloor,p+1\}$ *and depth* $12L+14+2d$ *such that*
$$\|f-\phi\|_{\infty}\leq19\sqrt{d}\,\omega_{f}(p^{-2/d}L^{-2/d})\,,$$

where $\omega_{f}(\cdot)$ *is the modulus of continuity.*
Applying Lemma 3 to each component $h_{i}$ of $\Gamma_{d}^{n}$, we can find a neural network $\tilde{h}_{i}\in\mathcal{F}_{\mathrm{NN}}(1,L,\tilde{p},M)$ such that

$$\|h_{i}-\tilde{h}_{i}\|_{\infty}\leq C L_{\Phi}\varepsilon_{1}\,,$$
where $L,\tilde{p}>0$ are integers such that $L\tilde{p}=\lceil\varepsilon_{1}^{-d_{\mathcal{X}}/2}\rceil$, and the constant C depends on $d_{\mathcal{X}}$. Assembling the neural networks $\tilde{h}_{i}$ together, we can find a neural network $\tilde{\Gamma}_{d}^{n}$ in $\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M)$ with $p=d_{\mathcal{Y}}\tilde{p}$, such that

$$\|\tilde{\Gamma}_{d}^{n}-\Gamma_{d}^{n}\|_{\infty}\leq C L_{\Phi}\varepsilon_{1}\,.$$
Similarly to the derivations in equation 31 and equation 32, we obtain that

$$T_{1}\lesssim L_{\Phi}^{2}d_{\mathcal{Y}}\varepsilon_{1}^{2}+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\sigma^{2}\,,\tag{36}$$

where the notation ≲ contains constants that depend on $d_{\mathcal{X}}$ and $L_{E_{\mathcal{Y}}^{n}}$. To deal with the term T2, we apply the following lemma concerning the covering number.
Lemma 4 (Lemma 10 in Liu et al. (2022)). *Under the conditions of Theorem 2, we have*

$$\mathrm{T}_{2}\leq\frac{35d_{\mathcal{Y}}R_{\mathcal{Y}}^{2}}{n}\log{\mathcal{N}}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},{\mathcal{F}}_{\mathrm{NN}},2n\right)+6\delta.$$
Combining Lemma 4 with equation 36, we derive that
$$\begin{array}{l}\mathrm{I}\leq CL_{\Phi}^{2}L_{D_{\mathcal{Y}}^{n}}^{2}d_{\mathcal{Y}}\varepsilon_{1}^{2}+16L_{D_{\mathcal{Y}}^{n}}^{2}L_{E_{\mathcal{Y}}^{n}}^{2}L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+12L_{D_{\mathcal{Y}}^{n}}^{2}\sigma^{2}\\ \quad+\frac{70L_{D_{\mathcal{Y}}^{n}}^{2}d_{\mathcal{Y}}R_{\mathcal{Y}}^{2}}{n}\log\mathcal{N}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M),2n\right)+12L_{D_{\mathcal{Y}}^{n}}^{2}\delta.\end{array}\tag{37}$$

By the definition of covering number (c.f. Definition 5 in Liu et al. (2022)), we first note that the covering number of $\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M)$ is bounded by that of $\mathcal{F}_{\mathrm{NN}}(1,L,p,M)$:

$$\mathcal{N}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M),2n\right)\leq Ce^{d_{\mathcal{Y}}}\mathcal{N}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},\mathcal{F}_{\mathrm{NN}}(1,L,p,M),2n\right).\tag{38}$$
Thus it suffices to find an estimate on the covering number of $\mathcal{F}_{\mathrm{NN}}(1,L,p,M)$. A generic bound for classes of functions is provided by the following lemma.

Lemma 5 (Theorem 12.2 of Anthony et al. (1999)). *Let* F *be a class of functions from some domain* Ω *to* [−M, M]. *Denote the pseudo-dimension of* F *by* Pdim(F). *For any* δ > 0, *we have*

$${\mathcal{N}}(\delta,F,m)\leq\left({\frac{2e M m}{\delta\mathrm{Pdim}(F)}}\right)^{\mathrm{Pdim}(F)}$$
for m > Pdim(F).

The next lemma shows that for a DNN $\mathcal{F}_{\mathrm{NN}}(1,L,p,M)$, its pseudo-dimension can be bounded by the number of network parameters.
Lemma 6 (Theorem 7 of Bartlett et al. (2019)). *For any network architecture* FNN *with* L *layers and* U *parameters, there exists a universal constant* C *such that*

$$\operatorname{Pdim}(\mathcal{F}_{\mathrm{NN}})\leq CLU\log(U).\tag{39}$$
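
Purely as an illustration of how Lemmas 5 and 6 combine (and not a statement from the paper), the following sketch evaluates the resulting log-covering-number bound with the unknown constant C set to 1; the values of L, p, M, m, and δ are arbitrary assumptions chosen only to show the scaling in the width p and depth L.

```python
import math

def log_covering_bound(L, p, M=1.0, m=10**8, delta=1e-3, C=1.0):
    U = L * p**2                       # parameter count of FNN(1, L, p, M)
    pdim = C * L * U * math.log(U)     # Lemma 6 with C = 1 (illustrative)
    # Lemma 5: N(delta, F, m) <= (2 e M m / (delta * Pdim))^Pdim, valid when m > Pdim
    return pdim * math.log(2 * math.e * M * m / (delta * pdim))

for L, p in [(4, 16), (4, 64), (8, 64)]:
    print(L, p, f"{log_covering_bound(L, p):.3e}")
```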
For the network architecture $\mathcal{F}_{\mathrm{NN}}(1,L,p,M)$, the number of parameters is bounded by $U=Lp^{2}$. We apply Lemmas 5 and 6 to bound the covering number in terms of these parameters:

$$\log\mathcal{N}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M),2n\right)\leq C_{1}d_{\mathcal{Y}}p^{2}L^{2}\log\left(p^{2}L\right)\left(\log\left(\frac{R_{\mathcal{Y}}^{2}d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}L_{\Phi}}{L^{2}p^{2}\log(Lp^{2})}\right)+\log\delta^{-1}+\log n\right),\tag{40}$$
when $2n>C_{2}p^{2}L^{2}\log(p^{2}L)$ for some universal constants $C_{1}$ and $C_{2}$. Note that p, L are integers such that $pL=\left\lceil d_{\mathcal{Y}}\varepsilon_{1}^{-d_{\mathcal{X}}/2}\right\rceil$, therefore we have
$$\log\mathcal{N}\left(\frac{\delta}{4d_{\mathcal{Y}}L_{E_{\mathcal{Y}}^{n}}R_{\mathcal{Y}}},\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M),2n\right)\lesssim d_{\mathcal{Y}}^{3}\varepsilon_{1}^{-d_{\mathcal{X}}}\log(d_{\mathcal{Y}}\varepsilon_{1}^{-1})\left(\log L_{\Phi}-\log(d_{\mathcal{Y}}\varepsilon_{1}^{-1})+\log\delta^{-1}+\log n\right)\,,\tag{41}$$

where the notation ≲ contains constants that depend on $R_{\mathcal{Y}}$, $d_{\mathcal{X}}$ and $L_{E_{\mathcal{Y}}^{n}}$.

Substituting the above covering number estimate back to equation 37 gives
$$\begin{array}{c}\mathrm{I}\lesssim L_{\Phi}^{2}d_{\mathcal{Y}}\varepsilon_{1}^{2}+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\sigma^{2}\\ \qquad+L_{\Phi}^{2}n^{-1}d_{\mathcal{Y}}^{4}\varepsilon_{1}^{-d_{\mathcal{X}}}\log(d_{\mathcal{Y}}\varepsilon_{1}^{-1})\left(\log L_{\Phi}-\log(d_{\mathcal{Y}}\varepsilon_{1}^{-1})+\log\delta^{-1}+\log n\right)+\delta,\end{array}$$

where the notation ≲ contains constants that depend on $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $R_{\mathcal{X}}$ and $d_{\mathcal{X}}$. Letting
$$\varepsilon_{1}=d_{\mathcal{Y}}^{\frac{3}{2+d_{\mathcal{X}}}}\,n^{-\frac{1}{2+d_{\mathcal{X}}}}\,,\quad\delta=n^{-1},$$

we have

$$\begin{array}{l}\mathrm{I}\lesssim L_{\Phi}^{2}d_{\mathcal{Y}}^{\frac{8+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\left(\sigma^{2}+n^{-1}\right)+L_{\Phi}^{2}\log(L_{\Phi})\,d_{\mathcal{Y}}^{\frac{8+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}\log n\\ \quad\lesssim L_{\Phi}^{2}\log(L_{\Phi})\,d_{\mathcal{Y}}^{\frac{8+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}\log n+\left(\sigma^{2}+n^{-1}\right)+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right],\end{array}\tag{42}$$
where ≲ contains constants that depend on $L_{E_{\mathcal{Y}}^{n}}$, $L_{D_{\mathcal{Y}}^{n}}$, $L_{E_{\mathcal{X}}^{n}}$, $L_{D_{\mathcal{X}}^{n}}$, $R_{\mathcal{X}}$ and $d_{\mathcal{X}}$. Combining our estimate equation 42 and equation 25, we have
$$\begin{array}{l}\mathbb{E}_{S}\mathbb{E}_{u}\|D_{\mathcal{Y}}^{n}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}^{n}(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\lesssim L_{\Phi}^{2}\log(L_{\Phi})\,d_{\mathcal{Y}}^{\frac{8+d_{\mathcal{X}}}{2+d_{\mathcal{X}}}}n^{-\frac{2}{2+d_{\mathcal{X}}}}\log n+(\sigma^{2}+n^{-1})\\ \quad+L_{\Phi}^{2}\mathbb{E}_{u}\left[\|\Pi_{\mathcal{X},d_{\mathcal{X}}}^{n}(u)-u\|_{\mathcal{X}}^{2}\right]+\mathbb{E}_{S}\mathbb{E}_{w\sim\Phi_{\#}\gamma}\left[\|\Pi_{\mathcal{Y},d_{\mathcal{Y}}}^{n}(w)-w\|_{\mathcal{Y}}^{2}\right].\end{array}$$
Proof of Theorem 3. Under Assumption 5, the target finite-dimensional map becomes $\Gamma_{d}^{n}:=E_{\mathcal{Y}}\circ\Phi\circ D_{\mathcal{X}}:\mathcal{M}\to\mathbb{R}^{d_{\mathcal{Y}}}$, which is a Lipschitz map defined on $\mathcal{M}\subset\mathbb{R}^{d_{\mathcal{X}}}$. Similar to the proof of Theorem 2, the generalization error is decomposed as follows:

$$\mathbb{E}_{S}\mathbb{E}_{u}\|D_{\mathcal{Y}}\circ\Gamma_{\mathrm{NN}}\circ E_{\mathcal{X}}(u)-\Phi(u)\|_{\mathcal{Y}}^{2}\leq T_{1}+T_{2}+\mathrm{II}\,,$$
where T1, T2 and II are defined in equation 26 and equation 25, respectively. Following the same procedure as in equation 27, we obtain that

$$T_{1}\leq4\inf_{\Gamma\in\mathcal{F}_{\mathrm{NN}}}\mathbb{E}_{u}\left[\|\Gamma\circ E_{\mathcal{X}}^{n}(u)-E_{\mathcal{Y}}^{n}\circ\Phi(u)\|_{2}^{2}\right]+6\mathbb{E}_{\mathcal{S}_{2}}\frac{1}{n}\sum_{i=n+1}^{2n}\|\varepsilon_{i}\|_{\mathcal{Y}}^{2}\,.\tag{43}$$
We then replace Lemma 3 by the following modified version of Lemma 17 from Liu et al. (2022) to obtain an FNN approximation of $\Gamma_{d}^{n}$.
Lemma 7 (Lemma 17 in Liu et al. (2022)). *Suppose Assumption 5 holds, and assume that* $\|a\|_{\infty}\leq B$ *for all* $a\in\mathcal{M}$. *For any Lipschitz function* f *with Lipschitz constant* R *on* $\mathcal{M}$, *and any integers* $\tilde{L},\tilde{p}>0$, *there exists* $\tilde{f}\in\mathcal{F}_{\mathrm{NN}}(1,L,p,M)$ *such that*
$$\|\tilde{f}-f\|_{\infty}\leq CR\tilde{L}^{-\frac{2}{d_{0}}}\tilde{p}^{-\frac{2}{d_{0}}}\,,$$

where the constant C solely depends on $d_{0}$, B, τ and the surface area of $\mathcal{M}$. The parameters of $\mathcal{F}_{\mathrm{NN}}(1,L,p,M)$ are chosen as follows: $L=\Omega(\tilde{L}\log\tilde{L})$, $p=\Omega(d_{\mathcal{X}}\tilde{p}\log\tilde{p})$, $M=R$.

The constants in Ω depend on d0, B, τ *and the surface area of* M.
Applying the above lemma to each component of $E_{\mathcal{Y}}\circ\Phi\circ D_{\mathcal{X}}$ and assembling all individual neural networks together, we obtain a neural network $\tilde{\Gamma}_{d}^{n}\in\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L,p,M)$ such that
$$\|\tilde{\Gamma}_{d}^{n}-\Gamma_{d}^{n}\|_{\infty}\lesssim L_{\Phi}\varepsilon\,,$$

Here the parameters are $L=\Omega(\tilde{L}\log\tilde{L})$, $p=\Omega(d_{\mathcal{X}}d_{\mathcal{Y}}\tilde{p}\log\tilde{p})$, $M=\Omega(L_{\Phi})$ with $\tilde{L}\tilde{p}=\Omega(\varepsilon)$. The notations ≲ and Ω contain constants that solely depend on $d_{0}$, $R_{\mathcal{X}}$, $L_{E_{\mathcal{X}}}$, τ and the surface area of $\mathcal{M}$. The rest of the proof follows the same procedure as in the proof of Theorem 2.

Proof of Theorem 4. The proof is similar to that of Theorem 2 with a slight change of the neural network construction, so we only provide a brief proof below.
When Assumption 6 holds, the target map Φ : X → Y can be decomposed as follows:

$$\mathcal{X}\xrightarrow{E_{\mathcal{X}}^{n}}\mathbb{R}^{d_{\mathcal{X}}}\xrightarrow{(V_{i})_{i}}\mathbb{R}^{d_{0}}\xrightarrow{(g_{i})_{i}}\mathbb{R}^{d_{\mathcal{Y}}}\xrightarrow{D_{\mathcal{Y}}^{n}}\mathcal{Y}\,.\tag{44}$$
Notice that each route contains a composition of a linear function $V_{i}$ and a nonlinear map $g_{i}:\mathbb{R}^{d_{0}}\to\mathbb{R}$.
The nonlinear function $g_{i}$ can be approximated by a neural network whose size is independent of $d_{\mathcal{X}}$, while the linear function $V_{i}$ can be learned through a linear layer of the neural network. Consequently, the function $h_{i}:=g_{i}\circ V_{i}$ can be approximated by a neural network $\tilde{h}_{i}\in\mathcal{F}_{\mathrm{NN}}(1,L+1,\tilde{p},M)$ such that

$$\|h_{i}-\tilde{h}_{i}\|_{\infty}\leq CL_{\Phi}\varepsilon_{1}\,,$$
where $L,\tilde{p}>0$ are integers with $L\tilde{p}=\lceil\varepsilon_{1}^{-d_{0}/2}\rceil$, and the constant C depends on $d_{0}$. Assembling the neural networks $\tilde{h}_{i}$ together, we can find a neural network $\tilde{\Gamma}_{d}^{n}$ in $\mathcal{F}_{\mathrm{NN}}(d_{\mathcal{Y}},L+1,p,M)$ with $p=d_{\mathcal{Y}}\tilde{p}$, such that
$$\|\tilde{\Gamma}_{d}^{n}-\Gamma_{d}^{n}\|_{\infty}\leq CL_{\Phi}\varepsilon_{1}\,.$$

The rest of the proof is the same as in the proof of Theorem 2.

Proof of Theorem 5. The proof is very similar to that of Theorem 4. Under Assumption 7, the target map Φ has the following structure:
$$\mathcal{X}\xrightarrow{E_{\mathcal{X}}^{n}}\mathbb{R}^{d_{\mathcal{X}}}\xrightarrow{G^{1}}\mathbb{R}^{l_{1}}\xrightarrow{G^{2}}\mathbb{R}^{l_{2}}\longrightarrow\cdots\xrightarrow{G^{k}}\mathbb{R}^{d_{\mathcal{Y}}}\xrightarrow{D_{\mathcal{Y}}^{n}}\mathcal{Y}\,,\tag{45}$$

where the abbreviated notation · · · denotes the blocks $G^{i}$, $i=3,\dots,k$. The neural network construction for each block $G^{i}$ is the same as in the proof of Theorem 4. Specifically, there exists a neural network $H^{i}\in\mathcal{F}_{\mathrm{NN}}(l_{i},L+1,l_{i}\tilde{p},M)$ such that
$$\|G^{i}-H^{i}\|_{\infty}\leq CL_{G_{i}}\varepsilon_{1}\,,\quad\text{for all }i=1,\ldots,k.$$

Concatenating all neural networks $H^{i}$ together, we obtain the following approximation

$$\|G^{k}\circ\cdots\circ G^{1}-H^{k}\circ\cdots\circ H^{1}\|_{\infty}\leq CL_{\Phi}\varepsilon_{1}\,.$$
The rest of the proof is the same as in the proof of Theorem 2.

## A.2 Lipschitz Constant Of Parameter To Solution Map For Parametric Elliptic Equation
The solution u to equation 24 is unique for any given boundary condition f, so we can define the solution map:

$$S_{a}:f\in H^{1}\mapsto u\in H^{3/2}.$$

To obtain an estimate of the Lipschitz constant of the parameter-to-solution map Φ, we compute the Fréchet derivative $DS_{a}[\delta]$ with respect to a and derive an upper bound of the Lipschitz constant. It can be shown that the Fréchet derivative is
$$DS_{a}[\delta]:f\mapsto v_{\delta},$$

where $v_{\delta}$ satisfies the following equation
$$\begin{cases}-\operatorname{div}(a(x)\nabla_{x}v_{\delta}(x))=\operatorname{div}(\delta\nabla u)\,,&\mathrm{in~}\Omega,\\ v_{\delta}=0\,,&\mathrm{on~}\partial\Omega.\end{cases}$$

The above claim can be proved by using a standard linearization argument and adjoint equation methods.
Using classical elliptic regularity results, we derive that

$$\|v_{\delta}\|_{H^{3/2}}\leq C\|\operatorname{div}(\delta\nabla u)\|_{H^{-1/2}}\leq C\|\delta\|_{L^{\infty}}\|u\|_{H^{3/2}}\leq C\|\delta\|_{L^{\infty}}\|f\|_{H^{1}},$$
where C solely depends on the ambient dimension d = 2 and on α, β. Therefore, the Lipschitz constant is $C\|f\|_{H^{1}}$.