Optimal rates of approximation by shallow ReLU^k neural networks and applications to nonparametric regression

We study the approximation capacity of some variation spaces corresponding to shallow ReLU^k neural networks. It is shown that sufficiently smooth functions are contained in these spaces with finite variation norms. For functions with less smoothness, the approximation rates in terms of the variation norm are established. Using these results, we are able to prove the optimal approximation rates in terms of the number of neurons for shallow ReLU^k neural networks. It is also shown how these results can be used to derive approximation bounds for deep neural networks and convolutional neural networks (CNNs). As applications, we study convergence rates for nonparametric regression using three ReLU neural network models: shallow neural network, over-parameterized neural network, and CNN. In particular, we show that shallow neural networks can achieve the minimax optimal rates for learning Hölder functions, which complements recent results for deep neural networks. It is also proven that over-parameterized (deep or shallow) neural networks can achieve nearly optimal rates for nonparametric regression.


Introduction
Neural networks generate very popular function classes used in machine learning algorithms [2,28]. The fundamental building blocks of neural networks are ridge functions (also called neurons) of the form x ∈ R^d ↦ ρ((x^⊤, 1)v), where ρ : R → R is a continuous activation function and v ∈ R^{d+1} is a trainable parameter. It is well known that a shallow neural network with non-polynomial activation is universal, in the sense that it can approximate any continuous function on any compact set to any desired accuracy when the number of neurons N is sufficiently large [11,23,48]. The approximation and statistical properties of neural networks with different architectures have also been widely studied in the literature [64,67,6,52], especially when ρ is a sigmoidal activation or ρ is the ReLU^k function σ_k(t) = max{0, t}^k, the k-th power of the rectified linear unit (ReLU), with k ∈ N_0 := N ∪ {0}.
The main focus of this paper is rates of approximation by neural networks. For classical smooth function classes, such as Hölder functions, Mhaskar [39] (see also [48, Theorem 6.8]) presented approximation rates for shallow neural networks when the activation function ρ ∈ C^∞(Ω) is not a polynomial on some open interval Ω (ReLU^k does not satisfy this condition). It is known that the rates obtained by Mhaskar are optimal if the network weights are required to be continuous functions of the target function. Recently, optimal rates of approximation have also been established for deep ReLU neural networks [64,65,54,31], even without the continuity requirement on the network weights. All these approximation rates are obtained by using the idea that one can construct neural networks to approximate polynomials efficiently. There is another line of work [4,34,25,56,57] studying the approximation rates for functions of certain integral forms (such as (1.2)) by using a random sampling argument due to Maurey [49]. In particular, Barron [4] derived dimension-independent approximation rates for sigmoid-type activations and functions h whose Fourier transform ĥ satisfies ∫_{R^d} |ω| |ĥ(ω)| dω < ∞. This result has been improved and generalized to the ReLU activation in recent articles [25,56,57].
In this paper, we continue the study of these two lines of approximation theories for neural networks (i.e. the constructive approximation of smooth functions and the random approximation of integral representations). Our main result shows how well integral representations corresponding to ReLU^k neural networks can approximate smooth functions. By combining this result with the random approximation theory of integral forms, we are able to establish the optimal rates of approximation for shallow ReLU^k neural networks. Specifically, we consider the following function class, defined on the unit ball B^d of R^d and induced by vectors on the unit sphere S^d of R^{d+1}:

F_{σ_k}(M) := { f(x) = ∫_{S^d} σ_k((x^⊤, 1)v) dμ(v) : ‖μ‖ ≤ M },   (1.2)

which can be regarded as a shallow ReLU^k neural network with infinite width [3]. The restriction on the total variation ‖μ‖ := |μ|(S^d) ≤ M gives a constraint on the size of the weights in the network. We study how well F_{σ_k}(M) approximates the unit ball of the Hölder class H^α with smoothness index α > 0 as M → ∞. Roughly speaking, our main theorem shows that, if α > (d + 2k + 1)/2, then H^α ⊆ F_{σ_k}(M) for some constant M depending on k, d, α; and if α < (d + 2k + 1)/2, we obtain the approximation bound

sup_{h∈H^α} inf_{f∈F_{σ_k}(M)} ‖h − f‖_{C(B^d)} ≲ M^{−2α/(d+2k+1−2α)},

where, for two quantities X and Y, X ≲ Y (or Y ≳ X) denotes the statement that X ≤ CY for some constant C > 0 (we will also denote X ≍ Y when X ≲ Y ≲ X). In other words, sufficiently smooth functions are always contained in the shallow neural network space F_{σ_k}(M), and, for less smooth functions, we can characterize the approximation error by the variation norm. Furthermore, combining our result with the random approximation bounds from [3,57,55], we are able to prove that shallow ReLU^k neural networks of the form (1.1) achieve the optimal approximation rate O(N^{−α/d}) for H^α with α < (d + 2k + 1)/2, which generalizes the result of Mhaskar [39] to the ReLU^k activation.
In addition to shallow neural networks, we can also apply our results to derive approximation bounds for multi-layer neural networks and convolutional neural networks (CNNs) when k = 1 (ReLU activation σ := σ_1). These approximation bounds can then be used to study the performance of machine learning algorithms based on neural networks [2]. Here, we illustrate the idea by studying the nonparametric regression problem. The goal of this problem is to learn a function h from a hypothesis space H from its noisy samples Y_i = h(X_i) + η_i, where X_i is sampled from an unknown probability distribution μ and η_i is Gaussian noise. One popular algorithm for solving this problem is the empirical least-squares minimization

argmin_{f∈F_n} (1/n) Σ_{i=1}^n (f(X_i) − Y_i)²,

where F_n is an appropriately chosen function class. For instance, in deep learning, F_n is parameterized by deep neural networks and one solves the minimization by (stochastic) gradient descent methods. Assuming that we can compute a minimizer f*_n ∈ F_n, the performance of the algorithm is often measured by the square loss ‖f*_n − h‖²_{L²(μ)}. A fundamental question in learning theory is to determine the convergence rate of the error ‖f*_n − h‖²_{L²(μ)} → 0 as the sample size n → ∞. The error can be decomposed into two components: approximation error and generalization error (also called estimation error). For neural network models F_n, our results provide bounds for the approximation errors, while the generalization errors can be bounded by the complexity of the models [53,40]. We study the cases H = H^α with α < (d + 3)/2 or H = F_σ(1) for three ReLU neural network models: shallow neural network, over-parameterized neural network, and CNN. The models and our contributions are summarized as follows:

(1) Shallow ReLU neural network F_σ(N, M), where N is the number of neurons and M is a bound for the variation norm that measures the size of the weights. We prove optimal approximation rates (in terms of N) for this model. It is also shown that this model can achieve the optimal convergence rates for learning H^α and F_σ(1), which complements the recent results for deep neural networks [52,27].
(2) Over-parameterized (deep or shallow) ReLU neural network NN(W, L, M) studied in [24], where W, L are the width and depth respectively, and M is a constraint on the weight matrices. For fixed depth L, the generalization error for this model can be controlled by M [24]. When H = H^α, we characterize the approximation error by M, and allow the width W to be arbitrarily large so that the model can be over-parameterized (the number of parameters is larger than the number of samples). When H = F_σ(1), we can simply increase the width to reduce the approximation error, so that the model can also be over-parameterized. Our result shows that this model can achieve nearly optimal convergence rates for learning H^α and F_σ(1). Both the approximation and convergence rates improve the results of [24].
(3) Sparse convolutional ReLU neural network CNN(s, L) introduced by [67], where L is the depth and s ≥ 2 is a fixed integer that controls the filter length. This model is shown to be universal for approximation [67] and universally consistent for regression [30]. We improve the approximation bound in [67] and give new convergence rates of this model for learning H^α and F_σ(1).
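As a toy illustration of the empirical least-squares minimization described above for a shallow ReLU network, the following sketch fixes random directions v_i on the sphere and fits only the outer coefficients by least squares. This is a random-feature simplification of the ERM (the actual estimator also optimizes the directions), and all names in the code are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(t):
    return np.maximum(t, 0.0)

def features(X, V):
    # Ridge features sigma((x^T, 1) v) for each direction v on S^d.
    Xt = np.hstack([X, np.ones((X.shape[0], 1))])  # append the constant coordinate
    return relu(Xt @ V.T)

d, n, N = 2, 100, 200                          # input dim, samples, neurons (N > n)
X = rng.uniform(-1, 1, (n, d)) / np.sqrt(d)    # points inside the unit ball B^d
Y = np.sin(np.pi * X[:, 0]) * X[:, 1]          # smooth target, noiseless for simplicity

V = rng.normal(size=(N, d + 1))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # directions on the sphere S^d

Phi = features(X, V)
a, *_ = np.linalg.lstsq(Phi, Y, rcond=None)    # outer weights by least squares
train_mse = np.mean((Phi @ a - Y) ** 2)
```

With N > n the model is over-parameterized, and the least-squares residual is generically zero; the variation norm of the fitted network is Σ_i |a_i|.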

[Table 1.1: Approximation rates and convergence rates of nonparametric regression for three neural network models, ignoring logarithmic factors. Columns: Approximation; Nonparametric regression.]
The approximation rates and convergence rates of nonparametric regression for these models are summarized in Table 1.1, where we use the notation a ∨ b := max{a, b}.
The rest of the paper is organized as follows. In Section 2, we present our approximation results for shallow neural networks. Section 3 gives a proof of our main theorem. In Section 4, we apply our approximation results to study these neural network models and derive convergence rates for nonparametric regression using these models. Section 5 concludes this paper with a discussion of possible future directions of research.

Approximation rates for shallow neural networks
Let us begin with some notation for function classes. Let B^d := {x ∈ R^d : ‖x‖_2 ≤ 1} and S^d := {v ∈ R^{d+1} : ‖v‖_2 = 1} be the unit ball of R^d and the unit sphere of R^{d+1}. We are interested in functions of the integral form

f(x) = ∫_{S^d} σ_k((x^⊤, 1)v) dμ(v),  x ∈ B^d,   (2.1)

where μ is a signed Radon measure on S^d with finite total variation ‖μ‖ := |μ|(S^d) < ∞, and σ_k(t) := max{t, 0}^k with k ∈ N_0 := N ∪ {0} is the ReLU^k function (when k = 0, σ_0 is the Heaviside function). For simplicity, we will also denote the ReLU function by σ := σ_1. The variation norm γ(f) of f is the infimum of ‖μ‖ over all decompositions of f as (2.1) [3]. By the compactness of S^d, the infimum can be attained by some signed measure μ. We denote by F_{σ_k}(M) the function class that contains all functions f of the form (2.1) whose variation norm satisfies γ(f) ≤ M; see (1.2). The class F_{σ_k}(M) can be thought of as an infinitely wide neural network with a constraint on its weights. The variation spaces corresponding to shallow neural networks have been studied by many researchers. We refer the reader to [51,44,8,45,46,55,56,57,58] for several other definitions and characterizations of these spaces.
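A function of the form (2.1) with a discrete measure μ = Σ_i a_i δ_{v_i} is just a finite-width network, and the total variation ‖μ‖ = Σ_i |a_i| bounds the variation norm γ(f). A minimal numerical sketch (notation and helper names ours):

```python
import numpy as np

def shallow_relu_k(X, V, a, k=1):
    """Evaluate f(x) = sum_i a_i * sigma_k((x^T, 1) v_i) at the rows of X (k >= 1)."""
    Xt = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.maximum(Xt @ V.T, 0.0) ** k @ a

rng = np.random.default_rng(1)
d, N = 3, 5
V = rng.normal(size=(N, d + 1))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # atoms v_i on the sphere S^d
a = rng.normal(size=N)                         # signed weights a_i

X = rng.uniform(-1, 1, (10, d)) / np.sqrt(d)   # points in the unit ball B^d
f_vals = shallow_relu_k(X, V, a, k=1)
variation = np.abs(a).sum()                    # ||mu|| for mu = sum_i a_i delta_{v_i}
```

Since |(x^⊤, 1)v| ≤ √2 on B^d × S^d, a function realized this way satisfies |f(x)| ≤ (√2)^k ‖μ‖, which the sketch verifies for k = 1.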
We will also need the notion of classical smoothness of functions on Euclidean space. Given a smoothness index α > 0, we write α = r + β, where r ∈ N_0 and β ∈ (0, 1]. Let C^{r,β}(R^d) be the Hölder space with the norm

‖f‖_{C^{r,β}(R^d)} := max{ max_{‖s‖_1 ≤ r} ‖∂^s f‖_{L^∞(R^d)}, max_{‖s‖_1 = r} sup_{x≠y} |∂^s f(x) − ∂^s f(y)| / ‖x − y‖^β },

where s = (s_1, …, s_d) ∈ N_0^d is a multi-index and ∂^s denotes the corresponding partial derivative. Here we use ‖·‖_{L^∞} to denote the supremum norm, since we mainly consider continuous functions. We write C^{r,β}(B^d) for the Banach space of all restrictions to B^d of functions in C^{r,β}(R^d); the norm is given by ‖f‖_{C^{r,β}(B^d)} := inf{ ‖F‖_{C^{r,β}(R^d)} : F|_{B^d} = f }. For convenience, we will denote the unit ball of C^{r,β}(B^d) by H^α. Note that, for α = 1, H^α is a class of Lipschitz continuous functions. Due to the universality of shallow neural networks [48], F_{σ_k}(M) can approximate any continuous function on B^d if M is sufficiently large. Our main theorem estimates the rate of this approximation for the Hölder class.
Theorem 2.1. Let k ∈ N_0 and α > 0. (1) If α > (d + 2k + 1)/2, then H^α ⊆ F_{σ_k}(M) for some constant M depending only on k, d, α. (2) If α < (d + 2k + 1)/2, then for M ≥ 1,

sup_{h∈H^α} inf_{f∈F_{σ_k}(M)} ‖h − f‖_{C(B^d)} ≲ M^{−2α/(d+2k+1−2α)},

where the implied constants only depend on k, d, α. (In the critical case α = (d + 2k + 1)/2, a similar bound holds up to an additional logarithmic factor; see Remark 3.4.)
The proof of Theorem 2.1 is deferred to the next section. Our proof uses similar ideas as [3, Proposition 3], which obtained the same approximation rate for α = 1 (with an additional logarithmic factor). The conclusion is more complicated for the critical value α = (d + 2k + 1)/2. We think this is due to the proof technique and conjecture that H^α ⊆ F_{σ_k}(M) for all α ≥ (d + 2k + 1)/2; see Remark 3.4. Nevertheless, in practical applications of machine learning, the dimension d is large and it is reasonable to expect that α < (d + 2k + 1)/2.
In order to apply Theorem 2.1 to shallow neural networks with finitely many neurons, we can approximate F_{σ_k}(M) by the subclass F_{σ_k}(N, M), where we restrict the measure μ to be a discrete one supported on at most N points. The next proposition shows that any function in F_{σ_k}(1) can be approximated by functions in this subclass at the rate O(N^{−1/2}).

Proof. We decompose μ = μ_+ − μ_−, where μ_+ and μ_− are the positive and negative parts of μ, and treat the corresponding parts separately: if the claim holds for f_+, f_− ∈ F_{σ_k}(1), then it holds for f ∈ F_{σ_k}(1). Hence, without loss of generality, we can assume μ is a probability measure. We are going to approximate f by uniform laws of large numbers. Let {v_i}_{i=1}^N be N i.i.d. samples from μ. By a symmetrization argument (see [60, Theorem 4.10] for example), we can bound the expected approximation error by a Rademacher complexity [7]:

E ‖f − (1/N) Σ_{i=1}^N σ_k((·^⊤, 1)v_i)‖_{C(B^d)} ≤ (2/N) E sup_{x∈B^d} | Σ_{i=1}^N ε_i σ_k((x^⊤, 1)v_i) |,

where (ε_1, …, ε_N) is an i.i.d. sequence of Rademacher random variables. For k ∈ N, the Lipschitz constant of σ_k on [−√2, √2] is k 2^{(k−1)/2}. By the contraction property of Rademacher complexity [29, Corollary 3.17], this yields the O(N^{−1/2}) bound. For k = 0, the class {v ↦ σ_0((x^⊤, 1)v) : x ∈ B^d} consists of indicators of half-spaces, whose VC dimension is O(d), and the same bound follows. For k ∈ N, by the compactness of S^d and Prokhorov's theorem, there exists a weakly convergent subsequence μ_{n_i} → μ. In particular, ‖μ‖ ≤ 1 and, for any x ∈ B^d, lim_{i→∞} f_{n_i}(x) = f(x). For k = 0, we use the idea from [58, Lemma 3]. We can view f_n as a Bochner integral.
By Prokhorov's theorem, there exists a weakly convergent subsequence, by viewing the Bochner integral as an integral over S^d. If we choose a countable dense sequence {g_j}_{j=1}^∞ of L²(B^d), then the weak convergence implies that lim_{i→∞} ⟨f_{n_i}, g_j⟩ = ⟨f, g_j⟩ for all j. The strong convergence then follows.

The proof of Proposition 2.2 actually shows the approximation rate O(N^{−1/2}) for the subclass F_{σ_k}(N, 1). This rate can be improved if we take into account the smoothness of the activation function. For the ReLU activation, Bach [3, Proposition 1] showed that approximating f ∈ F_σ(1) by neural networks with finitely many neurons is essentially equivalent to the approximation of a zonoid by zonotopes [10,37]. Using this equivalence, he obtained the rate O(N^{−1/2−3/(2d)}) for ReLU neural networks. A similar idea was applied to the Heaviside activation in [32, Theorem 4], which proved the rate O(N^{−1/2−1/(2d)}) for that activation function. For ReLU^k neural networks, the general approximation rate O(N^{−1/2−(2k+1)/(2d)}) was established in the L² norm by [57], which also showed that this rate is sharp. The recent work [55] further proved that this rate indeed holds in the uniform norm. We summarize their results in the following lemma.
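The sampling argument behind Proposition 2.2 can be illustrated numerically: draw N i.i.d. directions from a probability measure μ and average. A sketch with a two-atom μ, for which the target f is computable exactly (the setup is ours, chosen only so that the error is easy to evaluate):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(t):  # ReLU
    return np.maximum(t, 0.0)

d = 2
v1 = np.array([1.0, 0.0, 0.0])  # two atoms on S^d
v2 = np.array([0.0, 1.0, 0.0])
# mu = 0.5 delta_{v1} + 0.5 delta_{v2}, a probability measure
X = rng.uniform(-1, 1, (50, d)) / np.sqrt(d)
Xt = np.hstack([X, np.ones((len(X), 1))])
f_true = 0.5 * sigma(Xt @ v1) + 0.5 * sigma(Xt @ v2)

def mc_approx(N):
    # f_N(x) = (1/N) sum_i sigma((x^T,1) v_i), with v_i ~ mu i.i.d.
    picks = rng.integers(0, 2, N)
    p1 = np.mean(picks == 0)        # empirical weight of atom v1
    return p1 * sigma(Xt @ v1) + (1 - p1) * sigma(Xt @ v2)

err = np.max(np.abs(mc_approx(4000) - f_true))  # sup-norm error, O(N^{-1/2}) in expectation
```

Here the error reduces to |p̂_1 − 1/2| times a bounded factor, so it decays at the Monte Carlo rate N^{−1/2}, matching the rate in the proof of Proposition 2.2.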
Combining Theorem 2.1 and Lemma 2.3, we can derive the rate of approximation of the Hölder class H^α by the shallow neural network class F_{σ_k}(N, M). Recall that we use the notation a ∨ b := max{a, b}.
Proof. We only present the proof for part (3), since the other parts can be derived similarly. If α < (d + 2k + 1)/2, then by Theorem 2.1, for any h ∈ H^α, there exists f ∈ F_{σ_k}(M) with ‖h − f‖_{C(B^d)} ≲ M^{−2α/(d+2k+1−2α)}, and by Lemma 2.3, f can in turn be approximated by some g ∈ F_{σ_k}(N, M) with error ≲ M N^{−1/2−(2k+1)/(2d)}. Combining the two bounds gives the desired result.
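The balancing step in this proof can be made explicit (a sketch in our notation): the two error terms are equated by an appropriate choice of M, which yields the rate N^{−α/d}.

```latex
\inf_{g \in F_{\sigma_k}(N,M)} \|h-g\|_{C(\mathbb{B}^d)}
\lesssim M^{-\frac{2\alpha}{d+2k+1-2\alpha}} + M\, N^{-\frac{d+2k+1}{2d}},
\qquad
M \asymp N^{\frac{d+2k+1-2\alpha}{2d}}
\;\Longrightarrow\;
\inf_{g} \|h-g\|_{C(\mathbb{B}^d)} \lesssim N^{-\alpha/d}.
```

Indeed, with this choice of M both terms equal N^{−α/d} up to constants, which is the rate stated in Corollary 2.4.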
We make some comments on the approximation rate for H^α with α < (d + 2k + 1)/2. As shown by [48, Corollary 6.10], the rate O(N^{−α/d}) in the L² norm is already known for α = 1, 2, …, (d + 2k + 1)/2. For the ReLU activation, a comparable rate in the supremum norm was obtained in the recent paper [36]. Corollary 2.4 shows that the rate O(N^{−α/d}) holds in the supremum norm for all ReLU^k activations. More importantly, we also provide an explicit control on the network weights to ensure that this rate can be achieved, which is useful for estimating generalization errors (see Section 4.2). It is well known that the optimal approximation rate for H^α is O(N^{−α/d}) if we approximate h ∈ H^α by a function class with N parameters that depend continuously on the target function h [13]. However, this result is not directly applicable to neural networks, because we have no guarantee that the parameters in the network depend continuously on the target function (in fact, this is not true for some constructions [65,31,63]). Nevertheless, one can still prove that the rate O(N^{−α/d}) is optimal for shallow ReLU^k neural networks by arguments based on pseudo-dimension, as done in [64,31,63].
We describe the idea of proving approximation lower bounds through pseudo-dimension by reviewing the result of Maiorov and Ratsaby [33] (see also [1]). Recall that the pseudo-dimension Pdim(F) of a real-valued function class F defined on B^d is the largest integer n for which there exist points x_1, …, x_n ∈ B^d and constants c_1, …, c_n ∈ R such that all 2^n sign patterns of (f(x_1) − c_1, …, f(x_n) − c_n) are realized by functions f ∈ F. Maiorov and Ratsaby [33] introduced a nonlinear n-width defined as

ρ_n(H^α)_p := inf_{F_n} sup_{h∈H^α} inf_{f∈F_n} ‖h − f‖_{L^p(B^d)},

where p ∈ [1, ∞] and F_n runs over all the classes in L^p(B^d) with Pdim(F_n) ≤ n. They constructed a well-separated subclass of H^α such that, if a function class F can approximate this subclass with small error, then Pdim(F) must be large. In other words, the approximation error of any class F_n with Pdim(F_n) ≤ n can be lower bounded. Consequently, they proved that ρ_n(H^α)_p ≳ n^{−α/d}. By [6], we can upper bound the pseudo-dimension of shallow ReLU^k neural networks with N neurons by O(N log N), which shows that the rate O(N^{−α/d}) in Corollary 2.4 is optimal in the L^p norm (ignoring logarithmic factors). This also implies the optimality of Theorem 2.1 (otherwise, the proof of Corollary 2.4 would give a rate better than O(N^{−α/d})).
Proof of Theorem 2.1

Following the idea of [3], we first transfer the problem to approximation on spheres. Let us begin with a brief review of harmonic analysis on spheres [12]. For n ∈ N_0, the spherical harmonic space Y_n of degree n is the linear space that contains the restrictions to the sphere S^d of real harmonic homogeneous polynomials of degree n on R^{d+1}. Spherical harmonics are eigenfunctions of the Laplace–Beltrami operator: −∆Y = n(n + d − 1)Y for Y ∈ Y_n. Spherical harmonics of different degrees are orthogonal with respect to the inner product ⟨f, g⟩ := ∫_{S^d} f g dτ_d, where τ_d is the surface area measure of S^d (normalized by the surface area ω_d := 2π^{(d+1)/2}/Γ((d + 1)/2), so that τ_d(S^d) = 1).
Let P_n : L²(S^d) → Y_n denote the orthogonal projection operator. For any orthonormal basis {Y_{n,j} : 1 ≤ j ≤ N(d, n)} of Y_n, the addition formula states that

Σ_{j=1}^{N(d,n)} Y_{n,j}(x) Y_{n,j}(y) = N(d, n) P_n(x^⊤ y),   (3.1)

where P_n is the Gegenbauer polynomial with normalization P_n(1) = 1. Applying the Cauchy–Schwarz inequality to (3.1), we get |P_n(t)| ≤ 1. For n ≠ 0, P_n(t) is odd (even) if n is odd (even). Note that, for d = 1 and n ≠ 0, N(d, n) = 2 and P_n(t) is the Chebyshev polynomial such that P_n(cos θ) = cos(nθ). We can write the projection P_n as

P_n f(x) = N(d, n) ∫_{S^d} f(y) P_n(x^⊤ y) dτ_d(y).

This motivates the following definition of a convolution operator on the sphere.
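The normalization P_n(1) = 1, the bound |P_n(t)| ≤ 1, and the orthogonality of different degrees can be checked numerically; for d = 2 the normalized Gegenbauer polynomials are exactly the Legendre polynomials (a quick numpy check, setup ours):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def P(n, t):
    # Legendre polynomial P_n: the normalized Gegenbauer polynomial for d = 2,
    # satisfying P_n(1) = 1.
    return leg.legval(t, [0.0] * n + [1.0])

t = np.linspace(-1.0, 1.0, 2001)
max_abs = {n: np.max(np.abs(P(n, t))) for n in range(1, 8)}  # each <= 1 on [-1, 1]

# Orthogonality of degrees 3 and 5 via Gauss-Legendre quadrature
# (exact for polynomials of this degree):
nodes, weights = leg.leggauss(20)
inner = np.sum(weights * P(3, nodes) * P(5, nodes))
```

The parity claim also holds here: P_n(−t) = (−1)^n P_n(t), consistent with the statement that P_n is odd (even) for odd (even) n.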

Definition 3.1 (Convolution). Let ϱ be the probability distribution with density
The convolution on the sphere satisfies Young's inequality [12, Theorem 2.1.2]: for p, q, r ≥ 1 with 1/r = 1/p + 1/q − 1,

‖f ∗ g‖_{L^r(S^d)} ≤ ‖f‖_{L^p(S^d)} ‖g‖_{L^q(S^d)},

where the norm is the uniform one when r = ∞. Observe that the projection P_n f is the convolution of f with N(d, n)P_n. For g ∈ L¹(S^d), let ĝ(n) denote the Fourier coefficient of g with respect to the Gegenbauer polynomials. By the Funk–Hecke formula, one can show that [12, Theorem 2.1.3]

P_n(f ∗ g) = ĝ(n) P_n f.   (3.2)

This identity is analogous to the Fourier transform of an ordinary convolution.
One of the key steps in our proof of Theorem 2.1 is the observation that functions of the form (2.1) can be related to convolutions with σ_k on the sphere. We summarize the result in the following proposition.
Next, we introduce the smoothness of functions on the sphere. For 0 ≤ θ ≤ π, the translation operator T_θ, also called the spherical mean operator, averages a function over the set {u cos θ + v sin θ : v ∈ S_u^⊥}, where S_u^⊥ := {v ∈ S^d : u^⊤v = 0} is the equator in S^d with respect to u (hence S_u^⊥ is isomorphic to the sphere S^{d−1}). We note that the translation operator satisfies P_n(T_θ f) = P_n(cos θ) P_n(f). For α > 0 and 0 < θ < π, we define the α-th order difference operator

∆_θ^α := (I − T_θ)^{α/2},

interpreted in a distributional sense. For f ∈ L^p(S^d) (with f ∈ C(S^d) when p = ∞), the α-th order modulus of smoothness is defined by

ω_α(f, t)_p := sup_{0<θ≤t} ‖∆_θ^α f‖_{L^p(S^d)}.

For even integers α = 2s, one can also use combinations of T_{jθ} and obtain an equivalent definition [50,15]. Another way to characterize the smoothness is through K-functionals. We first introduce the fractional Sobolev space induced by the Laplace–Beltrami operator. We say a function f ∈ L^p(S^d) belongs to the Sobolev space W^{α,p}(S^d) if there exists a function in L^p(S^d), which will be denoted by (−∆)^{α/2} f, such that P_n((−∆)^{α/2} f) = (n(n + d − 1))^{α/2} P_n f for all n, where we assume f, (−∆)^{α/2} f ∈ C(S^d) for p = ∞. Then we can define the α-th K-functional of f ∈ L^p(S^d) as

K_α(f, t)_p := inf_{g∈W^{α,p}(S^d)} { ‖f − g‖_{L^p(S^d)} + t^α ‖(−∆)^{α/2} g‖_{L^p(S^d)} }.

It can be shown [12, Theorem 10.4.1] that the moduli of smoothness and the K-functional are equivalent: ω_α(f, t)_p ≍ K_α(f, t)_p. To prove Theorem 2.1, we denote by G_{σ_k}(M) the analogous class of spherical convolutions with σ_k under a total-variation constraint indexed by M. The operator S_k satisfies the following properties, where s* ∈ N is the smallest integer such that α ≤ 2s* and C is a constant independent of h; furthermore, h̃ can be chosen to be odd or even.

Proof.
Hence, S_k g ∈ F_{σ_k}(M) by definition.
(2) Given h ∈ H^α, for any u = (u_1, …, u_{d+1})^⊤ ∈ Ω, we define h̃(u) := u_{d+1}^k h(u_{d+1}^{−1} u′), where u′ := (u_1, …, u_d)^⊤. It is easy to check that S_k h̃ = h. Note that h̃ on Ω is completely determined by the function values of h, and the smoothness of h̃ on Ω can be controlled by the smoothness of h. We can extend h̃ to R^{d+1} so that ‖h̃‖_{C^{r,β}(R^{d+1})} ≤ C_0 for some constant C_0 independent of h, by using a (refined version of) Whitney's extension theorem [17,18,19]. It remains to show that ω_{2s*}(h̃, t)_∞ ≲ t^α.
By the equivalence (3.3), it suffices to bound the iterated differences. Next, we estimate sup_{0<θ≤t} |H(u, v, θ)| for small t > 0 and fixed u, v. One can check that the relevant function of θ is smooth for n ∈ N. The binomial theorem shows H(u, v, θ) = ∆_θ^{2s*} f(0). Then, the classical theory of moduli of smoothness [14, Chapters 2.6-2.9] implies the required estimate. Consequently, we get the desired bound ω_{2s*}(h̃, t)_∞ ≲ t^α. Finally, in order to ensure that h̃ is odd or even, we can multiply h̃ by an infinitely differentiable function which equals one on Ω and zero for u_{d+1} ≤ 1/(2√2), and then extend h̃ to be odd or even. These operations do not decrease the smoothness of h̃.

By Proposition 3.3, for any h ∈ H^α and g ∈ G_{σ_k}(M), we have, for some h̃ ∈ C(S^d) with S_k h̃ = h, a bound on ‖h − S_k g‖_{C(B^d)} in terms of ‖h̃ − g‖. Since S_k g ∈ F_{σ_k}(M), we can derive approximation bounds for F_{σ_k}(M) by studying the approximation capacity of G_{σ_k}(M). Now, we are ready to prove Theorem 2.1.

Proof of Theorem 2.1. By Proposition 3.3, for any h ∈ H^α, there exists h̃ ∈ C(S^d) with S_k h̃ = h and ω_{2s*}(h̃, t)_∞ ≲ t^α,
where s* ∈ N is the smallest integer such that α ≤ 2s*. We choose h̃ to be odd (even) if k is even (odd). Using ω_s(h̃, t)_2 ≤ 2^{s−2s*+2} ω_{2s*}(h̃, t)_2 for s > 2s* [12, Proposition 10.1.2] and the Marchaud inequality [15, Eq. (9.6)], we obtain bounds on ω_s(h̃, t)_2 for all s. We study how well g ∈ G_{σ_k}(M) approximates h̃. It turns out that it is enough to consider a subset of G_{σ_k}(M) that contains functions of the form g = ϕ ∗ σ_k, where the infimum defining the norm is taken over all ϕ ∈ L²(S^d) satisfying the integral representation of g. Observing that g = ϕ ∗ σ_k is a convolution, the identity (3.2) gives P_n g = σ̂_k(n) P_n ϕ. Hence, we have the corresponding Fourier decomposition of g. By Proposition 3.2, we know that σ̂_k(n) = 0 if and only if n ≥ k + 1 and n ≡ k mod 2. We consider the convolutions g_m := ϕ_m ∗ σ_k with P_n ϕ_m = η(n/m) σ̂_k(n)^{−1} P_n h̃, where η is a C^∞-function on [0, ∞) such that η(t) = 1 for 0 ≤ t ≤ 1 and η(t) = 0 for t ≥ 2. Since η is supported on [0, 2], the summation can be terminated at n = 2m − 1, so that g_m is a polynomial of degree at most 2m − 1. Since h̃ is odd (even) if k is even (odd), P_n g_m = η(n/m) P_n h̃ = 0 for any n ≡ k mod 2. Furthermore, [12, Theorem 10.3.2] shows that the error ‖h̃ − g_m‖ is controlled by the modulus of smoothness. By the equivalence (3.4) and ω_{2s*}(h̃, m^{−1})_∞ ≲ m^{−α}, the equivalence (3.6) for p = ∞ implies that we can bound the approximation error by m^{−α}. Applying the estimate (3.5) to the equivalence (3.6) with p = 2, we get the corresponding L² bound. Using P_n((−∆)^{s/2} g_m) = (n(n + d − 1))^{s/2} P_n g_m, we can estimate the norm γ(g_m), where we choose s = (d + 2k + 1)/2 in the last inequality. We continue the proof in three different cases.
Remark 3.4. Since we are only able to estimate the smoothness ω_{2s*}(h̃, t)_∞ for even integers 2s*, we have an extra logarithmic factor in the bound ω_s(h̃, t)_2 ≲ t^α log(1/t) in (3.5) when s = α ≠ 2s*, due to the Marchaud inequality. Consequently, we can only obtain the exponential convergence rate when α = (d + 2k + 1)/2 is not an even integer. We conjecture that the bound ω_s(h̃, t)_2 ≲ t^α holds for all s ≥ α. If this is the case, then the proof of Theorem 2.1 implies H^α ⊆ F_{σ_k}(M) for all α ≥ (d + 2k + 1)/2.

Nonparametric regression
In this section, we apply our approximation results to nonparametric regression using neural networks. For simplicity, we will only consider the ReLU activation function (k = 1), which is the most popular activation in deep learning.
We study the classical problem of learning a d-variate function h ∈ H from its noisy samples, where we will assume H = H^α with α < (d + 3)/2 or H = F_σ(1). Note that, due to Theorem 2.1, the results for F_σ(1) can be applied to H^α with α > (d + 3)/2 by scaling the variation norm. Suppose we have a data set of n ≥ 2 samples

independently and identically generated from the regression model
Y_i = h(X_i) + η_i,  i = 1, …, n,   (4.1)

where μ is the marginal distribution of the covariates X_i, supported on B^d, and the η_i are i.i.d. Gaussian noises independent of X_i (we will treat the variance V² as a fixed constant). We are interested in the empirical risk minimizer (ERM)

f*_n ∈ argmin_{f∈F_n} (1/n) Σ_{i=1}^n (f(X_i) − Y_i)²,   (4.2)

where F_n is a function class parameterized by neural networks. For simplicity, we assume here and in the sequel that the minimum above indeed exists. The performance of the estimation is measured by the expected risk; equivalently, one can evaluate the estimator by the excess risk. In the statistical analysis of learning algorithms, we often require that the hypothesis class is uniformly bounded. We define the truncation operator T_B with level B > 0 for real-valued functions f by T_B f(x) := f(x) if |f(x)| ≤ B, and T_B f(x) := sgn(f(x)) B otherwise. Since we always assume the regression function h is bounded, truncating the output of the estimator f*_n appropriately does not increase the excess risk. We will estimate the convergence rate of E ‖T_{B_n} f*_n − h‖²_{L²(μ)}, where B_n ≲ log n, as the number of samples n → ∞.
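The truncation operator T_B is just a clamp of the output values; a one-line sketch (names ours):

```python
import numpy as np

def truncate(f, B):
    """T_B f: clamp the real-valued function f to [-B, B]."""
    return lambda x: np.clip(f(x), -B, B)

f = lambda x: 3.0 * x            # example function
g = truncate(f, 2.0)             # T_2 f
vals = g(np.array([-2.0, 0.5, 5.0]))   # -> [-2. , 1.5, 2. ]
```

Truncating at a level B_n ≍ log n that dominates ‖h‖_∞ matches the uniform boundedness required in the statistical analysis without increasing the excess risk.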

Shallow neural networks
The rate of convergence of neural network regression estimates has been analyzed in many papers [38,26,9,52,41,27]. It is well known that the optimal minimax rate of convergence for learning a regression function h ∈ H^α is n^{−2α/(d+2α)} [59]. This optimal rate has been established (up to logarithmic factors) for two-hidden-layer neural networks with certain squashing activation functions [26] and for deep ReLU neural networks [52,27]. For shallow networks, [38] proved a rate of n^{−2α/(2α+d+5)+ε} with ε > 0 for a certain cosine squasher activation function. However, to the best of our knowledge, it was unknown whether shallow neural networks can achieve the optimal rate. In this section, we provide an affirmative answer to this question by proving that shallow ReLU neural networks can achieve the optimal rate for H^α with α < (d + 3)/2. We will use the following lemma to analyze the convergence rate. It decomposes the error of the ERM into generalization error and approximation error, and bounds the generalization error by the covering number of the hypothesis class F_n.

Lemma 4.1 ([27]). Let f*_n be the estimator (4.2) and set B_n = c_1 log n for some constant c_1 > 0. Then, for n > 1 and some constant c_2 > 0 (independent of n and f*_n), the excess risk E ‖T_{B_n} f*_n − h‖²_{L²(μ)} is bounded by a generalization term, controlled by the covering number N(ε, T_{B_n}F_n, ‖·‖_{L¹(X_{1:n})}), plus the approximation error inf_{f∈F_n} ‖f − h‖²_{L²(μ)}, where X_{1:n} = (X_1, …, X_n) denotes a sequence of sample points in B^d.

For the shallow neural network model F_n = F_σ(N_n, M_n), Lemma 2.3 and Corollary 2.4 provide bounds for the approximation errors. The covering number of the function class T_{B_n}F_n can be estimated by using the pseudo-dimension of T_{B_n}F_n [22]. Choosing N_n, M_n appropriately to balance the approximation and generalization errors, we can derive convergence rates for the ERM.

Theorem 4.2. Let f*_n be the estimator (4.2) with F_n = F_σ(N_n, M_n) for suitably chosen (N_n, M_n). Then, up to logarithmic factors, the ERM achieves the rate n^{−2α/(d+2α)} when H = H^α with α < (d + 3)/2, and the rate n^{−(d+3)/(2d+3)} when H = F_σ(1).

Proof. To apply the bound in Lemma 4.1, we need to estimate the covering number N(ε, T_{B_n}F_n, ‖·‖_{L¹(X_{1:n})}). The classical result of [22, Theorem 6] showed that the covering number can be bounded in terms of the pseudo-dimension Pdim(T_{B_n}F_n) of the function class T_{B_n}F_n; see (2.2). For ReLU neural networks, [6] showed that this pseudo-dimension is O(N_n log N_n). Consequently, we obtain a bound on the covering number. Applying Lemma 4.1 and Corollary 2.4, if H = H^α with α < (d + 3)/2, we can then balance the generalization and approximation terms by an appropriate choice of (N_n, M_n).

Remark 4.3. The Gaussian noise assumption in the model (4.1) can be weakened for Lemma 4.1 and hence for Theorem 4.2. We refer the reader to [27, Appendix B, Lemma 18] for more details. Theorem 4.2 can be easily generalized to shallow ReLU^k neural networks for k ≥ 1 by using the same proof technique. For example, one can show that, if h ∈ H^α with α < (d + 2k + 1)/2, then one can choose (N_n, M_n) such that the same rate is achieved.

Theorem 4.2 shows that least-squares minimization using shallow ReLU neural networks can achieve the optimal rate n^{−2α/(d+2α)} for learning functions in H^α with α < (d + 3)/2. For the function class F_σ(1), the rate n^{−(d+3)/(2d+3)} is also minimax optimal, as proven by [46, Lemma 25] (they studied a slightly different function class, but their result also holds for F_σ(1)). Specifically, [57, Theorem 4 and Theorem 8] give a sharp estimate for the metric entropy of F_σ(1). Combining this estimate with the
classical result of Yang and Barron, one can derive the matching minimax lower bound, where the infimum is taken over all estimators based on the samples D_n, which are generated from the model (4.1).
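The rate n^{−2α/(d+2α)} in Theorem 4.2 arises from balancing the two error terms (a sketch in our notation, ignoring logarithmic factors): the generalization term scales like Pdim(F_n)/n ≈ N_n/n and the squared approximation error like N_n^{−2α/d}:

```latex
\mathbb{E}\,\|T_{B_n} f_n^* - h\|_{L^2(\mu)}^2
\;\lesssim\; \frac{N_n}{n} + N_n^{-2\alpha/d},
\qquad
N_n \asymp n^{\frac{d}{d+2\alpha}}
\;\Longrightarrow\;
\mathbb{E}\,\|T_{B_n} f_n^* - h\|_{L^2(\mu)}^2 \;\lesssim\; n^{-\frac{2\alpha}{d+2\alpha}}.
```

With this choice of N_n both terms are of order n^{−2α/(d+2α)}, which is the minimax rate [59].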

Deep neural networks and over-parameterization
There is a direct way to generalize the analysis in the last section to deep neural networks: we can implement shallow neural networks by sparse multi-layer neural networks with the same order of parameters, and estimate the approximation and generalization performance of the constructed networks. Since the optimal convergence rates of deep neural networks have already been established in [52,27], we do not pursue this direction. Instead, we study the convergence rates of over-parameterized neural networks by using the idea discussed in [24]. The reason for studying such networks is that, in modern applications of deep learning, the number of parameters in the networks is often much larger than the number of samples. However, in the convergence analysis of [52,27], the network that achieves the optimal rate is under-parameterized (see also the choice of N_n in Theorem 4.2). Hence, that analysis cannot explain the empirical performance of deep learning models used in practice.
Following [24], we consider deep neural networks with norm constraints on the weight matrices. For W, L ∈ N, we denote by NN(W, L) the set of functions that can be parameterized by ReLU neural networks of the form

f_θ(x) = A^{(L)} σ(A^{(L−1)} σ(⋯ σ(A^{(0)} x + b^{(0)}) ⋯) + b^{(L−1)}) + b^{(L)},   (4.4)

with widths at most W, where θ denotes the collection of parameters (A^{(ℓ)}, b^{(ℓ)}). The norm constraint is imposed through a quantity κ(θ) built from the operator norms of the weight matrices, where we use ‖A‖ := sup_{‖x‖_∞≤1} ‖Ax‖_∞ to denote the operator norm (induced by the ℓ∞ norm) of a matrix A = (a_{i,j}) ∈ R^{m×n}. It is well known that ‖A‖ is the maximum 1-norm of the rows of A:

‖A‖ = max_{1≤i≤m} Σ_{j=1}^n |a_{i,j}|.

The motivation for such a definition of κ(θ) is discussed in [24]. For M ≥ 0, we denote by NN(W, L, M) the set of functions in NN(W, L) that admit a parameterization θ with κ(θ) ≤ M. ([24] uses the convention that the bias b^{(L)} = 0 in the last layer, but the results can be easily generalized to the case b^{(L)} ≠ 0; see [62, Section 2.1] for details.) To derive approximation bounds for deep neural networks, we consider the relationship between F_σ(N, M) and NN(N, 1, M). The next proposition shows that the function classes NN(N, 1, M) and F_σ(N, M) have essentially the same approximation power.

Proof. Each function f(x) = Σ_{i=1}^N a_i σ((x^⊤, 1)v_i) in F_σ(N, M) can be parameterized in the form (4.4) with W = N, L = 1 and (A^{(0)}, b^{(0)}) = (v_1, …, v_N)^⊤, (A^{(1)}, b^{(1)}) = (a_1, …, a_N, 0).
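The identity ‖A‖ = max_i Σ_j |a_{i,j}| for the ℓ∞-induced operator norm is easy to verify numerically: the supremum over ‖x‖_∞ ≤ 1 is attained at a sign vector (a quick sketch, names ours):

```python
import numpy as np

def opnorm_inf(A):
    """Operator norm induced by the sup norm: the maximum row 1-norm."""
    return np.max(np.sum(np.abs(A), axis=1))

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))

# The sup of ||Ax||_inf over ||x||_inf <= 1 is attained at x = sign(A[i, :])
# for the row i with the largest 1-norm.
i = np.argmax(np.sum(np.abs(A), axis=1))
x = np.sign(A[i])
attained = np.max(np.abs(A @ x))
```

Indeed, ‖Ax‖_∞ ≥ |A[i]·sign(A[i])| = Σ_j |a_{i,j}| for this choice of x, while ‖Ax‖_∞ ≤ max_r Σ_j |a_{r,j}| for any ‖x‖_∞ ≤ 1, so the two quantities coincide.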
Corollary 4.5. For H^α with 0 < α < (d + 3)/2, the error of approximating H^α by NN(W, L, M) with sufficiently large width W is ≲ M^{−2α/(d+3−2α)}. For F_σ(1), there exists a constant M ≥ 1 such that the error of approximation by NN(W, L, M) decays in the width W.

Proof. The first part is a direct consequence of Corollary 2.4 and the inclusion F_σ(W, M) ⊆ NN(W, L, √(d+1) M). The second part follows from Lemma 2.3, and we can choose M accordingly.

In the first part of Corollary 4.5, if we allow the width W to be arbitrarily large, say W ≳ M^{2d/(d+3−2α)}, then we can bound the approximation error by the size of the weights. Hence, this result can be applied to over-parameterized neural networks. (Note that, in Theorem 4.2, we use a different regime of the bound.) For the approximation of F_σ(1), the size of the weights is bounded by a constant. We will show that this constant can be used to control the generalization error. Since the approximation error is bounded in terms of W and is independent of M, there is no trade-off in the error decomposition of the ERM, and we only need to choose W sufficiently large to reduce the approximation error. Hence, this result can also be applied to over-parameterized neural networks.
The approximation rate M^{−2α/(d+3−2α)} for H^α in Corollary 4.5 improves the rate M^{−α/(d+1)} proven by [24]. Using the upper bound for the Rademacher complexity of NN(W, L, M) (see Lemma 4.6), [24] also gave an approximation lower bound (M√L)^{−2α/(d−2α)}. For fixed depth L, our upper bound is very close to this lower bound. We conjecture that the rate in Corollary 4.5 is optimal with respect to M (for fixed depth L). The discussion of optimality at the end of Section 2 implies that the conjecture is true for shallow neural networks (i.e. L = 1).
To control the generalization performance of over-parameterized neural networks, we need size-independent sample complexity bounds for such networks. Several methods have been applied to obtain such bounds in recent works [43,42,5,21]. Here, we will use the result of [21], which estimates the Rademacher complexity of deep neural networks.

Now, we can estimate the convergence rates of the ERM based on over-parameterized neural networks. As usual, we decompose the excess risk of the ERM into an approximation error and a generalization error, and bound them by Corollary 4.5 and Lemma 4.6, respectively. Note that the convergence rates in the following theorem are worse than the optimal rates in Theorem 4.2. (1) If H = H^α with α < (d + 3)/2, we choose (2) If H = F_σ(1), we choose a large enough constant M and let

Proof. The proof is essentially the same as [24, Theorem 4.1]. Observe that, for any f ∈ F_n,

Using E_{D_n}[L_n(f)] = L(f) and taking the infimum over f ∈ F_n, we get

Let us denote the collections of sample points and noises by X_{1:n} = (X_1, . . ., X_n) and η_{1:n} = (η_1, . . ., η_n). We can bound the generalization error as follows

where we denote Φ_n := {f − h : f ∈ F_n}. By a standard symmetrization argument (see [60, Theorem 4.10]), we can bound the first term in (4.6) by the Rademacher complexity:

where we apply Lemma 4.6 in the last inequality. Note that the second term in (4.6) is a Gaussian complexity. We can also bound it by the Rademacher complexity [7, Lemma 4]:

In summary, we conclude that

If H = H^α with α < (d + 3)/2, by Corollary 4.5, we have

Applying [67, Theorem 3] to the sequence v, we can construct filters {w^(ℓ)}_{ℓ=0}^{L−1} supported on {0, 1, . . ., s} such that v = w^(L−1) ∗ · · · ∗ w^(0) and the corresponding convolution matrix A_v = A_{w^(L−1)} · · · A_{w^(0)} ∈ R^{(d+Ls)×d}. Note that, by definition, for i = 1, . . ., N, the id-th row of A_v is exactly a_i^⊺. Then, for ℓ = 0, . .
. ., L − 2, we can choose b^(ℓ) satisfying (4.8) such that f^(ℓ+1)(x) = A_{w^(ℓ)} · · · A_{w^(0)} x + B^(ℓ), where B^(ℓ) > 0 is a sufficiently large constant that makes the components of f^(ℓ+1)(x) positive for all x ∈ B^d. Finally, we can construct b^(L−1) such that f^(L) = f.

Note that Proposition 4.8 shows that each shallow neural network can be represented by a CNN with the same order of number of parameters. As a corollary, we obtain approximation rates for CNNs.

Corollary 4.9. Let s ≥ 2 be an integer.
(1) For H^α with 0 < α < (d + 3)/2, we have

(2) For F_σ(1), we have

Since the number of parameters in CNN(s, L) is of order L, the rate O(L^{−α/d}) in part (1) of Corollary 4.9 is the same as the rate in [64] for fully connected neural networks. However, [65,31] showed that this rate can be improved to O(L^{−2α/d}) for fully connected neural networks by using the bit extraction technique [6]. It would be interesting to see whether this rate also holds for CNN(s, L).
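The linear-algebra mechanism behind Proposition 4.8 is that convolution composes: the Toeplitz matrix of a composite filter is the product of the Toeplitz matrices of its factors. A minimal numerical check (helper names are ours), assuming filters supported on {0, . . ., s} and "full" convolution as in NumPy:

```python
import numpy as np

def conv_matrix(w, d):
    """Toeplitz matrix A_w in R^{(d+s) x d} of a filter w supported on {0,...,s},
    so that A_w @ x equals the full convolution w * x."""
    s = len(w) - 1
    A = np.zeros((d + s, d))
    for j in range(d):
        A[j:j + s + 1, j] = w
    return A

rng = np.random.default_rng(0)
d, s = 6, 2
w0, w1 = rng.normal(size=s + 1), rng.normal(size=s + 1)
x = rng.normal(size=d)

v = np.convolve(w1, w0)                      # composite filter, support {0,...,2s}
A_v = conv_matrix(v, d)                      # shape (d + 2s, d)
A_chain = conv_matrix(w1, d + s) @ conv_matrix(w0, d)

assert np.allclose(A_v, A_chain)             # A_v = A_{w1} A_{w0}
assert np.allclose(A_v @ x, np.convolve(v, x))
```

Iterating this factorization L − 1 times writes A_v as a product of L matrices built from short filters, which is how a deep CNN with small filter size reproduces one wide shallow layer.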
As in Theorem 4.2, we use Lemma 4.1 to decompose the error and bound the approximation error by Corollary 4.9. The covering number is again bounded by the pseudo-dimension. (2) If H = F_σ(1), we choose

Proof. The proof is the same as that of Theorem 4.2 and [68]. We can use (4.3) to bound the covering number by the pseudo-dimension. For convolutional neural networks, [6] gave the following estimate of the pseudo-dimension:

where p(s, L_n) = (5s + 2)L_n + 2d − 2s ≲ L_n and q(s, L_n) ≤ L_n(d + sL_n) ≲ L_n^2 are the numbers of parameters and neurons of the network CNN(s, L_n), respectively. Therefore, log N(ϵ, T_{B_n} F_n, ∥·∥_{L^1(X_{1:n})}) ≲ L_n^2 log(L_n) log(B_n/ϵ).
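For concreteness, the parameter and neuron counts entering the pseudo-dimension estimate can be computed directly; a quick sanity check (our own) of the asymptotics p(s, L) ≲ L and q(s, L) ≲ L^2 for fixed s and d:

```python
def p_count(s, L, d):
    # number of parameters of CNN(s, L): (5s + 2)L + 2d - 2s
    return (5 * s + 2) * L + 2 * d - 2 * s

def q_bound(s, L, d):
    # upper bound on the number of neurons: L(d + sL)
    return L * (d + s * L)

s, d = 2, 10
assert p_count(s, 10, d) == 136                         # (5*2+2)*10 + 20 - 4
assert q_bound(s, 10, d) == 300                         # 10 * (10 + 2*10)
assert p_count(s, 1000, d) / p_count(s, 100, d) < 10.5  # ~linear growth in L
assert q_bound(s, 1000, d) / q_bound(s, 100, d) > 90    # ~quadratic growth in L
```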
Finally, we note that the recent paper [68] also studied the convergence of CNNs and proved the rate O(n^{−1/3}(log n)^2) for H^α with α > (d + 4)/2. The convergence rate we obtain in Theorem 4.10 for F_σ(1), which includes H^α with α > (d + 3)/2 by Theorem 2.1, is slightly better than their rate.

Conclusion
This paper has established approximation bounds for shallow ReLU^k neural networks. We showed how to use these bounds to derive approximation rates for (deep or shallow) neural networks with constraints on the weights and for convolutional neural networks. We also applied the approximation results to study the convergence rates of nonparametric regression using neural networks. In particular, we established the optimal convergence rates for shallow neural networks and showed that over-parameterized neural networks can achieve nearly optimal rates.
There are a few interesting questions we would like to propose for future research. First, for approximation by shallow neural networks, we established the optimal rate in the supremum norm by using the results of [55] (Lemma 2.3). The paper [55] actually showed that approximation bounds similar to Lemma 2.3 also hold in Sobolev norms. We think it is a promising direction to extend our approximation results in the supremum norm (Theorem 2.1 and Corollary 2.4) to Sobolev norms. Second, it is unclear whether over-parameterized neural networks can achieve the optimal rate for learning functions in H^α. It seems that a refined generalization error analysis is needed. Finally, it would be interesting to extend the theory developed in this paper to general activation functions and to study how the results are affected by the choice of activation function.

Theorem 4.7. Let f*_n be the estimator (4.2) with F_n = {T_{B_n} f : f ∈ NN(W_n, L, M_n)}, where L ∈ N is a fixed constant, 1 ≤ B_n ≲ log n in case (1) and √2 ≤ B_n ≲ log n in case (2).
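Here T_{B_n} denotes the truncation operator at level B_n, clipping function values to [−B_n, B_n], as is standard in this setting; a one-line sketch in our own notation:

```python
import numpy as np

def truncate(f, B):
    """Truncation operator T_B: clip the values of f to the interval [-B, B]."""
    return lambda x: float(np.clip(f(x), -B, B))

f = lambda x: 10.0 * x          # a function taking large values
g = truncate(f, 2.0)            # T_B f with B = 2
assert g(1.0) == 2.0 and g(-1.0) == -2.0 and g(0.1) == 1.0
```

Truncating the estimator keeps it uniformly bounded by B_n, which is what the covering-number and complexity bounds in the proofs require.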

Theorem 4.10. Let f*_n be the estimator (4.2) with F_n = CNN(s, L_n), where s ≥ 2 is a fixed integer, and set B_n = c_1 log n for some constant c_1 > 0.