1 Introduction

This paper is a continuation of our previous works, where we approximated, in the spectral sense, the Riemannian Laplace-Beltrami operator by the discrete graph Laplacian [2, 11] and by a convolution-type operator [3]. This convolution-type operator, called the \(\rho \)-Laplacian (with a small parameter \(\rho >0\)), is defined by averaging over metric balls of small radius, and it is a natural extension of the discrete graph Laplacian to a continuous setting. A notable feature of the \(\rho \)-Laplacian is that it is not based on differentiation and is readily available on general metric-measure spaces. Furthermore, we proved in [3] that the spectrum of the \(\rho \)-Laplacian is stable under metric-measure approximations in a large class of metric-measure spaces. We therefore propose the \(\rho \)-Laplacian as a notion of Laplacian on metric-measure spaces (in the spectral sense); our earlier results in [2, 3, 11] show that the definition makes sense for Riemannian manifolds. We expect that the spectra of the \(\rho \)-Laplacians converge as \(\rho \rightarrow 0\) in a large class of metric-measure spaces, with the limit related to known notions of Laplacian in [5, 6].

The present paper is concerned with the connection Laplacian on vector bundles. We introduce an analogous convolution Laplacian acting on sections of vector bundles over metric-measure spaces. This operator can be regarded as a generalization of the \(\rho \)-Laplacian (on functions), and its discretization, also known as the graph connection Laplacian, is a generalization of the graph Laplacian. We prove that for Euclidean or Hermitian connections on closed Riemannian manifolds, our convolution Laplacian and its discretization both approximate the standard connection Laplacian in the spectral sense. The spectral convergence of graph connection Laplacians may have applications in numerical computations and manifold learning, in particular in the analysis of high-dimensional data sets; see e.g. [1, 4, 7, 8, 14, 15, 16] and the references therein.

In this introduction, we define our operator for vector bundles over Riemannian manifolds. The general definition for metric-measure spaces can be found in Sect. 2. Let \(M^n\) be a compact, connected Riemannian manifold of dimension n without boundary, and let E be a smooth Euclidean (or Hermitian) vector bundle over M equipped with a smooth Euclidean (or Hermitian) connection \(\nabla \). Recall that a Euclidean (resp. Hermitian) connection is a connection that is compatible with the Euclidean (resp. Hermitian) metric on the vector bundle. We denote by \(L^2(M,E)\) the space of \(L^2\)-sections of the vector bundle E, and by \(E_x\) the fiber over a point \(x\in M\). Fix \(\rho >0\) smaller than the injectivity radius \(r_{inj}(M)\). Given any pair of points \(x,y\in M\) with \(d(x,y)\le \rho \), let \(P_{xy}:E_y\rightarrow E_x\) be the parallel transport canonically associated with \(\nabla \) from y to x along the unique minimizing geodesic [yx].

For an \(L^2\)-section \(u\in L^2(M,E)\), we define the \(\rho \)-connection Laplacian operator \(\Delta ^{\rho }\) by

$$\begin{aligned} \Delta ^\rho u(x) = \frac{2(n+2)}{\nu _n\rho ^{n+2}} \int _{B_\rho (x)} \big (u(x)-P_{xy}(u(y)) \big )\,dy, \end{aligned}$$
(1.1)

where \(\nu _n\) is the volume of the unit ball in \({\mathbb {R}}^n\), and \(B_{\rho }(x)\) is the geodesic ball in M of radius \(\rho \) centered at \(x\in M\).

The operator \(\Delta ^{\rho }\) is nonnegative and self-adjoint with respect to the standard inner product on \(L^2(M,E)\). Furthermore, the lower part of the spectrum of \(\Delta ^{\rho }\) is discrete. We denote by \(\widetilde{\lambda }_k\) the k-th eigenvalue of \(\Delta ^{\rho }\) from the discrete part of the spectrum. Denote by \(\Delta \) the standard connection Laplacian of the connection \(\nabla \), and by \(\lambda _k\) the k-th eigenvalue of \(\Delta \). Our first result states that the spectrum of the \(\rho \)-connection Laplacian \(\Delta ^{\rho }\) approximates the spectrum of the connection Laplacian \(\Delta \).

Theorem 1

There exist constants \(C_n>1\) and \(c_n,\sigma _n\in (0,1]\), depending only on n, such that the following holds. Suppose that the absolute value of the sectional curvatures of M and the norm of the curvature tensor of \(\nabla \) are bounded by constants \(K^{}_{\!M}\) and \(K^{}_{\!E}\), respectively. Assume that \(\rho >0\) satisfies

$$\begin{aligned} \rho <\min \big \{ r_{inj}(M), c_n K_M^{-1/2} \big \}. \end{aligned}$$
(1.2)

Then for every \(k\in {{\mathbb {N}}}_+\) satisfying \(\widetilde{\lambda }_k\le \sigma _n \rho ^{-2}\), we have

$$\begin{aligned} \big | \widetilde{\lambda }_k^{1/2} - \lambda _k^{1/2} \big | \le \big (C_nK^{}_{\!M}+\lambda _k \big )\lambda _k^{1/2} \rho ^2 + C_nK^{}_{\!E}\rho \, . \end{aligned}$$

Remark 1.1

One can track the dependence of \(c_n\) on n and find that it suffices to assume

$$\begin{aligned} \frac{\sinh ^{n-1} c_n}{\sin ^{n-1} c_n} < 2. \end{aligned}$$

Remark 1.2

When E is the trivial bundle, the connection Laplacian is simply the Laplace-Beltrami operator on functions, and the \(\rho \)-connection Laplacian \(\Delta ^{\rho }\) reduces to an operator on functions

$$\begin{aligned}\Delta ^\rho f(x) = \frac{2(n+2)}{\nu _n\rho ^{n+2}} \int _{B_\rho (x)} \big (f(x)-f(y) \big )\,dy.\end{aligned}$$

This operator is the \(\rho \)-Laplacian we introduced in [3] up to a normalization adjustment, and its discretization is the graph Laplacian studied in [2]. In this case, Theorem 1 reduces to the convergence of the spectra of \(\rho \)-Laplacians (on functions) to the spectrum of the Laplace-Beltrami operator, which is known from Theorem 1 in [2] and Theorem 1.2 in [3].
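To make this concrete, here is a small numerical sketch (our own illustration, not part of the paper): we assemble the graph Laplacian with the normalization \(\frac{2(n+2)}{\nu _n\rho ^{n+2}}\) (here \(n=1\), \(\nu _1=2\)) on a uniform net of the unit circle; its low eigenvalues approximate the Laplace-Beltrami spectrum \(0,1,1,4,4,\dots \) of the circle.

```python
import numpy as np

# Our own illustration (not from the paper): the graph Laplacian on a uniform
# net of the unit circle, n = 1, nu_1 = 2, normalization 2(n+2)/(nu_n rho^{n+2}).
N, rho = 500, 0.3
theta = 2 * np.pi * np.arange(N) / N           # the net, epsilon = 2*pi/N
mu = 2 * np.pi / N                             # weights mu_i = vol(V_i)

diff = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(diff, 2 * np.pi - diff)         # geodesic distance on the circle

n = 1
coeff = 2 * (n + 2) / (2 * rho ** (n + 2))     # nu_1 = 2
W = coeff * mu * (d < rho)                     # neighbor weights
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                 # the graph Laplacian matrix

eigs = np.linalg.eigvalsh(L)
print(eigs[:5])    # approximately 0, 1, 1, 4, 4
```

The eigenvalue errors behave as the theorems predict: they shrink as \(\rho \) decreases (with \(\varepsilon /\rho \) kept small) and grow with the eigenvalue index.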

Now let us turn to the discrete side. We define a discretization of a compact Riemannian manifold M as follows (see [2]).

Definition 1.3

Let \(\varepsilon \ll \rho \) and let \(X_{\varepsilon }=\{x_i\}_{i=1}^N\) be a finite \(\varepsilon \)-net in M. The distance function on \(X_{\varepsilon }\) is the Riemannian distance d of M restricted onto \(X_{\varepsilon }\times X_{\varepsilon }\), denoted by \(d|_{X_{\varepsilon }}\). Suppose that \(X_{\varepsilon }\) is equipped with a discrete measure \(\mu =\sum _{i=1}^N \mu _i\delta _{x_i}\) which approximates the volume on M in the following sense: there exists a partition of M into measurable subsets \(\{V_i\}_{i=1}^N\) such that \(V_i\subset B_{\varepsilon }(x_i)\) and \({\text {vol}}(V_i)=\mu _i\) for every i. Denote this discrete metric-measure space by \(\Gamma _{\varepsilon }=(X_{\varepsilon },d|_{X_{\varepsilon }},\mu )\); we write \(\Gamma \) for short.

Let \(M,E,\nabla \) be as before. We consider the \(\rho \)-connection Laplacian on the discrete metric-measure space \(\Gamma \), acting on the restriction of the vector bundle E to \(X_{\varepsilon }\). Namely, let \(P=\{P_{x_i x_j}: d(x_i,x_j)<\rho \}\) be the parallel transport between points in \(X_{\varepsilon }\). We call P a \(\rho \)-connection on the restriction \(E|_{X_{\varepsilon }}\). For \({\bar{u}}\in L^2(X_{\varepsilon },E|_{X_{\varepsilon }})\), we define

$$\begin{aligned} \Delta ^{\rho }_{\Gamma } {\bar{u}}(x_i):=\frac{2(n+2)}{\nu _n \rho ^{n+2}} \sum _{d(x_i,x_j)<\rho } \mu _j \big ({\bar{u}}(x_i)-P_{x_i x_j}{\bar{u}}(x_j)\big ). \end{aligned}$$
(1.3)

This operator is known as the graph connection Laplacian. It is a nonnegative self-adjoint operator on a space of dimension r(E)N with respect to the weighted discrete \(L^2\)-inner product, where r(E) is the rank of the vector bundle E. We denote the k-th eigenvalue of \(\Delta _{\Gamma }^{\rho }\) by \(\widetilde{\lambda }_k(\Gamma )\).
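As a quick sanity check of (1.3) (a sketch under our own toy setup, not from the paper), take a flat Hermitian line bundle over the unit circle whose connection has holonomy \(e^{i\alpha }\): the connection Laplacian then has eigenvalues \((m+\alpha /2\pi )^2\), \(m\in {{\mathbb {Z}}}\), and the graph connection Laplacian reproduces the low ones.

```python
import numpy as np

# Our own toy setup: flat Hermitian line bundle over the circle with holonomy
# exp(i*alpha); parallel transport along the short arc from theta_j to theta_i
# multiplies by exp(-i*a*delta_ij), where a = alpha/(2*pi).
N, rho, a = 500, 0.3, 0.25
theta = 2 * np.pi * np.arange(N) / N
mu = 2 * np.pi / N

delta = theta[:, None] - theta[None, :]
delta = (delta + np.pi) % (2 * np.pi) - np.pi   # signed short-arc angle
d = np.abs(delta)                               # geodesic distance

coeff = 2 * (1 + 2) / (2 * rho ** 3)            # 2(n+2)/(nu_n rho^{n+2}), n = 1
mask = d < rho
np.fill_diagonal(mask, False)
H = -coeff * mu * np.exp(-1j * a * delta) * mask     # off-diagonal terms of (1.3)
np.fill_diagonal(H, coeff * mu * mask.sum(axis=1))   # diagonal terms

eigs = np.linalg.eigvalsh(H)    # Hermitian, since P_{ji} = conj(P_{ij})
print(eigs[:3])    # approximately (0.25)^2, (0.75)^2, (1.25)^2
```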

Our second result can be viewed as a discretized version of Theorem 1.

Theorem 2

Suppose that the absolute value of the sectional curvatures of M and the norm of the curvature tensor of \(\nabla \) are bounded by constants \(K^{}_{\!M}\) and \(K^{}_{\!E}\), respectively. Then there exists \(\rho _0=\rho _0(n,K_M,K_E)<r_{inj}(M)/2\) such that for any \(\rho <\rho _0\), \(\varepsilon <\rho /4\), and \(k\le r(E)N\) satisfying \(\lambda _k<\rho ^{-2}/16\), we have

$$\begin{aligned} \big | \widetilde{\lambda }_k(\Gamma ) - \lambda _k \big | \le C_{n,K_M}\bigg (\rho +\frac{\varepsilon }{\rho }+\lambda _k^{1/2}\rho \bigg )\lambda _k +C_{n,K_E} \left( \rho +\frac{\varepsilon }{\rho }\right) . \end{aligned}$$

For compact Riemannian manifolds without boundary, Theorems 1 and 2 imply the closeness between the \(\rho \)-connection Laplacian and the graph connection Laplacian in the spectral sense. In the case of trivial bundles, this gives another proof for the closeness between the \(\rho \)-Laplacian (on functions) and the graph Laplacian in the spectral sense (as a special case of Theorem 1.2 in [3]).

This paper is organized as follows. We introduce the general concept of \(\rho \)-connection Laplacians for metric-measure spaces in Sect. 2. In Sect. 3, we focus on the case of smooth connections on Riemannian manifolds and prove Theorem 1. We turn to the graph connection Laplacian in Sect. 4 and prove Theorem 2.

2 General metric-measure setup

Let \((X,d)\) be a metric space and E be a (continuous) Euclidean or Hermitian vector bundle over X. That is, we have a fiber bundle \(\pi :E\rightarrow X\), and each fiber \(E_x:=\pi ^{-1}(x)\), \(x\in X\), is a real or complex vector space equipped with a Euclidean or Hermitian inner product \(\langle \, ,\rangle _{E_x}\). When it is clear at which point the inner product is taken, we omit the subscript \(E_x\).

Let \(\rho >0\) be a small parameter. We denote by \(X^2(\rho )\) the set of pairs \((x,y)\in X\times X\) such that \(d(x,y)\le \rho \).

Definition 2.1

A \(\rho \)-connection on E is a (Borel measurable) family of uniformly bounded linear maps \(P_{xy}:E_y\rightarrow E_x, \,\sup _{(x,y)}\Vert P_{xy}\Vert < \infty ,\) where \((x,y)\) ranges over \(X^2(\rho )\). That is, \(P_{xy}\) transports vectors from \(E_y\) to \(E_x\).

A \(\rho \)-connection \(P=\{P_{xy}\}\) is said to be Euclidean (resp. Hermitian) if all maps \(P_{xy}\) are Euclidean isometries (resp. unitary operators). A \(\rho \)-connection P is said to be symmetric, if \(P_{xy}\) is invertible and \(P_{xy}^{-1}=P_{yx}\) for all \((x,y)\in X^2(\rho )\).

Our primary example of a \(\rho \)-connection is the one associated with a connection \(\nabla \) on a smooth Euclidean or Hermitian vector bundle over a Riemannian manifold M. Namely, there is a parallel transport canonically associated with the connection \(\nabla \). Then given two points \(x,y\in M\) with \(d(x,y)\le \rho <r_{inj}(M)\), one can transport vectors from \(E_y\) to \(E_x\) along the unique minimizing geodesic [yx].

Now suppose \((X,d)\) is equipped with a measure \(\mu \): we are working in a metric-measure space \((X,d,\mu )\). Assume that X is compact and \(\mu (X)<\infty \). We denote by \(L^2(X,E)\) the space of \(L^2\)-sections of the vector bundle E.

The following expression \(u(x) - P_{xy}(u(y))\), where \(u\in L^2(X,E)\) and \(x,y\in X\), shows up frequently. We introduce a short notation for it:

$$\begin{aligned} \Gamma _{xy}(u) := u(x) - P_{xy}(u(y)) . \end{aligned}$$
(2.1)

Note that \(\Gamma _{xy}(u)\) is only defined for \((x,y)\in X^2(\rho )\) and it belongs to \(E_x\).

Let \(\alpha :X\rightarrow {{\mathbb {R}}}_+\) and \(\beta :X^2(\rho )\rightarrow {{\mathbb {R}}}_+\) be positive \(L^\infty \) functions such that \(\alpha \) is bounded away from 0 and \(\beta \) is symmetric: \(\beta (x,y)=\beta (y,x)\) for all \(x,y\). We define the \(\rho \)-connection Laplacian \(\Delta ^\rho _{\alpha ,\beta }\) associated with the \(\rho \)-connection P with weights \(\alpha ,\beta \) as follows. First we define an \(L^2\)-type inner product \(\langle \!\langle \, ,\rangle \!\rangle _\alpha \) on \(L^2(X,E)\) by

$$\begin{aligned} \langle \!\langle u,v\rangle \!\rangle _\alpha := \int _X \alpha (x) \, \big \langle u(x), v(x) \big \rangle _{E_x} \,d\mu (x), \end{aligned}$$

for \(u,v\in L^2(X,E)\), and the associated norm \(|\!|\cdot |\!|_\alpha \) is

$$\begin{aligned} |\!|u|\!|_\alpha ^2 := \langle \!\langle u,u\rangle \!\rangle _{\alpha } = \int _X \alpha (x) \, |u(x)|^2 \,d\mu (x) . \end{aligned}$$

Here \(|u(x)|^2=\langle u(x), u(x) \rangle \) is taken with respect to the Euclidean or Hermitian inner product in the fiber \(E_x\). Note that the standard inner product on \(L^2(X,E)\) corresponds to the case \(\alpha \equiv 1\), in which case the norm is denoted by the usual \(\Vert u\Vert _{L^2}\).

Next we define a symmetric form \(D_\beta ^{\rho }\) on \(L^2(X,E)\) by

$$\begin{aligned} D_\beta ^{\rho }(u,v) = \frac{1}{2} \iint _{X^2(\rho )} \beta (x,y)\, \big \langle \Gamma _{xy}(u),\Gamma _{xy}(v)\big \rangle _{E_x} \, d\mu (x) d\mu (y) . \end{aligned}$$
(2.2)

Finally, our Laplacian \(\Delta ^\rho _{\alpha ,\beta }\) is the unique operator from \(L^2(X,E)\) to itself satisfying

$$\begin{aligned} \langle \!\langle \Delta ^\rho _{\alpha ,\beta }u, v\rangle \!\rangle _\alpha = D_\beta ^{\rho }(u,v) \end{aligned}$$
(2.3)

for all \(u,v\in L^2(X,E)\). In other words, \(\Delta ^\rho _{\alpha ,\beta }\) is the self-adjoint operator on \(L^2(X,\alpha \mu ,E)\) associated with the quadratic form \(D_\beta ^{\rho }\). Note that the boundedness of \(P_{xy}\) and the finiteness of \(\mu (X)\) imply that both \(D_{\beta }^{\rho }\) and \(\Delta ^\rho _{\alpha ,\beta }\) are bounded.

The following proposition gives an explicit formula for \(\Delta ^\rho _{\alpha ,\beta }\).

Proposition 2.2

Assume that X is compact and \(\mu (X)<\infty \). Then \(\Delta ^\rho _{\alpha ,\beta }\) can be written as

$$\begin{aligned} \Delta ^\rho _{\alpha ,\beta }u(x) = \frac{1}{2\alpha (x)} \int _{B_\rho (x)} \beta (x,y) \big (\Gamma _{xy}(u) - P_{yx}^*\Gamma _{yx}(u)\big ) \,d\mu (y) \, , \end{aligned}$$
(2.4)

where \(P_{xy}^*:E_x\rightarrow E_y\) is the operator adjoint to \(P_{xy}\) with respect to the inner products on the fibers \(E_x\) and \(E_y\).

In particular, if the \(\rho \)-connection P is Euclidean or Hermitian and is symmetric, then

$$\begin{aligned} \Delta ^\rho _{\alpha ,\beta }u(x) = \frac{1}{\alpha (x)} \int _{B_\rho (x)} \beta (x,y) \, \Gamma _{xy}(u) \,d\mu (y) \, . \end{aligned}$$
(2.5)

Proof

The proof is a straightforward calculation. For brevity, we write dx and dy instead of \(d\mu (x)\) and \(d\mu (y)\). By the definitions of \(D_\beta ^{\rho }\) and \(\Gamma _{xy}(v)\), we have

$$\begin{aligned} D_\beta ^{\rho }(u,v) = \frac{1}{2} \iint _{X^2(\rho )} \beta (x,y) \big \langle \Gamma _{xy}(u), v(x) - P_{xy}(v(y)) \big \rangle \,dxdy . \end{aligned}$$

Expand it and rewrite the term \(\langle \Gamma _{xy}(u), P_{xy}(v(y)) \rangle \) as follows:

$$\begin{aligned} \big \langle \Gamma _{xy}(u), P_{xy}(v(y)) \big \rangle _{E_x} = \big \langle P_{xy}^*\Gamma _{xy}(u), v(y) \big \rangle _{E_y} . \end{aligned}$$

By swapping x and y and using the symmetry of \(\beta \), one gets

$$\begin{aligned} \iint _{X^2(\rho )} \beta (x,y) \big \langle P_{xy}^*\Gamma _{xy}(u), v(y) \big \rangle \,dxdy = \iint _{X^2(\rho )} \beta (x,y) \big \langle P_{yx}^*\Gamma _{yx}(u), v(x) \big \rangle \,dxdy . \end{aligned}$$

Substituting the last two formulae into the first one yields

$$\begin{aligned} \begin{aligned} D_\beta ^{\rho }(u,v)&= \frac{1}{2}\iint _{X^2(\rho )} \beta (x,y) \big \langle \Gamma _{xy}(u)-P_{yx}^*\Gamma _{yx}(u), v(x) \big \rangle \,dxdy \\&= \frac{1}{2} \int _X \alpha (x) \bigg \langle \frac{1}{\alpha (x)} \int _{B_\rho (x)} \beta (x,y) \big (\Gamma _{xy}(u)-P_{yx}^*\Gamma _{yx}(u)\big )\, dy , v(x)\bigg \rangle dx. \end{aligned} \end{aligned}$$

The right-hand side of the last formula is the \(\langle \!\langle \, ,\rangle \!\rangle _\alpha \)-product of the right-hand side of (2.4) and v. This proves (2.4).

To deduce (2.5), observe that \(P_{yx}^*=P_{yx}^{-1}=P_{xy}\), since P is Euclidean or Hermitian and is symmetric. Hence by (2.1),

$$\begin{aligned} P_{yx}^*\Gamma _{yx}(u) = P_{yx}^{-1}\big (u(y)-P_{yx}(u(x))\big ) = P_{xy}(u(y)) - u(x) = - \Gamma _{xy}(u) . \end{aligned}$$

This and (2.4) prove (2.5). \(\square \)
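The identity (2.3) behind Proposition 2.2 is easy to verify numerically. The sketch below (our own; all concrete choices are arbitrary) builds a random symmetric Euclidean \(\rho \)-connection of rank 2 over a finite metric-measure space, applies formula (2.5), and checks it against the form (2.2).

```python
import numpy as np

# Our own finite example: random symmetric Euclidean rho-connection on a
# rank-2 real bundle over a 12-point metric-measure space in the plane.
rng = np.random.default_rng(0)
N, r, rho = 12, 2, 1.0
pts = 2.0 * rng.random((N, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
mu = 0.5 + rng.random(N)                    # measure weights mu_i
alpha = 0.5 + rng.random(N)                 # weight alpha, bounded away from 0
beta = 0.5 + rng.random((N, N))
beta = 0.5 * (beta + beta.T) * (d <= rho)   # symmetric, zero for d > rho

P = np.zeros((N, N, r, r))                  # rotations with P_yx = P_xy^{-1}
for i in range(N):
    P[i, i] = np.eye(r)
    for j in range(i + 1, N):
        t = rng.uniform(0.0, 2.0 * np.pi)
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        P[i, j], P[j, i] = R, R.T

def laplacian(u):                           # formula (2.5)
    out = np.zeros_like(u)
    for i in range(N):
        for j in range(N):
            out[i] += beta[i, j] * mu[j] * (u[i] - P[i, j] @ u[j])
        out[i] /= alpha[i]
    return out

def form(u, v):                             # the symmetric form (2.2)
    s = 0.0
    for i in range(N):
        for j in range(N):
            gu = u[i] - P[i, j] @ u[j]
            gv = v[i] - P[i, j] @ v[j]
            s += beta[i, j] * (gu @ gv) * mu[i] * mu[j]
    return 0.5 * s

u = rng.standard_normal((N, r))
v = rng.standard_normal((N, r))
Lu = laplacian(u)
lhs = sum(alpha[i] * (Lu[i] @ v[i]) * mu[i] for i in range(N))
print(np.isclose(lhs, form(u, v)))          # the defining identity (2.3)
```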

Remark 2.3

We can assume that \(\beta (x,y)\) is defined for all pairs \((x,y)\in X\times X\) and is equal to 0 whenever \(d(x,y)>\rho \). This allows us to assume that \(P_{xy}\) is defined for all pairs \((x,y)\in X\times X\). It does not matter how P is extended to pairs x, y with \(d(x,y)>\rho \), since in this case it is always multiplied by 0. This allows us to write integration over X rather than over \(\rho \)-balls whenever convenient.

Denote by \(\sigma ( \Delta ^\rho _{\alpha ,\beta })\) the spectrum of \( \Delta ^\rho _{\alpha ,\beta }\) and by \(\sigma _{ess}( \Delta ^\rho _{\alpha ,\beta })\) the essential spectrum.

Corollary 2.4

$$\begin{aligned} \sigma _{ess}( \Delta ^\rho _{\alpha ,\beta }) \subset \big [a(\rho , \alpha , \beta ),\, \infty \big ), \end{aligned}$$

where \(a(\rho , \alpha , \beta )=\min _{x \in X} \frac{1}{2 \alpha (x)} \int _X \beta (x, y) dy\).

In particular, if the \(\rho \)-connection P is Euclidean or Hermitian and is symmetric, then

$$\begin{aligned} \sigma _{ess}( \Delta ^\rho _{\alpha ,\beta }) \subset \big [2 a(\rho , \alpha , \beta ),\, \infty \big ). \end{aligned}$$

Proof

Due to (2.1) and (2.4), \(\Delta ^\rho _{\alpha ,\beta }\) is the sum of three operators: the multiplication operator

$$\begin{aligned} \Delta _1 u(x) = \frac{1}{2 \alpha (x)} u(x) \int _X \beta (x, y)\,dy, \end{aligned}$$

the operator

$$\begin{aligned} \Delta _2 u(x)=\frac{1}{2 \alpha (x)} \int _X \beta (x, y) P^*_{yx} P_{yx} u(x)\, dy, \end{aligned}$$

and

$$\begin{aligned} \Delta _3 u(x)=\frac{1}{2 \alpha (x)} \int _X \beta (x, y) \left( -P_{xy}-P^*_{yx} \right) u(y) \,dy. \end{aligned}$$

Observe that \(\sigma (\Delta _1) \subset [a(\rho , \alpha , \beta ), \infty )\); while \(\Delta _2\), as seen from its quadratic form

$$\begin{aligned} \langle \!\langle \Delta _2 u, u\rangle \!\rangle _\alpha = \frac{1}{2} \iint _{X^2(\rho )} \beta (x, y) \big \langle P_{yx} u(x),\, P_{yx} u(x) \big \rangle \,dxdy, \end{aligned}$$

is non-negative, and \( \Delta _3\) is compact as an operator with bounded kernel, which proves the first claim. The second claim follows from the same considerations by using the simpler form (2.5). \(\square \)

2.1 Examples

In this paper, we only need a few choices for \(\alpha \) and \(\beta \). First observe that the Riemannian \(\rho \)-connection Laplacian (1.1) is obtained by using \(\alpha (x)=1\) and \(\beta (x,y)=\frac{2(n+2)}{\nu _n\rho ^{n+2}}\) if \(d(x,y)\le \rho \). Equivalently, one can use \(\alpha (x)=\frac{\nu _n\rho ^{n+2}}{2(n+2)}\) and \(\beta (x,y)=1\) if \(d(x,y)\le \rho \). Another convenient normalization, as seen in [3], is by volumes of \(\rho \)-balls: \(\beta (x,y)=1\) if \(d(x,y)\le \rho \) and \(\alpha (x)=\rho ^2\mu (B_\rho (x))\) for all \(x\in X\). We call the operator \(\Delta ^\rho _{\alpha ,\beta }\) with these \(\alpha ,\beta \) the volume-normalized \(\rho \)-connection Laplacian.

Note that Laplacians on real- or complex-valued functions are a special case of connection Laplacians. Namely, for a Riemannian manifold M, one simply considers the trivial bundle \(E=M\times {{\mathbb {R}}}\) or \(E=M\times {{\mathbb {C}}}\) equipped with the trivial connection \(\nabla \). The sections of the trivial bundle are functions on M and the connection Laplacian is simply the Laplace-Beltrami operator (on functions). Similarly, for a metric-measure space X, one can consider the same trivial bundle with the trivial \(\rho \)-connection defined by \(P_{xy}(y,t)=(x,t)\) for \(x,y\in X\) and \(t\in {{\mathbb {R}}}\) (or \(t\in {{\mathbb {C}}}\)). Then (2.5) boils down to

$$\begin{aligned} \Delta ^\rho _{\alpha ,\beta }u(x) = \frac{1}{\alpha (x)} \int _{B_\rho (x)} \beta (x,y) \big (u(x)-u(y) \big )\,d\mu (y), \end{aligned}$$

where \(u\in L^2(X)\). Such operators are called \(\rho \)-Laplacians in [3]. Some analogues of the results of this paper in the case of Laplacians on functions can be found in [2, 3, 10, 11].

2.2 Spectra of \(\rho \)-connection Laplacians

Since \(\Delta ^\rho _{\alpha ,\beta }\) is self-adjoint with respect to the \(L^2\)-compatible inner product \(\langle \!\langle \, ,\rangle \!\rangle _\alpha \) and the corresponding quadratic form \(D_\beta ^{\rho }\) is positive semi-definite, the spectrum of \(\Delta ^\rho _{\alpha ,\beta }\) is contained in \({{\mathbb {R}}}_{\ge 0}\).

This spectrum consists of the discrete and essential spectra. It follows from Corollary 2.4 that the essential spectrum has a lower bound in all cases in question. We are only interested in the part of the spectrum below this bound. We enumerate this part of the spectrum as follows (cf. Notation 2.1 in [3]).

Notation 2.5

Denote by \(\widetilde{\lambda }_\infty =\widetilde{\lambda }_\infty (E,P,\rho ,\alpha ,\beta )\) the infimum of the essential spectrum of \(\Delta ^\rho _{\alpha ,\beta }\). If the essential spectrum is empty (e.g. if X is a discrete space), we set \(\widetilde{\lambda }_\infty =\infty \). For every \(k\in {{\mathbb {N}}}_+\), we define \(\widetilde{\lambda }_k=\widetilde{\lambda }_k(E,P,\rho ,\alpha ,\beta )\in [0,+\infty ]\) as follows. Let \(0\le \widetilde{\lambda }_1\le \widetilde{\lambda }_2\le \cdots \) be the eigenvalues of \(\Delta ^\rho _{\alpha ,\beta }\) (counting multiplicities) that are smaller than \(\widetilde{\lambda }_\infty \). If there are only finitely many such eigenvalues, we set \(\widetilde{\lambda }_k=\widetilde{\lambda }_\infty \) for all larger values of k.

We abuse terminology and refer to \(\widetilde{\lambda }_k(E,P,\rho ,\alpha ,\beta )\) as the k-th eigenvalue of \(\Delta ^\rho _{\alpha ,\beta }\) even though it may be equal to \(\widetilde{\lambda }_\infty \).

By the standard min-max formula, for every \(k\in {{\mathbb {N}}}_+\), we have

$$\begin{aligned} \widetilde{\lambda }_k(E,P,\rho ,\alpha ,\beta ) = \inf \nolimits _{\dim L=k} \ \sup \nolimits _{ u \in L\setminus \{0\}} \left( \frac{ D_\beta ^{\rho }(u,u)}{|\!|u|\!|_\alpha ^2}\right) , \end{aligned}$$
(2.6)

where the infimum is taken over all k-dimensional linear subspaces L of \(L^2(X,E)\). We emphasize that (2.6) holds in both cases of \(\widetilde{\lambda }_k<\widetilde{\lambda }_\infty \) and \(\widetilde{\lambda }_k=\widetilde{\lambda }_\infty \).
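In the finite-dimensional case, (2.6) is the classical Courant-Fischer principle, which can be probed numerically (a sketch with an arbitrary toy matrix standing in for \(\Delta ^\rho _{\alpha ,\beta }\)): every k-dimensional subspace yields an upper bound for the k-th eigenvalue, and the span of the first k eigenvectors attains the infimum.

```python
import numpy as np

# Toy stand-in for the operator: an arbitrary positive semi-definite matrix.
rng = np.random.default_rng(1)
m, k = 8, 3
X = rng.standard_normal((m, m))
A = X @ X.T
lam = np.linalg.eigvalsh(A)          # lam[k-1] is the k-th eigenvalue

def sup_rayleigh(A, B):
    """Supremum of <Au,u>/<u,u> over the column span of B."""
    Q, _ = np.linalg.qr(B)           # orthonormal basis of the subspace
    return np.linalg.eigvalsh(Q.T @ A @ Q)[-1]

# Every k-dimensional subspace gives an upper bound for lam[k-1] ...
bounds = [sup_rayleigh(A, rng.standard_normal((m, k))) for _ in range(200)]
# ... and the span of the first k eigenvectors attains the infimum.
_, vecs = np.linalg.eigh(A)
attained = sup_rayleigh(A, vecs[:, :k])
print(min(bounds) >= lam[k - 1] - 1e-9, np.isclose(attained, lam[k - 1]))
```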

3 Smooth connections on Riemannian manifolds

In this section, suppose \(X=M^n\) is a compact Riemannian manifold of dimension n without boundary, and E is a smooth Euclidean (or Hermitian) vector bundle over M equipped with a smooth Euclidean (or Hermitian) connection \(\nabla \). Recall that a Euclidean (resp. Hermitian) connection is a connection that is compatible with the Euclidean (resp. Hermitian) metric on the vector bundle. For a (sufficiently smooth) section u of E, \(\nabla u\) is a section of the fiber bundle \({\text {Hom}}(TM,E)\) over M. Here \({\text {Hom}}(TM,E)\) is the fiber bundle over M whose fiber over \(x\in M\) is the space \({\text {Hom}}(T_xM,E_x)\) of \({{\mathbb {R}}}\)-linear maps from \(T_xM\) to \(E_x\). Note that in the case when E is a complex fiber bundle, \({\text {Hom}}(T_xM,E_x)\) has a natural complex structure. The standard (Euclidean or Hermitian) inner product on \({\text {Hom}}(T_xM,E_x)\) is defined by

$$\begin{aligned} \langle \xi ,\eta \rangle = \sum _{i=1}^n \langle \xi (e_i), \eta (e_i) \rangle _{E_x} \end{aligned}$$

for \(\xi ,\eta \in {\text {Hom}}(T_xM,E_x)\), where \(\{e_i\}\) is an orthonormal basis of \(T_xM\). This defines the standard norm

$$\begin{aligned} \Vert \xi \Vert ^2 = \sum _{i=1}^n |\xi (e_i)|^2 \end{aligned}$$
(3.1)

on fibers and the standard \(L^2\)-norm on sections of \({\text {Hom}}(TM,E)\).

Let \(\Delta =\nabla ^*\nabla \) be the connection Laplacian of \(\nabla \) (e.g. [12, Chapter 7.3.2] or [9]). It is a nonnegative self-adjoint operator acting on \(H^2\)-sections of E. The corresponding energy functional is given by

$$\begin{aligned} \langle \Delta u,u \rangle _{L^2} = \Vert \nabla u\Vert _{L^2}^2 \end{aligned}$$

for \(u\in H^2(M,E)\).

Since \(\Delta \) is a nonnegative self-adjoint elliptic operator, it has a discrete spectrum \(0\le \lambda _1\le \lambda _2\le \dots \), and \(\lambda _k\rightarrow \infty \) as \(k\rightarrow \infty \). The min-max formula for \(\lambda _k\) takes the form

$$\begin{aligned} \lambda _k = \inf \nolimits _{\dim L=k} \ \sup \nolimits _{ u \in L\setminus \{0\}} \left( \frac{\Vert \nabla u\Vert ^2_{L^2}}{\Vert u\Vert ^2_{L^2}}\right) \end{aligned}$$
(3.2)

where the infimum is taken over all k-dimensional linear subspaces L of \(H^1(M,E)\). The goal of this section is to prove that \(\lambda _k\) are approximated by eigenvalues of a \(\rho \)-connection Laplacian defined below.

Let \(\rho >0\) be smaller than the injectivity radius of M. Let \(P=\{P_{xy}\}\) be the \(\rho \)-connection associated with the connection \(\nabla \), which is given by the parallel transport from y to x along the unique minimizing geodesic [yx]. This particular \(\rho \)-connection P is unitary and symmetric since the connection \(\nabla \) is Euclidean (or Hermitian). We consider the \(\rho \)-connection Laplacian \(\Delta ^\rho =\Delta ^\rho _{\alpha ,\beta }\) given by (2.5) with \(\alpha (x)=1\) and \(\beta (x,y)=\frac{2(n+2)}{\nu _n\rho ^{n+2}}\) if \(d(x,y)\le \rho \). That is,

$$\begin{aligned} \Delta ^\rho u(x) = \frac{2(n+2)}{\nu _n\rho ^{n+2}} \int _{B_\rho (x)} \Gamma _{xy}(u) \,dy. \end{aligned}$$
(3.3)

Recall that \(\Gamma _{xy}(u)\) is defined in (2.1). Here and later in this paper, we denote by dx and dy integration with respect to the Riemannian volume on M. Denote by \(\widetilde{\lambda }_k\) the k-th eigenvalue of \(\Delta ^\rho \) (see Notation 2.5).

We introduce a quadratic form \(D^{\rho }\) on \(L^2(M,E)\) by

$$\begin{aligned} D^{\rho }(u) = \int _M\int _{B_\rho (x)} |\Gamma _{xy}(u)|^2 \, dx dy . \end{aligned}$$
(3.4)

Note that for the constant weight \(\beta (x,y)=\frac{2(n+2)}{\nu _n\rho ^{n+2}}\) chosen for \(\Delta ^\rho \), we have

$$\begin{aligned} D_\beta ^{\rho }(u,u) = \frac{n+2}{\nu _n\rho ^{n+2}} D^{\rho }(u) . \end{aligned}$$

Hence (2.6) takes the form

$$\begin{aligned} \widetilde{\lambda }_k = \frac{n+2}{\nu _n\rho ^{n+2}} \ \inf \nolimits _{\dim L=k} \ \sup \nolimits _{ u \in L\setminus \{0\}} \left( \frac{D^{\rho }(u)}{\Vert u\Vert _{L^2}^2}\right) . \end{aligned}$$
(3.5)

The rest of this section is a proof of Theorem 1.

3.1 Preparations and notations

For \(x\in M\), denote by \(\exp _x:T_xM\rightarrow M\) the Riemannian exponential map. We only need its restriction onto the \(\rho \)-ball \({\mathcal {B}}_\rho (0)\subset T_xM\). For \(v\in T_xM\), denote by \(J_x(v)\) the Jacobian of \(\exp _x\) at v. Let \(J_{\min }(r)\) and \(J_{\max }(r)\) denote the minimum and maximum of \(J_x(v)\) over all \(x,v\) with \(|v|\le r\). The Rauch Comparison Theorem implies that

$$\begin{aligned} \left( \frac{\sin \big (K_M^{1/2}r\big )}{K_M^{1/2}r}\right) ^{n-1} \le J_{\min }(r) \le 1 \le J_{\max }(r) \le \left( \frac{\sinh \big (K_M^{1/2}r\big )}{K_M^{1/2}r}\right) ^{n-1}, \end{aligned}$$

for \(r\le \rho \), where \(K_M^{1/2}\rho <c_n\) by (1.2). In particular,

$$\begin{aligned} (1+C_nK^{}_{\!M}\rho ^2)^{-1} \le J_{\min }(r) \le 1 \le J_{\max }(r) \le 1+C_nK^{}_{\!M}\rho ^2 . \end{aligned}$$
(3.6)

Moreover, we choose \(c_n\) to be sufficiently small such that \(J_{\max }(r)/J_{\min }(r)<2\). Later we will mostly take \(r=\rho \) and we denote \(J_{\min }:=J_{\min }(\rho ),\, J_{\max }:=J_{\max }(\rho )\) for short.

As a consequence of (3.6), Corollary 2.4 implies that

$$\begin{aligned} \widetilde{\lambda }_\infty \ge \frac{2(n+2)}{\nu _n \rho ^{n+2}}\, \min _{x \in M} \mu (B_\rho (x)) \ge \frac{2(n+2)}{ \rho ^{2}} \frac{1}{1+C_n K_M \rho ^2} \ge \sigma _n \rho ^{-2}, \end{aligned}$$
(3.7)

for some constant \(\sigma _n\) depending only on n due to our choice of \(\rho \) in (1.2). This shows that \(\widetilde{\lambda }_\infty \) is of order \(\rho ^{-2}\) in the present case.

Later we use the following well-known inequality. We did not find a precise reference for it, so we give a short proof here.

Lemma 3.1

Let \(\gamma _s:[0,1]\rightarrow M\) be a smooth family of paths from a fixed point \(y\in M\) to x(s), \(s \in [-\varepsilon ,\varepsilon ]\). Let \(P_{\gamma _s} (v)\) be the \(\nabla \)-parallel transport along \(\gamma _s\) of \(v \in E_{y} \) to \(E_{x(s)}\). Then

$$\begin{aligned} \big | \nabla _{s} P_{\gamma _s}(v) \big | \le |v| \cdot K_E \cdot \text {length}(\gamma _s) \cdot \sup _{t\in [0,1]} \left| \frac{d \gamma _s(t)}{ds} \right| . \end{aligned}$$
(3.8)

Proof

Let \( v(t, s)= P_{\gamma _s[0, t]}(v), \) where \(P_{\gamma _s[t_1, t_2]}\) is the \(\nabla \)-parallel transport along \(\gamma _s\) from \(\gamma _s(t_1)\) to \(\gamma _s(t_2)\). Note that \(P_{\gamma _s[t_1, t_2]}\) is a unitary operator.

Observe that \(\nabla _{t} \,v(t, s)=0\) and thus \(\nabla _{s} \nabla _{t}\, v(t,s)=0\). Hence, using the definition of the curvature operator \(R_E\), we see that

$$\begin{aligned} \nabla _{t} \nabla _{s} v(t,s)= R_E\left( \frac{d\gamma _s(t)}{dt},\,\frac{d\gamma _s(t)}{ds}\right) v(t, s) \in E_{\gamma _s(t)}. \end{aligned}$$
(3.9)

Estimating the right-hand side yields

$$\begin{aligned} |\nabla _{t} \nabla _{s} v(t,s)| \le |v| \cdot K^{}_{\!E}\cdot \left| \frac{d\gamma _s(t)}{dt} \right| \cdot \sup _{t\in [0,1]} \left| \frac{d \gamma _s(t)}{ds} \right| , \end{aligned}$$
(3.10)

where we have used the fact that \(|v(t,s)|=|v|\) since the parallel transport P is unitary. The plan is to integrate (3.9) with respect to t. However, the fibers \(E_{\gamma _s(t)}\) vary with t. Thus, we use \(P_{\gamma _s[t, 1]}\) to identify them with \(E_{x(s)}\). Recall that by the definition of parallel translations, for any vector field X along \(\gamma _s\), one has

$$\begin{aligned} \frac{d}{dt} \bigl (P_{\gamma _s[t,1]} X(t)\bigr ) = P_{\gamma _s[t,1]} \big ( \nabla _t X(t) \big ) . \end{aligned}$$

Note that the vectors under \(\frac{d}{dt}\) in this formula lie in the same vector space \(E_{x(s)}\) for all t. We apply this to \(X(t)= \nabla _{s} v(t,s)\) and obtain

$$\begin{aligned} \frac{d}{dt} \bigl (P_{\gamma _s[t,1]} \nabla _{s} v(t,s)\bigr ) =P_{\gamma _s[t,1]} \big (\nabla _{t} \nabla _{s} v(t,s)\big ) . \end{aligned}$$

Integrating with respect to t and taking into account that \(\nabla _s v(0,s)=0\) yield

$$\begin{aligned} \nabla _{s} P_{\gamma _s}(v) = \nabla _{s} v(1,s) = \int _0^1 P_{\gamma _s[t,1]} \big (\nabla _{t} \nabla _{s} v(t,s)\big ) \,dt . \end{aligned}$$

Therefore, using the fact that \(P_{\gamma _s[t,1]}\) is unitary, we have

$$\begin{aligned} |\nabla _{s} P_{\gamma _s}(v)| \le \int _0^1 \bigl |\nabla _{t} \nabla _{s} v(t,s)\bigr | \,dt \le |v| \cdot K^{}_{\!E}\cdot \sup _{t\in [0,1]} \left| \frac{d \gamma _s(t)}{ds} \right| \cdot \int _0^1 \left| \frac{d\gamma _s(t)}{dt} \right| \,dt , \end{aligned}$$

where the second inequality follows from (3.10). This formula is exactly (3.8). \(\square \)

We need the following elementary fact from linear algebra (e.g. [2, §2.3]): If S is a quadratic form on \({{\mathbb {R}}}^n\), then

$$\begin{aligned} \int _{B_\rho (0)} S(x) \,dx = \frac{\nu _n\rho ^{n+2}}{n+2} {\text {trace}}(S) . \end{aligned}$$
(3.11)
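This identity is easy to verify by Monte Carlo integration; the sketch below (our own check, with an arbitrary symmetric S) samples uniformly from the ball \(B_\rho (0)\subset {{\mathbb {R}}}^3\) and compares both sides.

```python
import numpy as np
from math import gamma, pi

# Monte Carlo check of the identity, with an arbitrary quadratic form S.
rng = np.random.default_rng(0)
n, rho, M = 3, 0.7, 400_000

v = rng.standard_normal((M, n))                  # random directions ...
v /= np.linalg.norm(v, axis=1, keepdims=True)
x = v * (rho * rng.random(M) ** (1.0 / n))[:, None]   # ... uniform in the ball

S = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, -0.3],
              [0.0, -0.3, 0.5]])

nu_n = pi ** (n / 2) / gamma(n / 2 + 1)          # volume of the unit ball
vol = nu_n * rho ** n

lhs = vol * np.einsum('ij,jk,ik->i', x, S, x).mean()   # int_{B_rho} S(x) dx
rhs = nu_n * rho ** (n + 2) / (n + 2) * np.trace(S)
print(lhs, rhs)    # agree to a fraction of a percent
```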

In the following two subsections, we control the upper and lower bounds for \(\widetilde{\lambda }_k\) by following the method we established in [2].

3.2 Upper bound for \({\widetilde{\lambda }}_k\)

Lemma 3.2

For any \(u\in H^1(M,E)\), we have

$$\begin{aligned} D^{\rho }(u) \le J_{\max } \frac{\nu _n}{n+2} \, \rho ^{n+2} \, \Vert \nabla u\Vert ^2_{L^2}. \end{aligned}$$

Proof

This lemma is similar to Lemma 3.3 in [2] and the proof is essentially the same. We may assume that u is smooth. By substituting \(y=\exp _x(v)\), we have

$$\begin{aligned} \int _{B_\rho (x)} |\Gamma _{xy}(u)|^2 \,dy&= \int _{{\mathcal {B}}_\rho (0)\subset T_xM} |\Gamma _{x,\exp _x(v)}(u)|^2 J_x(v)\,dv \\&\le J_{\max } \int _{{\mathcal {B}}_\rho (0)} |\Gamma _{x,\exp _x(v)}(u)|^2 \,dv . \end{aligned}$$

Hence

$$\begin{aligned} D^{\rho }(u) \le J_{\max } A, \end{aligned}$$
(3.12)

where

$$\begin{aligned} A = \int _M \int _{{\mathcal {B}}_\rho (0)\subset T_xM} |\Gamma _{x,\exp _x(v)}(u)|^2 \,dv dx . \end{aligned}$$

Note that the right-hand side is an integral with respect to the Liouville measure on TM. Let us estimate A.

For every constant-speed minimizing geodesic \(\gamma :[0,1]\rightarrow M\), we have

$$\begin{aligned} \Gamma _{\gamma (0)\gamma (1)} = u(\gamma (0)) - P_{\gamma (0)\gamma (1)} \big (u(\gamma (1))\big ) = - \int _0^1 \frac{d}{dt} P_{\gamma (0)\gamma (t)} \big ( u(\gamma (t)) \big ) \, dt, \end{aligned}$$

and

$$\begin{aligned} \frac{d}{dt} P_{\gamma (0)\gamma (t)} \big (u(\gamma (t)) \big ) = P_{\gamma (0)\gamma (t)} \big ( \nabla _{{\dot{\gamma }}(t)} u \big ) \end{aligned}$$

by the definition of the parallel transport P. Therefore,

$$\begin{aligned} |\Gamma _{\gamma (0)\gamma (1)}| \le \int _0^1 \bigl | P_{\gamma (0)\gamma (t)} \big ( \nabla _{{\dot{\gamma }}(t)} u \big ) \bigr | \,dt = \int _0^1 |\nabla _{{\dot{\gamma }}(t)} u| \,dt, \end{aligned}$$

where the last equality is due to P being unitary. Then, by the Cauchy–Schwarz inequality,

$$\begin{aligned} |\Gamma _{\gamma (0)\gamma (1)}|^2 \le \int _0^1 |\nabla _{{\dot{\gamma }}(t)} u|^2 \,dt . \end{aligned}$$
(3.13)

For \(x\in M\) and \(v\in {\mathcal {B}}_\rho (0)\subset T_xM\), denote by \(\gamma _{x,v}\) the constant-speed geodesic with the initial data \(\gamma _{x,v}(0)=x\) and \({\dot{\gamma }}_{x,v}(0)=v\). Equivalently, \(\gamma _{x,v}(t)=\exp _x(tv)\). Applying (3.13) to \(\gamma _{x,v}\) yields

$$\begin{aligned} |\Gamma _{x,\exp _x(v)}(u)|^2 \le \int _0^1 |\nabla _{{\dot{\gamma }}_{x,v}(t)} u|^2 \,dt . \end{aligned}$$

This and the definition of A imply that

$$\begin{aligned} A \le \int _0^1 f(t)\,dt, \end{aligned}$$

where

$$\begin{aligned} f(t)=\int _M \int _{{\mathcal {B}}_\rho (0)\subset T_xM} |\nabla _{{\dot{\gamma }}_{x,v}(t)}u|^2 \,dv dx . \end{aligned}$$

Note that \({\dot{\gamma }}_{x,v}(t)\) is the image of v under the time t map of the geodesic flow. Since the geodesic flow preserves the Liouville measure and the subset \(\{(x,v)\in TM: v\in {\mathcal {B}}_{\rho }(0)\subset T_x M\}\), f(t) does not depend on t. Hence,

$$\begin{aligned} A \le f(0) = \int _M \int _{{\mathcal {B}}_\rho (0)\subset T_x M} |\nabla _{v}u|^2 \,dv dx = \int _M \frac{\nu _n\rho ^{n+2}}{n+2} \Vert \nabla u(x)\Vert ^2 \, dx = \frac{\nu _n\rho ^{n+2}}{n+2} \Vert \nabla u\Vert ^2_{L^2}, \end{aligned}$$

where the second equality follows from (3.11). This and (3.12) yield the lemma. \(\square \)

The lemma above gives an upper bound for \(\widetilde{\lambda }_k\).

Proposition 3.3

For every \(k\in {{\mathbb {N}}}_+\), we have

$$\begin{aligned} \widetilde{\lambda }_k \le J_{\max } \lambda _k \le (1+C_nK^{}_{\!M}\rho ^2) \lambda _k . \end{aligned}$$

Proof

The second inequality follows from (3.6). The first one follows immediately from combining the min-max formulae (3.2) and (3.5) and Lemma 3.2. \(\square \)

3.3 Lower bound for \({\widetilde{\lambda }}_k\)

As in [2, Section 5], define \(\psi :{{\mathbb {R}}}_{\ge 0}\rightarrow {{\mathbb {R}}}_{\ge 0}\) by

$$\begin{aligned} \psi (t) = {\left\{ \begin{array}{ll} \frac{n+2}{2\nu _n} (1-t^2), &{} 0\le t\le 1, \\ 0, &{} t\ge 1 . \end{array}\right. } \end{aligned}$$

The normalization constant \(\frac{n+2}{2\nu _n}\) is chosen so that \(\int _{{{\mathbb {R}}}^n} \psi (|x|)\,dx=1\).
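
A quick numerical check of this normalization, in the illustrative case \(n=2\) (so \(\nu _2=\pi \)), using polar coordinates and a midpoint rule:

```python
import numpy as np

# Check that psi(t) = (n+2)/(2*nu_n) * (1 - t^2) on [0,1] integrates to 1 over R^n.
# For n = 2 (nu_2 = pi), in polar coordinates the integral is int_0^1 psi(r) * 2*pi*r dr.
n, nu_n = 2, np.pi
K = 100000
r = (np.arange(K) + 0.5) / K                  # midpoints of a uniform partition of [0,1]
psi = (n + 2) / (2 * nu_n) * (1 - r**2)
integral = np.sum(psi * 2 * np.pi * r) / K    # midpoint-rule approximation
print(integral)  # ≈ 1
```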

We define \(k_{\rho }:M\times M\rightarrow {{\mathbb {R}}}_{\ge 0}\) by

$$\begin{aligned} k_{\rho }(x,y) = \rho ^{-n} \psi \big (\frac{d(x,y)}{\rho } \big ) , \end{aligned}$$
(3.14)

and \(\theta :M\rightarrow {{\mathbb {R}}}_{\ge 0}\) by

$$\begin{aligned} \theta (x) = \int _M k_{\rho }(x,y) \,dy = \int _{B_\rho (x)} k_{\rho }(x,y) \,dy . \end{aligned}$$
(3.15)

(The second identity follows from the fact that \(k_{\rho }(x,y)=0\) if \(d(x,y)\ge \rho \).)

We need the following estimates on \(\theta \):

$$\begin{aligned} J_{\min } \le \theta (x) \le J_{\max } \end{aligned}$$
(3.16)

and

$$\begin{aligned} |d_x \theta | \le C_nK^{}_{\!M}\rho \end{aligned}$$
(3.17)

for all \(x\in M\). See [2, Lemma 5.1] for a proof.

Define a convolution operator \(I:L^2(M,E)\rightarrow C^{0,1}(M,E)\) by

$$\begin{aligned} Iu(x) = \frac{1}{\theta (x)}\int _M k_{\rho }(x,y) P_{xy}(u(y)) \,dy = \frac{1}{\theta (x)}\int _{B_\rho (x)} k_{\rho }(x,y) P_{xy}(u(y)) \,dy, \end{aligned}$$

for \(x\in M\) (compare with [2, Definition 5.2]). We estimate the energy and the \(L^2\)-norm of Iu in the following two lemmas.

Lemma 3.4

For any \(u\in L^2(M,E)\), we have

$$\begin{aligned} \Vert Iu\Vert ^2_{L^2} \ge \frac{J_{\min }}{J_{\max }}\, \Vert u\Vert ^2_{L^2} - \frac{n+2}{2\nu _n\rho ^n} D^{\rho }(u) . \end{aligned}$$

Proof

Consider the weighted \(\rho \)-connection Laplacian \(\Delta ^\rho _{\theta ,k_{\rho }}\) defined in Section 2. By (2.5),

$$\begin{aligned} \Delta ^\rho _{\theta ,k_{\rho }} u(x) = \frac{1}{\theta (x)}\int _{B_\rho (x)} k_{\rho }(x,y)\bigl (u(x)-P_{xy}(u(y))\bigr ) \,dy = u(x)-Iu(x) , \end{aligned}$$

where the second equality follows from the definition (3.15). Equivalently, \(Iu=u-\Delta ^\rho _{\theta ,k_{\rho }} u\). Therefore by (2.3),

$$\begin{aligned} |\!|Iu|\!|_\theta ^2 = |\!|u|\!|_\theta ^2 - 2\langle \!\langle \Delta ^\rho _{\theta ,k_{\rho }} u, u\rangle \!\rangle _\theta + |\!|\Delta ^\rho _{\theta ,k_{\rho }} u|\!|_\theta ^2 \ge |\!|u|\!|_\theta ^2 - 2\langle \!\langle \Delta ^\rho _{\theta ,k_{\rho }} u, u\rangle \!\rangle _\theta = |\!|u|\!|_\theta ^2 - 2 D_{k_{\rho }}^{\rho }(u,u). \end{aligned}$$

Observe that

$$\begin{aligned} D_{k_{\rho }}^{\rho }(u,u) \le \frac{1}{2} \max _{x,y} k_{\rho }(x,y) \cdot D^{\rho }(u) \le \frac{n+2}{4\nu _n\rho ^n} D^{\rho }(u) . \end{aligned}$$

Thus

$$\begin{aligned} |\!|Iu|\!|_\theta ^2 \ge |\!|u|\!|_\theta ^2 - \frac{n+2}{2\nu _n\rho ^n} D^{\rho }(u) . \end{aligned}$$

Then the lemma follows from the inequality above, \(J_{\max }\ge 1\), and the following trivial estimates:

$$\begin{aligned} |\!|Iu|\!|_\theta ^2 \le \max _x\theta (x) \cdot \Vert Iu\Vert ^2_{L^2} \le J_{\max } \Vert Iu\Vert ^2_{L^2} , \end{aligned}$$

and

$$\begin{aligned} |\!|u|\!|_\theta ^2 \ge \min _x\theta (x) \cdot \Vert u\Vert ^2_{L^2} \ge J_{\min } \Vert u\Vert ^2_{L^2} , \end{aligned}$$

due to (3.16). \(\square \)

Lemma 3.5

For any \(u\in L^2(M,E)\), we have

$$\begin{aligned} \Vert \nabla Iu\Vert _{L^2} \le \left( 1+C_nK^{}_{\!M}\rho ^2\right) \sqrt{\frac{n+2}{\nu _n\rho ^{n+2}} D^{\rho }(u)} + C_nK_E\,\rho \,\Vert u\Vert _{L^2} . \end{aligned}$$

Proof

We rewrite the definition of I as

$$\begin{aligned} Iu(x) = \int _{M} \widetilde{k_{\rho }}(x,y) P_{xy}(u(y)) \,dy, \end{aligned}$$
(3.18)

where

$$\begin{aligned} \widetilde{k_{\rho }}(x,y) = \theta (x)^{-1} k_{\rho }(x,y) . \end{aligned}$$

Note that for any \(x\in M\),

$$\begin{aligned} \int _M \widetilde{k_{\rho }}(x,y)\,dy = 1 . \end{aligned}$$

Differentiating this identity, we get

$$\begin{aligned} \int _M d_x\widetilde{k_{\rho }}(x,y)\,dy = 0, \end{aligned}$$
(3.19)

where \(d_x\) denotes the differential with respect to x. Differentiating (3.18) yields

$$\begin{aligned} \nabla Iu(x) = A_0(x)+A_3(x), \end{aligned}$$

where

$$\begin{aligned} A_0(x) = \int _M d_x\widetilde{k_{\rho }}(x,y)\otimes P_{xy}(u(y))\,dy = \int _{B_\rho (x)} d_x\widetilde{k_{\rho }}(x,y)\otimes P_{xy}(u(y))\,dy, \end{aligned}$$

and

$$\begin{aligned} A_3(x) = \int _M\widetilde{k_{\rho }}(x,y) \, \nabla V_{y,u}(x) \,dy = \int _{B_\rho (x)}\widetilde{k_{\rho }}(x,y) \, \nabla V_{y,u}(x) \,dy. \end{aligned}$$

In the above, \(V_{y,u}(\cdot )\) is the section of \(E|_{B_\rho (y)}\) defined by

$$\begin{aligned} V_{y,u}(z) = P_{zy}(u(y)) . \end{aligned}$$
(3.20)

For intuition, observe that \(\nabla V_{y,u}=0\) if \(\nabla \) is a flat connection. In general, we have the estimate

$$\begin{aligned} \Vert \nabla V_{y,u}(x)\Vert \le C_n K^{}_{\!E}\, d(x,y)\, {|u(y)|} \le C_n K^{}_{\!E}\, \rho \, {|u(y)|} , \end{aligned}$$

where the norm on the left-hand side is defined by (3.1). Indeed, for any unit vector \(w\in T_xM\), we have

$$\begin{aligned} |\nabla _w V_{y,u}(x)| \le K^{}_{\!E}\, d(x,y)\, {|u(y)|}\, |w| =K^{}_{\!E}\, d(x,y)\, {|u(y)|} . \end{aligned}$$
(3.21)

This follows from Lemma 3.1 applied to u(y) in place of v, and applied to a family of minimizing geodesics from y to points x(s) where \(x(0)=x\) and \(\dot{x}(0)=w\). Note that due to the curvature bound \(\text {Sec}_M\le K_M\), the Rauch comparison theorem implies that \(\sup _t\big |\frac{d \gamma _s(t)}{ds} \big |\) in (3.8) is attained at \(t=1\) when \(\rho \) satisfies (1.2), see e.g. [13, Chapter 4, Corollary 2.8(1)].

Hence by the Cauchy–Schwarz inequality,

$$\begin{aligned} \Vert A_3(x)\Vert \le C_nK^{}_{\!E}\rho \int _{B_\rho (x)} \widetilde{k_{\rho }}(x,y) |u(y)| \, dy \le C_nK^{}_{\!E}\, \rho ^{1-\frac{n}{2}} \left( \int _{B_\rho (x)} |u(y)|^2 \, dy\right) ^{\frac{1}{2}}, \end{aligned}$$

where we used the estimates \(\widetilde{k_{\rho }}(x,y)\le C_n\rho ^{-n}\) and \({\text {vol}}(B_\rho (x))\le C_n\rho ^n\) due to (3.6). Thus

$$\begin{aligned} \Vert A_3\Vert _{L^2} = \left( \int _M \Vert A_3(x)\Vert ^2\,dx\right) ^{\frac{1}{2}} \le C_nK_E\,\rho \,\Vert u\Vert _{L^2}. \end{aligned}$$
(3.22)

Next we turn to \(A_0\). Using (3.19),

$$\begin{aligned} A_0(x)= \int _{B_\rho (x)} d_x\widetilde{k_{\rho }}(x,y)\otimes \bigl (P_{xy}(u(y))-u(x)\bigr )\,dy = -\int _{B_\rho (x)} d_x\widetilde{k_{\rho }}(x,y)\otimes \Gamma _{xy}(u)\,dy . \end{aligned}$$

Since \(d_x\widetilde{k_{\rho }}(x,y) = \theta (x)^{-1} d_x k_{\rho }(x,y) - \theta (x)^{-2} d_x\theta (x) \cdot k_{\rho }(x,y)\), we split \(A_0\) into two terms:

$$\begin{aligned} A_0(x) = A_1(x) + A_2(x), \end{aligned}$$

where

$$\begin{aligned} A_1(x) = -\theta (x)^{-1} \int _{B_\rho (x)} d_x k_{\rho }(x,y)\otimes \Gamma _{xy}(u)\,dy, \end{aligned}$$

and

$$\begin{aligned} A_2(x) = \theta (x)^{-2} d_x\theta (x) \otimes \int _{B_\rho (x)} k_{\rho }(x,y) \Gamma _{xy}(u) \,dy. \end{aligned}$$

In view of (3.6), (3.16) and (3.17),

$$\begin{aligned} \Vert A_2(x)\Vert \le C_nK^{}_{\!M}\rho ^{1-\frac{n}{2}} \left( \int _{B_\rho (x)} |\Gamma _{xy}(u)|^2\,dy\right) ^{\frac{1}{2}}. \end{aligned}$$

This inequality and the definition (3.4) of \(D^{\rho }(u)\) yield that

$$\begin{aligned} \Vert A_2\Vert _{L^2} \le C_nK^{}_{\!M}\, \rho ^{1-\frac{n}{2}} \sqrt{D^{\rho }(u)} = C_nK^{}_{\!M}\, \rho ^2 \, \sqrt{\frac{1}{\rho ^{n+2}} D^{\rho }(u)}\, . \end{aligned}$$
(3.23)

Now we estimate \(A_1\). From the definition of \(k_{\rho }\), for \(w\in T_xM\), we have

$$\begin{aligned} d_x k_{\rho }(x,y)\cdot w = \frac{n+2}{\nu _n\rho ^{n+2}}\, \langle \exp _x^{-1}(y), w\rangle , \end{aligned}$$

where \(\langle \, ,\rangle \) is the Riemannian inner product in \(T_xM\). Substituting this into the formula for \(A_1\) yields

$$\begin{aligned} A_1(x)\cdot w = -\theta (x)^{-1} \frac{n+2}{\nu _n\rho ^{n+2}} \int _{B_\rho (x)} \langle \exp _x^{-1}(y), w\rangle \, \Gamma _{xy}(u)\,dy . \end{aligned}$$

Using the substitution \(y=\exp _x(v)\), we get

$$\begin{aligned} A_1(x)\cdot w = -\theta (x)^{-1} \frac{n+2}{\nu _n\rho ^{n+2}} \int _{{\mathcal {B}}_\rho (0)\subset T_xM} \langle v,w\rangle \, \varphi (v) J_x(v)\, dv, \end{aligned}$$
(3.24)

where

$$\begin{aligned} \varphi (v) = \Gamma _{x,\exp _x(v)}(u) \in E_x . \end{aligned}$$

To proceed we need the following sublemma.

Sublemma

For any \(L^2\) function \(f:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}\), one has

$$\begin{aligned} \bigg | \int _{{\mathcal {B}}_\rho (0)\subset {{\mathbb {R}}}^n} f(v)\, v \,dv \bigg |^2 \le \frac{\nu _n\rho ^{n+2}}{n+2} \int _{{\mathcal {B}}_\rho (0)} f(v)^2 \,dv . \end{aligned}$$

Proof

Denote

$$\begin{aligned} F = \int _{{\mathcal {B}}_\rho (0)\subset {{\mathbb {R}}}^n} f(v)\, v \,dv \in {{\mathbb {R}}}^n. \end{aligned}$$

Let \(v_0\in {{\mathbb {R}}}^n\) be the unit vector in the direction of F. Then

$$\begin{aligned} |F| = \langle F,v_0\rangle = \int _{{\mathcal {B}}_\rho (0)} f(v) \langle v,v_0\rangle \, dv . \end{aligned}$$

Therefore

$$\begin{aligned} |F|^2 \le \Bigg (\int _{{\mathcal {B}}_\rho (0)} f(v)^2\, dv\Bigg ) \Bigg ( \int _{{\mathcal {B}}_\rho (0)} \langle v,v_0\rangle ^2\, dv \Bigg ) . \end{aligned}$$

By (3.11), the second integral equals \(\frac{\nu _n\rho ^{n+2}}{n+2}\). The sublemma follows. \(\square \)
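
The sublemma, too, admits a quick numerical check. In the sketch below, \(n=2\), and the test function f and grid resolution are arbitrary illustrative choices; for generic f the inequality holds with room to spare, with near-equality only when f(v) is proportional to a linear functional \(\langle v,v_0\rangle \).

```python
import numpy as np

# Check the sublemma for n = 2:
# |int_{B_rho} f(v) v dv|^2 <= nu_n * rho^(n+2)/(n+2) * int_{B_rho} f(v)^2 dv.
rng = np.random.default_rng(0)
n, rho, nu_n = 2, 0.5, np.pi

N = 400
xs = np.linspace(-rho, rho, N, endpoint=False) + rho / N   # cell midpoints
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= rho**2
cell = (2 * rho / N) ** 2

f = np.sin(3 * X) + np.cos(2 * Y) + 0.1 * rng.normal(size=X.shape)   # arbitrary test function
F = np.array([(f * X)[inside].sum(), (f * Y)[inside].sum()]) * cell  # the vector int f(v) v dv
lhs = F @ F                                                          # |F|^2
rhs = nu_n * rho ** (n + 2) / (n + 2) * (f[inside] ** 2).sum() * cell
print(lhs <= rhs)  # True
```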

We use the sublemma and (3.24) to estimate \(A_1\). Fix \(x\in M\) and an orthonormal basis \(\zeta _1,\dots ,\zeta _m\in E_x\), where \(m=\dim E_x\). Then \(\varphi (v)=\sum _j \big (f_j(v) + i \,g_j(v)\big ) \zeta _j\) for some functions \(f_j,g_j:{\mathcal {B}}_\rho (0)\rightarrow {{\mathbb {R}}}\), \(j=1,\dots ,m\). With this notation, the formula (3.24) takes the form

$$\begin{aligned} A_1(x) \cdot w = -\theta (x)^{-1} \frac{n+2}{\nu _n\rho ^{n+2}} \sum _{j=1}^m \bigl ( \left\langle a_j , w \right\rangle + i \left\langle b_j , w \right\rangle \bigr ) \zeta _j, \end{aligned}$$

for any \(w\in T_xM\), where \(a_j,b_j\in T_xM\) are given by

$$\begin{aligned} a_j = \int _{{\mathcal {B}}_\rho (0)} f_j(v) v J_x(v)\, dv,\qquad b_j = \int _{{\mathcal {B}}_\rho (0)} g_j(v) v J_x(v)\, dv . \end{aligned}$$

Hence by using (3.1),

$$\begin{aligned} \Vert A_1(x)\Vert ^2 = \left( \theta (x)^{-1} \frac{n+2}{\nu _n\rho ^{n+2}}\right) ^2 \ \sum _{j=1}^m \big (|a_j|^2+|b_j|^2\big ). \end{aligned}$$

Applying the Sublemma to \(f_j J_x\) and \(g_j J_x\) in place of f, we obtain

$$\begin{aligned} |a_j|^2 + |b_j|^2 \le \frac{\nu _n\rho ^{n+2}}{n+2} \int _{{\mathcal {B}}_\rho (0)} \big (f_j(v)^2+g_j(v)^2\big ) J_x(v)^2 \,dv . \end{aligned}$$

Thus,

$$\begin{aligned} \Vert A_1(x)\Vert ^2\le & {} \theta (x)^{-2} \frac{n+2}{\nu _n\rho ^{n+2}} \int _{{\mathcal {B}}_\rho (0)} |\varphi (v)|^2 J_x(v)^2 \,dv \\\le & {} \theta (x)^{-2} \frac{n+2}{\nu _n\rho ^{n+2}} J_{\max } \int _{B_\rho (x)} |\Gamma _{xy}(u)|^2 \,dy , \end{aligned}$$

where we used again the substitution \(y=\exp _x(v)\) and the definition of \(\varphi \) in the last inequality.

By (3.16) and (3.6), we have \(\theta (x)^{-2}J_{\max }\le 1+C_nK^{}_{\!M}\rho ^2\). Hence,

$$\begin{aligned} \Vert A_1(x)\Vert ^2 \le (1+C_nK^{}_{\!M}\rho ^2) \frac{n+2}{\nu _n\rho ^{n+2}} \int _{B_\rho (x)} |\Gamma _{xy}(u)|^2 \,dy . \end{aligned}$$

Integrating over M yields that

$$\begin{aligned} \Vert A_1\Vert _{L^2} \le (1+C_nK^{}_{\!M}\rho ^2) \sqrt{\frac{n+2}{\nu _n\rho ^{n+2}} D^{\rho }(u)}\, . \end{aligned}$$
(3.25)

From \(\nabla Iu=A_1+A_2+A_3\), the lemma follows from (3.25), (3.23), and (3.22). \(\square \)

The lower bound for \(\widetilde{\lambda }_k\) is a consequence of Lemma 3.4 and Lemma 3.5.

Proposition 3.6

For every \(k\in {{\mathbb {N}}}_+\) satisfying \(\widetilde{\lambda }_k\le \sigma _n \rho ^{-2}\), we have

$$\begin{aligned} \lambda _k^{1/2} \le \left( 1+C_nK^{}_{\!M}\rho ^2+\widetilde{\lambda }_k \rho ^2 \right) \widetilde{\lambda }_k^{1/2} + C_nK^{}_{\!E}\rho \, . \end{aligned}$$

Proof

Let \(u_1,\dots , u_k\in L^2(M,E)\) be the first k eigen-sections of \(\Delta ^\rho \) corresponding to \(\widetilde{\lambda }_1,\dots ,\widetilde{\lambda }_k\). Let \(\widetilde{L}\) be the linear span of \(u_1,\dots ,u_k\). Then \(\widetilde{L}\) realizes the infimum in the min-max formula (3.5). Hence for all \(u\in \widetilde{L}\),

$$\begin{aligned} D^{\rho }(u) \le \frac{\nu _n\rho ^{n+2}}{n+2} \widetilde{\lambda }_k \Vert u\Vert ^2_{L^2}. \end{aligned}$$
(3.26)

This and Lemma 3.4 imply that

$$\begin{aligned} \Vert Iu\Vert ^2_{L^2} \ge \frac{J_{\min }}{J_{\max }}\, \Vert u\Vert ^2_{L^2} - \frac{n+2}{2\nu _n\rho ^n} D^{\rho }(u) \ge \left( \frac{J_{\min }}{J_{\max }} - \frac{1}{2}\rho ^2 \widetilde{\lambda }_k \right) \Vert u\Vert ^2_{L^2} . \end{aligned}$$
(3.27)

Since \(\rho ^2 \widetilde{\lambda }_k\le 1\) and \(\frac{J_{\min }}{J_{\max }}>\frac{1}{2}\), the right-hand side of the inequality above is positive for all \(u\in \widetilde{L}\setminus \{0\}\); hence \(Iu\ne 0\) for all such u. In particular, I is injective on \(\widetilde{L}\).

Define \(L=I(\widetilde{L})\subset C^{0,1}(M,E)\). Since \(I|_{\widetilde{L}}\) is injective, we have \(\dim L=\dim \widetilde{L}=k\). Now the min-max formula (3.2), Lemma 3.5 and (3.27) imply that

$$\begin{aligned} \lambda _k^{1/2}\le & {} \sup _{ u \in L\setminus \{0\}} \frac{\Vert \nabla u\Vert _{L^2}}{\Vert u\Vert _{L^2}} = \sup _{ u \in \widetilde{L}\setminus \{0\}} \frac{\Vert \nabla Iu\Vert _{L^2}}{\Vert Iu\Vert _{L^2}} \\\le & {} \frac{ (1+C_nK^{}_{\!M}\rho ^2) \sqrt{\frac{n+2}{\nu _n\rho ^{n+2}} D^{\rho }(u)} + C_nK_E\,\rho \,\Vert u\Vert _{L^2} }{ \left( \frac{J_{\min }}{J_{\max }} - \frac{1}{2}\rho ^2 \widetilde{\lambda }_k \right) ^{\frac{1}{2}} \Vert u\Vert _{L^2} }. \end{aligned}$$

This and (3.26) imply that

$$\begin{aligned} \lambda _k^{1/2} \le \frac{ (1+C_nK^{}_{\!M}\rho ^2) \widetilde{\lambda }_k^{1/2} + C_nK_E\,\rho }{ \left( \frac{J_{\min }}{J_{\max }} - \frac{1}{2}\rho ^2 \widetilde{\lambda }_k \right) ^{\frac{1}{2}} } . \end{aligned}$$
(3.28)

Using the Jacobian estimate (3.6), one sees that \(\frac{J_{\min }}{J_{\max }} \ge 1-C_nK^{}_{\!M}\rho ^2\), which implies that

$$\begin{aligned} \left( \frac{J_{\min }}{J_{\max }} - \frac{1}{2}\rho ^2 \widetilde{\lambda }_k \right) ^{-\frac{1}{2}} \le 1 + C_nK^{}_{\!M}\rho ^2 + \widetilde{\lambda }_k \rho ^2 . \end{aligned}$$

Then the proposition follows. \(\square \)

Proof of Theorem 1

The estimate directly follows from Proposition 3.3 and Proposition 3.6, after converting all error terms involving \(\widetilde{\lambda }_k\) to \(\lambda _k\) by using Proposition 3.3. \(\square \)

4 Discretization of the connection Laplacian

In this section we prove Theorem 2. Let \(M^n\) be a compact, connected Riemannian manifold of dimension n without boundary, and let E be a smooth Euclidean (or Hermitian) vector bundle over M equipped with a smooth Euclidean (or Hermitian) connection \(\nabla \). Suppose \(P=\{P_{xy}\}\) is the \(\rho \)-connection given by the parallel transport canonically associated with the connection \(\nabla \). Recall that P is unitary and symmetric. Let \(\Gamma =(X_{\varepsilon },d|_{X_{\varepsilon }},\mu )\) (short for \(\Gamma _{\varepsilon }\)) be the discrete metric-measure space defined in Definition 1.3, where \(X_{\varepsilon }=\{x_i\}_{i=1}^N\) is a finite \(\varepsilon \)-net in M for \(\varepsilon \ll \rho \). We consider the \(\rho \)-connection Laplacian (2.5) on this discrete metric-measure space \(\Gamma \), acting on the restriction of the vector bundle E onto \(X_{\varepsilon }\).

The vector bundle E restricted onto \(X_{\varepsilon }\) is equipped with the norm

$$\begin{aligned} \Vert {\bar{u}}\Vert _{\Gamma }^2=\sum _{i=1}^N \mu _i |{\bar{u}}(x_i)|^2, \end{aligned}$$
(4.1)

for \({\bar{u}}\in L^2(X_{\varepsilon },E|_{X_{\varepsilon }})\). Choosing in (2.5) the weights \(\alpha (x)=1\) and \(\beta (x,y)=\frac{2(n+2)}{\nu _n\rho ^{n+2}}\), the same as in the smooth case, the graph connection Laplacian \(\Delta ^{\rho }_{\Gamma }\) is given by

$$\begin{aligned} \Delta ^{\rho }_{\Gamma } {\bar{u}}(x_i)=\frac{2(n+2)}{\nu _n \rho ^{n+2}} \sum _{d(x_i,x_j)<\rho } \mu _j \big ({\bar{u}}(x_i)-P_{x_i x_j}{\bar{u}}(x_j)\big ), \end{aligned}$$
(4.2)

and its energy (2.2) is given by

$$\begin{aligned} \Vert \delta {\bar{u}}\Vert ^2:=\frac{n+2}{\nu _n \rho ^{n+2}}\sum _i\sum _{j: d(x_i,x_j)<\rho } \mu _i \mu _j \big |{\bar{u}}(x_i)-P_{x_i x_j}{\bar{u}}(x_j) \big |^2. \end{aligned}$$
(4.3)

Denote by \(\widetilde{\lambda }_k(\Gamma )\) the k-th eigenvalue of \(\Delta _{\Gamma }^{\rho }\). Our goal is to prove that \(\widetilde{\lambda }_k(\Gamma )\) approximates the eigenvalue \(\lambda _k\) of the connection Laplacian \(\Delta \) for every k, as \(\rho +\frac{\varepsilon }{\rho }\rightarrow 0\). In light of what we have already done in Sect. 3, we only need a few more estimates.
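
To make (4.2) concrete, here is a toy assembly of a graph connection Laplacian in matrix form. Everything below — the point cloud, the measures \(\mu _i\), the unitary transports, and the constant c standing in for \(\frac{2(n+2)}{\nu _n\rho ^{n+2}}\) — is synthetic illustrative data, not the construction from the theorem. The point is only that, once \(P_{x_j x_i}=P_{x_i x_j}^{*}\), the \(\mu \)-weighted matrix is Hermitian and positive semi-definite, so its eigenvalues \(\widetilde{\lambda }_k(\Gamma )\) are real and non-negative.

```python
import numpy as np

# Toy graph connection Laplacian (4.2): N points on a circle, fibre dimension m = 2,
# random unitary "parallel transports" P_ij with the symmetry P_ji = P_ij^{-1} = P_ij^*.
rng = np.random.default_rng(1)
N, m, rho = 12, 2, 1.2
pts = np.array([[np.cos(2 * np.pi * i / N), np.sin(2 * np.pi * i / N)] for i in range(N)])
mu = rng.uniform(0.5, 1.5, N)   # toy cell measures mu_i
c = 1.0                         # stands in for 2(n+2)/(nu_n * rho^(n+2))

def rand_unitary(m):
    # the QR factorization of a complex Gaussian matrix yields a unitary factor
    q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    return q

P = {}
for i in range(N):
    for j in range(i + 1, N):
        if np.linalg.norm(pts[i] - pts[j]) < rho:   # the rho-neighbourhood rule
            U = rand_unitary(m)
            P[i, j], P[j, i] = U, U.conj().T        # symmetric: P_ji = P_ij^*

# Assemble A = W L, where W = blockdiag(mu_i I_m) and L acts as in (4.2):
# (L u)_i = c * sum_{j ~ i} mu_j (u_i - P_ij u_j).
A = np.zeros((N * m, N * m), dtype=complex)
for (i, j), U in P.items():
    A[i * m:(i + 1) * m, j * m:(j + 1) * m] = -c * mu[i] * mu[j] * U
    A[i * m:(i + 1) * m, i * m:(i + 1) * m] += c * mu[i] * mu[j] * np.eye(m)

eig = np.linalg.eigvalsh(A)     # valid because A is Hermitian
print(eig.min())                # >= 0 up to round-off: the weighted operator is PSD
```

The quadratic form of A is exactly the energy (4.3) (up to the overall constant), which is why positive semi-definiteness falls out of the assembly.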

For the upper bound for \(\widetilde{\lambda }_k(\Gamma )\), we follow Section 4 in [2] and define the discretization operator \(Q: L^2(M,E) \rightarrow L^2(X_{\varepsilon },E|_{X_{\varepsilon }})\) by

$$\begin{aligned} Qu(x_i)=\frac{1}{\mu _i}\int _{V_i} P_{x_i y}u(y)\,dy. \end{aligned}$$
(4.4)

Define an extension operator \(Q^{*}: L^2(X_{\varepsilon },E|_{X_{\varepsilon }}) \rightarrow L^2(M,E)\) by

$$\begin{aligned} Q^{*}{\bar{u}}(y)=\sum _{i=1}^{N} P_{y x_i}{\bar{u}}(x_i) 1_{V_i}(y), \end{aligned}$$
(4.5)

where \(1_{V_i}\) denotes the characteristic function of the set \(V_i\). Note that \(Q\circ Q^{*}=Id_{L^2(X_{\varepsilon },E|_{X_{\varepsilon }})}\). The energy of Qu for \(u\in L^2(M,E)\) is given by

$$\begin{aligned} \Vert \delta (Qu)\Vert ^2=\frac{n+2}{\nu _n \rho ^{n+2}}\sum _i\sum _{j: d(x_i,x_j)<\rho } \mu _i \mu _j \big |Qu(x_i)-P_{x_i x_j}Qu(x_j) \big |^2. \end{aligned}$$
(4.6)
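
The identity \(Q\circ Q^{*}=Id_{L^2(X_{\varepsilon },E|_{X_{\varepsilon }})}\) noted above reduces, on each cell \(V_i\), to \(P_{x_i y}P_{y x_i}=Id\). The toy model below checks this mechanism with synthetic data: the per-cell sample points, quadrature weights, and random unitary transports are all illustrative assumptions.

```python
import numpy as np

# Toy check that Q (averaging over cells) is a left inverse of Q* (extension):
# each cell V_i is discretized by K weighted sample points; transports are random
# unitaries with P_{y x_i} = P_{x_i y}^* (P unitary and symmetric).
rng = np.random.default_rng(2)
m, N, K = 2, 5, 4                      # fibre dim, net points x_i, samples per cell

def rand_unitary(m):
    q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    return q

w = rng.uniform(0.1, 1.0, (N, K))      # quadrature weights of the sample points in V_i
mu = w.sum(axis=1)                     # mu_i = total measure of V_i
P = [[rand_unitary(m) for _ in range(K)] for _ in range(N)]   # transports P_{x_i y}

ubar = rng.normal(size=(N, m)) + 1j * rng.normal(size=(N, m)) # a random section over X_eps

# Q* ubar: at a sample point y in V_i the extension equals P_{y x_i} ubar(x_i).
Qstar = [[P[i][s].conj().T @ ubar[i] for s in range(K)] for i in range(N)]

# Q applied back: Q u(x_i) = (1/mu_i) * sum over samples of w_y * P_{x_i y} u(y).
QQstar = np.array([
    sum(w[i, s] * (P[i][s] @ Qstar[i][s]) for s in range(K)) / mu[i]
    for i in range(N)
])
print(np.allclose(QQstar, ubar))  # True: each P_{x_i y} P_{y x_i} = Id
```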

To control the upper bound for \(\widetilde{\lambda }_k(\Gamma )\), we need to estimate \(\Vert Qu\Vert _{\Gamma }\) and \(\Vert \delta (Qu)\Vert \). We start with the following lemma as an application of Lemma 3.1.

Lemma 4.1

Let \(\rho <r_{inj}(M)/2\), \(\varepsilon <\rho /4\), and \(x_i,x_j,y,z\in M\) be given satisfying \(d(x_i,x_j)<\rho ,\,d(y,x_i)<\varepsilon ,\, d(z,x_j)<\varepsilon \). Then for any \(v\in E_z\), we have

$$\begin{aligned}\big |P_{y x_i} P_{x_i z}v-P_{yz}v \big |\le K_E (\rho +2\varepsilon ) \varepsilon |v|,\end{aligned}$$

and

$$\begin{aligned} \big | P_{y x_i}P_{x_i x_j}P_{x_j z}v -P_{yz}v \big |\le 2 K_E (\rho +2\varepsilon )\varepsilon |v|. \end{aligned}$$

Proof

Let \(V_{z,v}(x)=P_{xz}(v)\), and let \(\gamma _{y,x_i}: [0,d(y,x_i)] \rightarrow M\) be the unique minimizing geodesic from y to \(x_i\) with arclength parametrization. By definition,

$$\begin{aligned}\nabla _s V_{z,v}\big (\gamma _{y,x_i}(s)\big )=\lim _{s'\rightarrow 0}\frac{P_{s,s+s'}V_{z,v}\big (\gamma _{y,x_i}(s+s')\big )-V_{z,v}\big (\gamma _{y,x_i}(s)\big )}{s'},\end{aligned}$$

where \(P_{s,s+s'}\) denotes the parallel transport from \(\gamma _{y,x_i}(s+s')\) to \(\gamma _{y,x_i}(s)\) along the geodesic \(\gamma _{y,x_i}\). Apply \(P_{0,s}\) to both sides:

$$\begin{aligned}P_{0,s}\Big (\nabla _s V_{z,v}\big (\gamma _{y,x_i}(s)\big )\Big )=\lim _{s'\rightarrow 0}\frac{P_{0,s+s'}V_{z,v}\big (\gamma _{y,x_i}(s+s')\big )-P_{0,s} V_{z,v}\big (\gamma _{y,x_i}(s)\big )}{s'}.\end{aligned}$$

Observe that \(P_{0,\cdot } V_{z,v}\big (\gamma _{y,x_i}(\cdot )\big )\) is a curve in \(E_y\). Since P is unitary, the formula above shows that the tangent vectors of this \(E_y\)-curve have lengths bounded by \(K_E (\rho +2\varepsilon )|v|\) due to (3.21). Thus,

$$\begin{aligned}\big |P_{y x_i}V_{z,v}(x_i)-V_{z,v}(y) \big |=\Big |\int _0^{d(x_i,y)} P_{0,s}\Big (\nabla _s V_{z,v}\big (\gamma _{y,x_i}(s)\big )\Big ) \,ds\, \Big | \le K_E (\rho +2\varepsilon )\varepsilon |v|.\end{aligned}$$

Then the first conclusion directly follows from the definition \(V_{z,v}(x)=P_{xz}(v)\).

The second conclusion can be derived using the first conclusion. Namely, we apply the first conclusion with \(x_j,z,y\) in place of \(y,x_i,z\), and \(P_{yz}v\) in place of v:

$$\begin{aligned}\big | P_{x_jz} P_{zy} (P_{yz}v)-P_{x_j y}(P_{yz}v) \big | \le K_E (\rho +2\varepsilon )\varepsilon |P_{yz}v|= K_E (\rho +2\varepsilon )\varepsilon |v|.\end{aligned}$$

Since P is symmetric and unitary, the inequality above is equivalent to

$$\begin{aligned} \big | P_{y x_j}P_{x_jz} v-P_{yz}v \big | \le K_E (\rho +2\varepsilon )\varepsilon |v|. \end{aligned}$$
(4.7)

Applying the first conclusion with \(y,x_i,x_j\) in place of \(y,x_i,z\), and \(P_{x_j z}v\) in place of v gives

$$\begin{aligned} \big | P_{y x_i}P_{x_i x_j} (P_{x_j z}v)-P_{y x_j} (P_{x_j z} v) \big | \le K_E (\rho +2\varepsilon )\varepsilon |P_{x_j z}v|=K_E (\rho +2\varepsilon )\varepsilon |v|. \end{aligned}$$
(4.8)

Thus the second conclusion follows from (4.7), (4.8) and the triangle inequality. \(\square \)

The following two lemmas enable us to obtain an upper bound for \(\widetilde{\lambda }_k(\Gamma )\).

Lemma 4.2

For any \(u\in L^2(M,E)\), we have

$$\begin{aligned}\big | \Vert u\Vert _{L^2}-\Vert Qu\Vert _{\Gamma } \big |^2 \le \frac{C}{\nu _n(\rho -\varepsilon )^n}D^{\rho }(u)+C K_E^2\rho ^4\Vert u\Vert _{L^2}^2.\end{aligned}$$

Proof

Observe that \(Q^{*}\) preserves norms. Hence,

$$\begin{aligned}\big | \Vert u\Vert _{L^2}-\Vert Qu\Vert _{\Gamma } \big |= \big | \Vert u\Vert _{L^2}-\Vert Q^{*}Qu\Vert _{L^2} \big | \le \Vert u-Q^{*}Qu\Vert _{L^2}.\end{aligned}$$

By the definitions of Q and \(Q^{*}\),

$$\begin{aligned} \Vert u-Q^{*}Qu\Vert _{L^2}^2= & {} \sum _{i=1}^N \int _{V_i} \big |u(x)-P_{x x_i}Qu(x_i) \big |^2 dx \\= & {} \sum _{i=1}^N \int _{V_i} \Big |u(x)-\frac{1}{\mu _i}\int _{V_i} P_{x x_i}P_{x_i y}u(y)\, dy \Big |^2 dx. \end{aligned}$$

By the Cauchy-Schwarz inequality, we have

$$\begin{aligned}\Vert u-Q^{*}Qu\Vert _{L^2}^2 \le \sum _{i=1}^N \frac{1}{\mu _i} \int _{V_i}\int _{V_i} \big |u(x)-P_{x x_i}P_{x_i y}u(y) \big |^2 dx dy. \end{aligned}$$

Since \(V_i\subset B_{\varepsilon }(x_i)\), Lemma 4.1 yields that

$$\begin{aligned} \Vert u-Q^{*}Qu\Vert _{L^2}^2 \le \sum _{i=1}^N \frac{1}{\mu _i} \int _{V_i}\int _{V_i} \big (|u(x)-P_{xy}u(y)|+2K_E\varepsilon ^2 |u(y)| \big )^2 dxdy . \end{aligned}$$

To deal with the first term, we follow the proof of Lemma 3.4 in [2]. We fix \(x,y\in V_i\) and consider the set \(U=B_{\rho }(x)\cap B_{\rho }(y)\). Observe that U contains the ball of radius \(\rho -|xy|/2\ge \rho -\varepsilon \) centered at the midpoint between x and y. Hence we have \(\text {vol}(U)\ge C \nu _n (\rho -\varepsilon )^n\) by (3.6). Recall that P is unitary and symmetric. Then for every \(z\in U\), we have

$$\begin{aligned} |u(x)-P_{xy}u(y)|\le & {} |u(x)-P_{xz}u(z)|+|P_{xz}u(z)-P_{xy}u(y)| \\= & {} |u(x)-P_{xz}u(z)|+|u(z)-P_{zx}P_{xy}u(y)| \\\le & {} |u(x)-P_{xz}u(z)|+|u(z)-P_{zy}u(y)|+K_E\rho ^2 |u(y)|, \end{aligned}$$

where we applied Lemma 4.1 in the last inequality. Then

$$\begin{aligned} |u(x)-P_{xy}u(y)|^2\le & {} \frac{C}{{\text {vol}}(U)} \int _{U} \Big (|u(x)-P_{xz}u(z)|^2+|u(y)-P_{yz}u(z)|^2+K_E^2 \rho ^4 |u(y)|^2 \Big ) dz \\\le & {} \frac{C}{{\text {vol}}(U)}\big (F(x)+F(y)\big ) +C K_E^2 \rho ^4 |u(y)|^2, \end{aligned}$$

where \(F(x)=\int _{B_{\rho }(x)}|u(x)-P_{xz}u(z)|^2 dz\). Hence by definition (3.4), we obtain

$$\begin{aligned} \Vert u-Q^{*}Qu\Vert _{L^2}^2\le & {} \sum _{i=1}^N \frac{C}{\mu _i} \int _{V_i}\int _{V_i} \frac{1}{{\text {vol}}(U)}\big (F(x)+F(y)\big )dxdy + CK_E^2 \rho ^4 \Vert u\Vert _{L^2}^2 \\= & {} \frac{C}{{\text {vol}}(U)} D^{\rho }(u) +CK_E^2 \rho ^4 \Vert u\Vert _{L^2}^2 \le \frac{C}{\nu _n(\rho -\varepsilon )^n} D^{\rho }(u) +CK_E^2 \rho ^4 \Vert u\Vert _{L^2}^2. \end{aligned}$$

\(\square \)

Lemma 4.3

For any \(u\in L^2(M,E)\), we have

$$\begin{aligned}\Vert \delta (Qu)\Vert ^2\le \frac{n+2}{\nu _n \rho ^{n+2}}(1+4\rho ^2) D^{\rho +2\varepsilon }(u)+ C_n K_E^2 (\frac{\varepsilon }{\rho })^2 \Vert u\Vert _{L^2}^2.\end{aligned}$$

Proof

The definition of Q yields that

$$\begin{aligned}Qu(x_i)-P_{x_i x_j}Qu(x_j)=\frac{1}{\mu _i \mu _j}\int _{V_i}\int _{V_j} \big ( P_{x_i y}u(y)-P_{x_i x_j}P_{x_j z}u(z)\big ) dy dz.\end{aligned}$$

Then by the Cauchy-Schwarz inequality and the fact that P is unitary and symmetric,

$$\begin{aligned} \big |Qu(x_i)-P_{x_i x_j}Qu(x_j)\big |^2\le & {} \frac{1}{\mu _i \mu _j}\int _{V_i}\int _{V_j} \big | P_{x_i y}u(y)-P_{x_i x_j}P_{x_j z}u(z) \big |^2 dy dz\\= & {} \frac{1}{\mu _i \mu _j}\int _{V_i}\int _{V_j} \big | u(y)-P_{y x_i}P_{x_i x_j}P_{x_j z}u(z) \big |^2 dy dz. \end{aligned}$$

The parallel transport appearing in the quantity above goes along the path \([z x_j x_i y]\), whereas we need the transport along the minimizing geodesic [zy]. Thus (4.6) and Lemma 4.1 imply that

$$\begin{aligned} \Vert \delta (Qu)\Vert ^2\le & {} \frac{n+2}{\nu _n \rho ^{n+2}} \sum _i\sum _{j: d(x_i,x_j)<\rho } \int _{V_i}\int _{V_j} \big | u(y)-P_{y x_i}P_{x_i x_j}P_{x_j z}u(z) \big |^2 dy dz \\\le & {} \frac{n+2}{\nu _n \rho ^{n+2}} \sum _i\sum _{j: d(x_i,x_j)<\rho } \int _{V_i}\int _{V_j} \big (| u(y)-P_{yz}u(z) |+ 4 K_E\rho \varepsilon |u(z)| \big )^2 dy dz \\\le & {} \frac{n+2}{\nu _n \rho ^{n+2}} \sum _i\sum _{j: d(x_i,x_j)<\rho } \int _{V_i}\int _{V_j} \Big ((1+4\rho ^2)| u(y)-P_{yz}u(z) |^2+CK_E^2\varepsilon ^2|u(z)|^2 \Big ) dy dz. \end{aligned}$$

Here the last inequality follows from

$$\begin{aligned}2K_E\rho \varepsilon |u(z)| \cdot | u(y)-P_{yz}u(z) | \le \rho ^2| u(y)-P_{yz}u(z) |^2+K_E^2\varepsilon ^2|u(z)|^2.\end{aligned}$$

Since \(\bigcup _{j: d(x_i, x_j)<\rho }V_j \subset B_{\rho +2\varepsilon }(y)\) for \(y\in V_i\), we have

$$\begin{aligned} \Vert \delta (Qu)\Vert ^2\le & {} \frac{n+2}{\nu _n \rho ^{n+2}}(1+4\rho ^2) \int _{M}\int _{B_{\rho +2\varepsilon }(y)} | u(y)-P_{yz}u(z) |^2 dydz +C_n K_E^2 \frac{\varepsilon ^2 (\rho +2\varepsilon )^n}{\rho ^{n+2}}\Vert u\Vert _{L^2}^2 \\\le & {} \frac{n+2}{\nu _n \rho ^{n+2}}(1+4\rho ^2) D^{\rho +2\varepsilon }(u)+ C_n K_E^2(\frac{\varepsilon }{\rho })^2\Vert u\Vert _{L^2}^2. \end{aligned}$$

\(\square \)

The lower bound for \(\widetilde{\lambda }_k(\Gamma )\) almost immediately follows from Lemmas 3.4 and 3.5, since these two lemmas hold for any \(L^2\) section. For any \({\bar{u}}\in L^2(X_{\varepsilon },E|_{X_{\varepsilon }})\), we consider \(Q^{*}{\bar{u}} \in L^2(M,E)\) and apply those two lemmas to \(Q^{*}{\bar{u}}\). Recall that \(\Vert Q^{*}{\bar{u}}\Vert _{L^2}^2=\Vert {\bar{u}}\Vert ^2_{\Gamma }\). The only part left is to estimate \(D^{\rho }(Q^{*}{\bar{u}})\) in terms of \(\Vert \delta {\bar{u}}\Vert ^2\).

Lemma 4.4

For any \({\bar{u}}\in L^2(X_{\varepsilon },E|_{X_{\varepsilon }})\), we have

$$\begin{aligned}D^{\rho -2\varepsilon }(Q^{*}{\bar{u}}) \le \frac{\nu _n \rho ^{n+2}}{n+2}(1+4\rho ^2) \Vert \delta {\bar{u}}\Vert ^2 + C_n K_E^2\varepsilon ^2(\rho +\varepsilon )^n \Vert {\bar{u}}\Vert _{\Gamma }^2.\end{aligned}$$

Proof

Since \(B_{\rho -2\varepsilon }(y) \subset \bigcup _{j: d(x_i, x_j)<\rho }V_j \) for \(y\in V_i\), we have

$$\begin{aligned} D^{\rho -2\varepsilon }(Q^{*}{\bar{u}})= & {} \int _M \int _{B_{\rho -2\varepsilon }(y)} |Q^{*}{\bar{u}}(y)-P_{yz}Q^{*}{\bar{u}}(z)|^2 dz dy \\\le & {} \sum _i \sum _{j: d(x_i,x_j)<\rho } \int _{V_i} \int _{V_j} |Q^{*}{\bar{u}}(y)-P_{yz}Q^{*}{\bar{u}}(z)|^2 dz dy. \end{aligned}$$

By the definition of \(Q^{*}\), for any \(y\in V_i,\,z\in V_j\),

$$\begin{aligned}Q^{*}{\bar{u}}(y)-P_{yz}Q^{*}{\bar{u}}(z) =P_{y x_i}{\bar{u}}(x_i)-P_{yz}P_{z x_j}{\bar{u}}(x_j) .\end{aligned}$$

Since P is unitary and symmetric, Lemma 4.1 implies that

$$\begin{aligned} \big |Q^{*}{\bar{u}}(y)-P_{yz}Q^{*}{\bar{u}}(z)\big |^2= & {} \big |{\bar{u}}(x_i)-P_{x_i y}P_{yz}P_{z x_j}{\bar{u}}(x_j) \big |^2 \\\le & {} \big (|{\bar{u}}(x_i)-P_{x_i x_j}{\bar{u}}(x_j)|+4K_E \rho \varepsilon |{\bar{u}}(x_j)| \big )^2 \\\le & {} (1+4\rho ^2) \big |{\bar{u}}(x_i)-P_{x_i x_j}{\bar{u}}(x_j) \big |^2+ CK_E^2\varepsilon ^2 |{\bar{u}}(x_j)|^2. \end{aligned}$$

Integrating the last inequality over \(V_i,V_j\), by definition (4.3), we obtain

$$\begin{aligned} D^{\rho -2\varepsilon }(Q^{*}{\bar{u}})\le & {} (1+4\rho ^2) \sum _{i,j} \mu _i \mu _j \big |{\bar{u}}(x_i)-P_{x_i x_j}{\bar{u}}(x_j) \big |^2 + CK_E^2 \varepsilon ^2 \sum _{i,j}\mu _i \mu _j |{\bar{u}}(x_j)|^2 \\\le & {} \frac{\nu _n \rho ^{n+2}}{n+2}(1+4\rho ^2) \Vert \delta {\bar{u}}\Vert ^2 + C_n K_E^2\varepsilon ^2(\rho +\varepsilon )^n \Vert {\bar{u}}\Vert _{\Gamma }^2, \end{aligned}$$

where the last inequality used the fact that \(\sum _{i: d(x_i,x_j)<\rho } \mu _i \le {\text {vol}}(B_{\rho +\varepsilon }(x_j))\). \(\square \)

Proof of Theorem 2

The upper bound for \(\widetilde{\lambda }_k(\Gamma )\) follows from Lemmas 4.2, 4.3 and 3.2. The lower bound follows from Lemmas 4.4, 3.4 and 3.5. The calculations are straightforward, similar to Proposition 3.6. \(\square \)