1 Introduction

The main focus of our paper is on the study of spectra of quantum graphs. The notion of “quantum graph” refers to a graph \({{G}}\) considered as a one-dimensional simplicial complex and equipped with a differential operator. The spectral and scattering properties of Schrödinger operators on such structures have attracted considerable interest during the last two decades, as they provide, in particular, relevant models of nanostructured systems (we only mention recent collected works and monographs with a comprehensive bibliography: [8, 9, 24, 57]).

Let \({{G}}\) be a locally finite connected metric graph, that is, a locally finite connected combinatorial graph \({{G}}_d=({{V}},{{E}})\), where each edge \(e\in {{E}}\) is identified with a copy of the interval [0, |e|] and \(|\cdot |\) denotes the edge length. We shall always assume throughout the paper that each edge has finite length, that is, \(|\cdot |:{{E}}\rightarrow (0,\infty )\). In the Hilbert space \(L^2({{G}}) = \bigoplus _{e\in {{E}}} L^2(e)\), we can define the Hamiltonian \({\mathbf {H}}\) which acts in this space as the (negative) second derivative \(-\frac{d^2}{dx_e^2}\) on every edge \(e\in {{E}}\). To give \({\mathbf {H}}\) the meaning of a quantum mechanical energy operator, it must be self-adjoint and hence one needs to impose appropriate boundary conditions at the vertices. Kirchhoff (also known as Kirchhoff–Neumann) conditions (2.6) are the most standard ones (cf. [9]) and the corresponding operator, denoted by \({\mathbf {H}}\), is usually called a Kirchhoff (Kirchhoff–Neumann) Laplacian (we refer to Sects. 2.2–2.4 for a precise definition of the operator \({\mathbf {H}}\)). If the graph \({{G}}\) is finite (\({{G}}\) has finitely many vertices and edges), then the spectrum of \({\mathbf {H}}\) is purely discrete (see, e.g., [9]). During the last few years, a lot of effort has been put into estimating the first nonzero eigenvalue of the operator \({\mathbf {H}}\) (notice that 0 is always a simple eigenvalue if \({{G}}_d\) is connected) and also into understanding its dependence on various characteristics of the corresponding metric graph, including the number of essential vertices of the graph (vertices of degree 2 are called inessential); the number or the total length of the graph’s edges; the edge connectivity of the underlying (combinatorial) graph, etc. For further information we refer to a brief selection of recent articles [3, 4, 7, 40, 41, 44, 58].

If the graph \({{G}}\) is infinite (there are infinitely many vertices and edges), then the corresponding pre-minimal operator \({\mathbf {H}}_0\) defined by (2.7) is not automatically essentially self-adjoint. One of the standard conditions ensuring the essential self-adjointness of \({\mathbf {H}}_0\) is the existence of a positive lower bound on the edge lengths, \(\ell _*({{G}}) = \inf _{e\in {{E}}}|e|>0\) (see [9]). Only recently, several self-adjointness conditions avoiding this rather restrictive assumption have been established in [25, 43] (see Sect. 2.3 for further details). Of course, the next natural question concerns the structure of the spectrum of the operator \({\mathbf {H}}\). Clearly, the spectrum of an infinite quantum graph is not necessarily discrete, and hence one is interested in the location of the bottom of the spectrum, \(\lambda _0({\mathbf {H}})\), as well as of the bottom of the essential spectrum, \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\), of \({\mathbf {H}}\). Since the graph is infinite, many quantities of interest for finite quantum graphs (e.g., the number of vertices and edges, or the total length) are no longer suitable for these purposes and the corresponding bounds usually lead to trivial estimates. However, it is widely known that quantum graphs in a certain sense interpolate between Laplacians on Riemannian manifolds and difference Laplacians on combinatorial graphs; hence quantum graphs can be investigated by adapting techniques developed for operators on manifolds and graphs, and we explore these analogies in the present paper. Notice that this insight has already proved to be very fruitful and has led to many important results in the spectral theory of operators on metric graphs (see, e.g., [9]). Although quantum graphs are essentially operators on one-dimensional manifolds, our point of view is that the corresponding results and estimates should be of a combinatorial nature.

Our central result is a Cheeger-type estimate for quantum graphs, which establishes lower bounds for \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) in terms of the isoperimetric constant \(\alpha ({{G}})\) of the metric graph \({{G}}\) (Theorem 3.4). Although the Cheeger-type bound for (finite) quantum graphs was proved 30 years ago by Nicaise (see [50, Theorem 3.2]), we give a new purely combinatorial definition of the isoperimetric constant (see Definition 3.2) and as a result this establishes a connection with isoperimetric constants for combinatorial graphs [see Lemma 4.2 and also (4.10)–(4.11)]. To a certain extent this connection is expected (cf. Theorem 2.11 and also [10, 14, 57, 63]). Moreover, it was observed recently in [25, 43] by using the ideas from [42] that spectral properties of the operator \({\mathbf {H}}\) are closely connected with the corresponding properties of the discrete Laplacian defined in \(\ell ^2({{V}};m)\) by the expression

$$\begin{aligned} (\tau _{{{G}}} f)(v) := \frac{1}{m(v)} \sum _{u\sim v} \frac{f(v) - f(u)}{|e_{u,v}|},\quad v\in {{V}}, \end{aligned}$$
(1.1)

where the weight function \(m:{{V}}\rightarrow {\mathbb {R}}_{>0}\) is given by

$$\begin{aligned} m:v\mapsto \sum _{u\sim v}|e_{u,v}|. \end{aligned}$$
(1.2)

Using this connection, several criteria for \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) to be positive have been established in [25], albeit in terms of isoperimetric constants and volume growth of combinatorial graphs, which were introduced, respectively, in [5] and in [27, 33] [in this paper we obtain these results as simple corollaries of our estimate (3.8)].

Despite the combinatorial nature of (3.3) and (3.4), it is known that the computation of the combinatorial isoperimetric constant is an NP-hard problem [48] (see also [34, 36] for further details). Motivated by Bauer et al. [5] and Dodziuk [20], we introduce a quantity, which is sometimes interpreted as a curvature of a graph and which leads to estimates for the isoperimetric constants \(\alpha ({{G}})\) and \(\alpha _{{{\mathrm{ess}}}}({{G}})\). It also turns out to be very useful in many situations of interest, as we demonstrate with the examples of trees and antitrees. Another way to estimate isoperimetric constants is provided by the volume growth. Namely, we can apply the exponential volume growth estimates for regular Dirichlet forms from [62] (see also [33, 51]) to prove upper bounds (Brooks-type estimates [6]) for quantum graphs (see Theorem 7.1). However, this can be done under the additional assumption that the metric graph is complete with respect to the natural path metric (notice that in this case \({\mathbf {H}}_0\) is essentially self-adjoint and \({\mathbf {H}}\) coincides with its closure, see Corollary 2.3).

The quantities \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) are of fundamental importance for several reasons. From the spectral theory point of view, the positivity of \(\lambda _0({\mathbf {H}})\) or \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) corresponds to bounded invertibility or Fredholmness of the operator \({\mathbf {H}}\), respectively. Moreover, \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})=+\infty \) holds precisely when the spectrum of \({\mathbf {H}}\) is purely discrete, which is further equivalent to the compactness of the embedding of \(H^1_0({{G}})\) into \(L^2({{G}})\) (the definition of the form domain \(H^1_0({{G}})\) is given in Sect. 2.4). It is difficult to overstate the importance of \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) in applications. For example, in the theory of parabolic equations \(\lambda _0({\mathbf {H}})\) determines the rate of convergence of the system towards equilibrium. On the other hand, Cheeger-type inequalities have a venerable history. Starting from the seminal work of Cheeger [15], where a connection between the isoperimetric constant of a compact manifold and the first nontrivial eigenvalue of the Laplace–Beltrami operator was found, this topic has become an active area of research in both the manifold and graph settings. One of the most fruitful applications of Cheeger’s inequality in graph theory (this inequality was first proved independently in [19, 21] and [1, 2]) is in the study of network connectivity, namely, in constructing expanders (see [16, 18, 34, 46]). Notice also that the positivity of the isoperimetric constant (also known as a strong isoperimetric inequality) is of fundamental importance in the study of random walks on graphs (we refer to [65] for further details).

Let us now finish the introduction by describing the content of the article. First of all, we review necessary notions and facts on infinite quantum graphs in Sect. 2, where we introduce the pre-minimal operator \({\mathbf {H}}_0\) (Sect. 2.2), discuss its essential self-adjointness (Sect. 2.3) and the corresponding quadratic form \(\mathfrak {t}_{{G}}\) (Sect. 2.4), and also touch upon its connection with the difference Laplacian (1.1) (Sect. 2.5).

Section 3 contains our first main result, Theorem 3.4, which provides the Cheeger-type estimate for quantum graphs. Its proof closely follows the line of argument from the manifold case, with the only exception of Lemma 3.7, which enables us to replace the isoperimetric constant (3.12), whose form is similar to that in [50] (see also [40, 56]), by the quantity (3.3), which has a combinatorial structure. The latter also reveals connections with the combinatorial isoperimetric constant \(\alpha _{\mathrm{{comb}}}\) from [2, 19], which measures the connectedness of the underlying combinatorial graph, and with the discrete isoperimetric constant \(\alpha _d\) introduced recently in [5] for the difference Laplacian (1.1). Bearing in mind the importance of both \(\alpha _{\mathrm{{comb}}}\) and \(\alpha _d\) in applications, as well as the fact that these quantities are widely studied, we discuss these connections in Sect. 4.

As in the case of manifolds and combinatorial Laplacians, one can use the isoperimetric constant to estimate \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) not only from below but also from above (Lemma 5.1). However, the price we have to pay is the existence of a positive lower bound on the edge lengths, \(\inf _{e\in {{E}}}|e|>0\). Combining these estimates with the results from Sect. 4, we conclude that in this case the positivity of \(\lambda _0({\mathbf {H}})\) (resp., \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\)) is equivalent to the validity of a strong isoperimetric inequality, i.e., \(\alpha _{\mathrm{{comb}}}>0\) (resp., \(\alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}>0\)).

In Sect. 6, we introduce a quantity which may be interpreted as a curvature of a metric graph. Firstly, using this quantity we are able to obtain estimates on the isoperimetric constant. Secondly, we discuss its connection with the curvatures introduced for combinatorial Laplacians in [20] and for unbounded difference Laplacians in [5]. The latter, in particular, enables us to obtain simple discreteness criteria for \(\sigma ({\mathbf {H}})\) (see Lemma 6.5 and Corollary 6.6), which to a certain extent can be seen as the analogs of the discreteness criteria from [22] and [30].

The estimates in terms of the volume growth are given in Sect. 7. In Sect. 8, we consider several illustrative examples. The case of trees is treated in Sect. 8.1. We show that for trees without inessential vertices and without loose ends (vertices of degree 1), \(\lambda _0({\mathbf {H}})>0\) if and only if \(\sup _{e}|e|<\infty \). Moreover, the spectrum of \({\mathbf {H}}\) is purely discrete if and only if the number \(\#\{e\in {{E}}:|e|>\varepsilon \}\) is finite for every \(\varepsilon >0\). Notice that, under the additional symmetry assumption that the metric tree is regular, similar results (although for the so-called Neumann Laplacian) were obtained by Solomyak [61]. The case of antitrees is considered in Sect. 8.2. We provide some general estimates and also focus on two particular examples of exponentially and polynomially growing antitrees. In particular, it turns out that for a polynomially growing antitree our results provide rather good estimates for \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) (see Example 8.9). In the last subsection, we consider the case of Cayley graphs of finitely generated groups. Similar to combinatorial Laplacians, the amenability/non-amenability of the underlying group plays a crucial role.

Finally, in “Appendix A” we provide a slight improvement to the Cheeger estimates from [5] by noting that one can replace intrinsic path metrics in the definition of isoperimetric constants simply by edge weight functions having an intrinsic property.

2 Quantum graphs

2.1 Combinatorial and metric graphs

In what follows, \({{G}}_d = ({{V}}, {{E}})\) will be an unoriented graph with countably infinite sets of vertices \({{V}}\) and edges \({{E}}\). For two vertices u, \(v\in {V}\) we shall write \(u\sim v\) if there is an edge \(e_{u,v}\in {E}\) connecting u with v. For every \(v\in {V}\), we denote the set of edges incident to the vertex v by \({{E}}_v\) and

$$\begin{aligned} \deg _{{{G}}}(v):= \#\{e|\, e\in {{E}}_v\} \end{aligned}$$
(2.1)

is called the degree (or combinatorial degree) of a vertex \(v\in {{V}}\). When there is no risk of confusion which graph is involved, we shall write \(\deg \) instead of \(\deg _{{{G}}}\). By \(\#(S)\) we denote the cardinality of a given set S. A path \({\mathcal {P}}\) of length \(n\in {\mathbb {Z}}_{\ge 1}\cup \{\infty \}\) is a sequence of vertices \(\{v_0,v_1,\dots , v_n\}\) such that \(v_{k-1}\sim v_k\) for all \(k\in \{1,\dots ,n\}\). If \(v_0 = v_n\) and all intermediate vertices are distinct, then \({\mathcal {P}}\) is called a cycle.

We shall always make the following assumption.

Hypothesis 2.1

The infinite graph \({{G}}_d\) is locally finite (\(\deg (v) < \infty \) for every \(v \in {{V}}\)), connected (for any two vertices \(u,v\in {{V}}\) there is a path connecting u and v), and simple (there are no loops or multiple edges).

Next we assign each edge \(e \in {{E}}\) a finite length \(|e|\in (0,\infty )\). In this case \({{G}}:=({{V}},{{E}},|\cdot |) = ({{G}}_d,|\cdot |)\) is called a metric graph. The latter enables us to equip \({{G}}\) with a topology and metric. Namely, by assigning each edge a direction and calling one of its vertices the initial vertex \(e_o\) and the other one the terminal vertex \(e_{i}\), every edge \(e\in {{E}}\) can be identified with a copy of the interval \({\mathcal {I}}_e= [0,|e|]\); moreover, the ends of the edges that correspond to the same vertex v are identified as well. Thus, \({{G}}\) can be equipped with the natural path metric \(\varrho _0\) (the distance between two points \(x,y\in {{G}}\) is defined as the length of the “shortest” path connecting x and y). Moreover, a metric graph \({{G}}\) can be considered as a topological space (one-dimensional simplicial complex). For further details we refer to, e.g., [9, Chapter 1.3].

Also throughout this paper we shall assume the following conditions.

Hypothesis 2.2

There is a finite upper bound on the edge lengths:

$$\begin{aligned} \ell ^*({{G}}):= \sup _{e\in {{E}}} |e|<\infty . \end{aligned}$$
(2.2)

In fact, Hypothesis 2.2 is not a restriction for our purposes [see Lemma 2.8 and also Remark 2.9(i)].

Hypothesis 2.3

All vertices of \({{G}}\) are essential, that is, \(\deg (v)\ne 2\) for all \(v\in {{V}}\).

This assumption is not a restriction at all since vertices of degree 2 are irrelevant for the spectral properties of the Kirchhoff Laplacian and hence can be removed (see, e.g., [9, Remark 1.3.3]).

2.2 Kirchhoff’s Laplacian

Let \({{G}}\) be a metric graph satisfying Hypotheses 2.1–2.3. Upon identifying every \(e\in {{E}}\) with a copy of the interval \({\mathcal {I}}_e\) and considering \({{G}}\) as the union of all edges glued together at certain endpoints, let us introduce the Hilbert space \(L^2({{G}})\) of functions \(f:{{G}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} L^2({{G}}) = \bigoplus _{e\in {{E}}} L^2(e) = \Big \{f=\{f_e\}_{e\in {{E}}}\big |\, f_e\in L^2(e),\ \sum _{e\in {{E}}}\Vert f_e\Vert ^2_{L^2(e)}<\infty \Big \}. \end{aligned}$$

The subspace of compactly supported \(L^2({{G}})\) functions will be denoted by

$$\begin{aligned} L^2_c({{G}}) = \big \{f \in L^2({{G}})| \; f \ne 0 \text { only on finitely many edges } e \in {{E}}\big \}. \end{aligned}$$

Next let us equip \({{G}}\) with the Laplace operator. For every \(e\in {{E}}\) consider the maximal operator \(\mathrm{{H}}_{e,\max }\) acting on functions \(f\in H^2(e)\) as a negative second derivative. Here and below \(H^n(e)\) for \(n\in {\mathbb {Z}}_{\ge 0}\) denotes the usual Sobolev space. In particular, \(H^0(e)= L^2(e)\) and

$$\begin{aligned} H^1(e) = \{f\in AC(e):f'\in L^2(e)\},\quad H^2(e) = \{f\in H^1(e):f'\in H^1(e)\}. \end{aligned}$$

Now consider the maximal operator on \({{G}}\) defined by

$$\begin{aligned} {\mathbf {H}}_{\max } = \bigoplus _{e\in {{E}}} \mathrm{{H}}_{e,\max },\qquad \mathrm{{H}}_{e,\max } = -\frac{\mathrm{{d}}^2}{\mathrm{{d}}x_e^2},\quad {{\mathrm{dom}}}(\mathrm{{H}}_{e,\max }) = H^2(e). \end{aligned}$$
(2.3)

For every \(f_e\in H^2(e)\) the following quantities

$$\begin{aligned} f_e(e_o)&:= \lim _{x\rightarrow e_o} f_e(x),&f_e(e_i)&:= \lim _{x\rightarrow e_i} f_e(x), \end{aligned}$$
(2.4)

and

$$\begin{aligned} f_e'(e_o)&:= \lim _{x\rightarrow e_o} \frac{f_e(x) - f_e(e_o)}{|x - e_o|},&f_e'(e_i)&:= \lim _{x\rightarrow e_i} \frac{f_e(x) - f_e(e_i)}{|x - e_i|}, \end{aligned}$$
(2.5)

are well defined. The Kirchhoff (or Kirchhoff–Neumann) boundary conditions at every vertex \(v\in {{V}}\) are then given by

$$\begin{aligned} {\left\{ \begin{array}{ll} f\ \text {is continuous at}\ v,\\ \sum _{e\in {{E}}_v}f_e'(v) =0. \end{array}\right. } \end{aligned}$$
(2.6)

Imposing these boundary conditions on the maximal domain \({{\mathrm{dom}}}({\mathbf {H}}_{\max })\) and then restricting to compactly supported functions we get the pre-minimal operator

$$\begin{aligned} \begin{aligned} {\mathbf {H}}_{0}&= {\mathbf {H}}_{\max }\upharpoonright {{{\mathrm{dom}}}({\mathbf {H}}_{0})},\\&{{\mathrm{dom}}}({\mathbf {H}}_{0}) = \{f\in {{\mathrm{dom}}}({\mathbf {H}}_{\max })\cap L^2_{c}({{G}})|\, f\ \text {satisfies}\ (2.6),\ v\in {{V}}\}. \end{aligned} \end{aligned}$$
(2.7)

Integrating by parts one obtains that \({\mathbf {H}}_0\) is symmetric. We call its closure the minimal Kirchhoff Laplacian. Notice that the values of f at the vertices (2.4) and one-sided derivatives (2.5) do not depend on the choice of orientation on \({{G}}\). Moreover, the second derivative is also independent of orientation on \({{G}}\) and hence so is the operator \({\mathbf {H}}_0\).
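In more detail (spelling out the standard computation behind the last claims): for \(f,g\in {{\mathrm{dom}}}({\mathbf {H}}_{0})\), an edge-wise integration by parts written in terms of the inward derivatives (2.5) gives

$$\begin{aligned} ({\mathbf {H}}_{0} f,g)_{L^2({{G}})} = \sum _{e\in {{E}}}\int _e f_e'(x)\overline{g_e'(x)}\,\mathrm{{d}}x + \sum _{v\in {{V}}}\overline{g(v)}\sum _{e\in {{E}}_v} f_e'(v) = \int _{{{G}}} f'(x)\overline{g'(x)}\,\mathrm{{d}}x, \end{aligned}$$

where the boundary terms have been grouped vertex-wise using the continuity of g, and where they vanish due to the Kirchhoff condition (2.6) satisfied by f. In particular, \({\mathbf {H}}_0\) is symmetric and nonnegative, and this identity also yields formula (2.16) below.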

Remark 2.1

If \(\deg (v) = 1\), then Kirchhoff’s condition (2.6) at v is simply the Neumann condition

$$\begin{aligned} f_e'(v) = 0. \end{aligned}$$
(2.8)

Let us mention that one can replace it by the Dirichlet condition

$$\begin{aligned} f_e(v) = 0 \end{aligned}$$
(2.9)

and we shall consider the operator \({\mathbf {H}}_0\) with mixed boundary conditions (either Neumann or Dirichlet) at the vertices \(v\in {{V}}\) of the graph \({{G}}\) such that \(\deg (v)=1\).

In the rest of our paper, we shall denote by \({{V}}_D\) (respectively, by \({{V}}_N\)) the set of vertices \(v\in {{V}}\) such that \(\deg (v)=1\) and the Dirichlet condition (2.9) [respectively, the Neumann condition (2.8)] is imposed at v. The sets of corresponding edges will be denoted by \({{E}}_D\) and \({{E}}_N\), respectively.

2.3 Self-adjointness

In the rest of our paper we shall always assume that the graph \({{G}}_d\) is infinite, that is, both sets \({{V}}\) and \({{E}}\) are infinite (since \({{G}}_d\) is assumed to be locally finite). In this case the operator \({\mathbf {H}}_0\) is not necessarily essentially self-adjoint (that is, its closure may have nonzero deficiency indices) and finding self-adjointness criteria is a challenging open problem. The next results were proved recently in [25]. Define the weight function \(m:{{V}}\rightarrow {\mathbb {R}}_{>0}\) by

$$\begin{aligned} m:v\mapsto \sum _{e\in {{E}}_v}|e|, \end{aligned}$$
(2.10)

and then let \(p_m:{{E}}\rightarrow {\mathbb {R}}_{>0}\) be given by

$$\begin{aligned} p_m:e_{u,v}\mapsto m(u) + m(v). \end{aligned}$$
(2.11)

The path metric \(\varrho _m\) on \({{V}}\) generated by \(p_m\) is defined by

$$\begin{aligned} \varrho _m(u,v) := \inf _{{\mathcal {P}}=\{v_0,\dots ,v_n\}:v_0=u,\ v_n=v}\sum _{k} p_m(e_{v_{k-1},v_k}), \end{aligned}$$
(2.12)

where the infimum is taken over all paths connecting u and v.
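The following small numerical sketch (ours, not part of the paper; it assumes the networkx package) may help to unpack (2.10)–(2.12): the weights \(m(v)\) and \(p_m(e)\) are read off from the edge lengths, and \(\varrho _m\) (as well as the natural path metric \(\varrho _0\) from Sect. 2.1) is then an ordinary shortest-path distance, computable, e.g., by Dijkstra’s algorithm.

```python
# Illustration only: m(v) from (2.10), p_m(e) from (2.11), and the path metrics
# rho_0 and rho_m as shortest-path (Dijkstra) distances on a toy star graph
# with center "o" and leaves "a", "b", "c".
import networkx as nx

G = nx.Graph()
for u, v, length in [("o", "a", 1.0), ("o", "b", 0.5), ("o", "c", 2.0)]:
    G.add_edge(u, v, length=length)

m = {v: sum(G.edges[v, u]["length"] for u in G[v]) for v in G}   # (2.10)
for u, v in G.edges:
    G.edges[u, v]["p_m"] = m[u] + m[v]                           # (2.11)

rho_0 = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))
rho_m = dict(nx.all_pairs_dijkstra_path_length(G, weight="p_m"))  # (2.12)
print(m["o"], rho_0["a"]["c"], rho_m["a"]["c"])  # 3.5, 3.0, 10.0
```

For this finite star both metrics are trivially complete; the completeness assumption in Theorem 2.2 below only becomes a genuine restriction for infinite graphs.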

Theorem 2.2

([25]) If \(({{V}},\varrho _m)\) is complete as a metric space, then \({\mathbf {H}}_0\) is essentially self-adjoint. In particular, \({\mathbf {H}}_0\) is essentially self-adjoint if

$$\begin{aligned} \inf _{v\in {{V}}} m(v) >0. \end{aligned}$$
(2.13)

Notice that under (2.13) any two distinct vertices satisfy \(\varrho _m(u,v)\ge 2\inf _{w\in {{V}}} m(w)>0\), so every \(\varrho _m\)-Cauchy sequence is eventually constant and \(({{V}},\varrho _m)\) is automatically complete; this explains the second claim of Theorem 2.2. Replacing \(p_m\) in (2.12) by the edge length \(|\cdot |\), we end up with the natural path metric \(\varrho _0\) on \({{V}}\). Clearly, \(({{V}},\varrho _m)\) is complete whenever \(({{V}},\varrho _0)\) is, and hence we arrive at the following Gaffney-type theorem for quantum graphs.

Corollary 2.3

([25]) If \({{G}}\) equipped with a natural path metric is complete as a metric space, then \({\mathbf {H}}_0\) is essentially self-adjoint.

The next well known result (see [9, Theorem 1.4.19]) also immediately follows from Theorem 2.2.

Corollary 2.4

If

$$\begin{aligned} \ell _*({{G}}) := \inf _{e\in {{E}}} |e|>0, \end{aligned}$$
(2.14)

then \({\mathbf {H}}_0\) is essentially self-adjoint.

2.4 Quadratic forms

In this section we present the variational definition of the Kirchhoff Laplacian. Consider the quadratic form

$$\begin{aligned} \mathfrak {t}_{{G}}^0[f]:= ({\mathbf {H}}_0 f,f)_{L^2({{G}})},\qquad f\in {{\mathrm{dom}}}(\mathfrak {t}^0_{{G}}):={{\mathrm{dom}}}({\mathbf {H}}_0). \end{aligned}$$
(2.15)

For every \(f\in {{\mathrm{dom}}}({\mathbf {H}}_0)\), an integration by parts gives

$$\begin{aligned} \mathfrak {t}^0_{{G}}[f] = \int _{{G}}|f'(x)|^2\,dx = \Vert f'\Vert ^2_{L^2({{G}})}. \end{aligned}$$
(2.16)

Clearly, the form \(\mathfrak {t}_{{G}}^0\) is nonnegative. Moreover, it is closable since \({\mathbf {H}}_0\) is a nonnegative symmetric operator. Let us denote its closure by \(\mathfrak {t}_{{G}}\) and the corresponding domain by \(H^1_0({{G}}):={{\mathrm{dom}}}(\mathfrak {t}_{{G}})\). By the first representation theorem, there is a unique nonnegative self-adjoint operator corresponding to the form \(\mathfrak {t}_{{G}}\).

Definition 2.5

The self-adjoint nonnegative operator \({\mathbf {H}}\) associated with the form \(\mathfrak {t}_{{G}}\) in \(L^2({{G}})\) will be called the Kirchhoff Laplacian.

If the pre-minimal operator \({\mathbf {H}}_0\) is essentially self-adjoint, then \({\mathbf {H}}\) coincides with its closure. In the case when \({\mathbf {H}}_0\) is a symmetric operator with nontrivial deficiency indices, the operator \({\mathbf {H}}\) is the Friedrichs extension of \({\mathbf {H}}_0\).

Remark 2.6

Of course, one may consider the maximally defined form

$$\begin{aligned} \mathfrak {t}_{{G}}^{(N)}[f] := \int _{{G}}|f'(x)|^2\,dx,\qquad f\in {{\mathrm{dom}}}\left( \mathfrak {t}^{(N)}_{{G}}\right) , \end{aligned}$$
(2.17)

where

$$\begin{aligned} {{\mathrm{dom}}}\left( \mathfrak {t}^{(N)}_{{G}}\right) := \left\{ f\in L^2({{G}})|\ f\in H^1_{{{\mathrm{loc}}}}({{G}}),\ f'\in L^2({{G}})\right\} =: H^1({{G}}), \end{aligned}$$
(2.18)

and then associate a self-adjoint positive operator, let us denote it by \({\mathbf {H}}^N\), with this form in \(L^2({{G}})\). Clearly, the forms \(\mathfrak {t}_{{G}}\) and \(\mathfrak {t}_{{G}}^{(N)}\) coincide if and only if \({\mathbf {H}}\) is the unique positive self-adjoint extension of \({\mathbf {H}}_0\) (this in particular holds if \({\mathbf {H}}_0\) is essentially self-adjoint). We are not aware of a description of the self-adjoint operator \({\mathbf {H}}^N\) associated with the form \(\mathfrak {t}_{{G}}^{(N)}\) if the pre-minimal operator has nontrivial deficiency indices (however, see the recent works [12, 37]). Moreover, to the best of our knowledge, the description of the deficiency indices of \({\mathbf {H}}_0\) and of its self-adjoint extensions remains a largely open problem.

If at some vertices \(v\in {{V}}\) with \(\deg (v)=1\) the Neumann condition (2.8) is replaced by the Dirichlet condition (2.9), then the corresponding form domain will be denoted by \({{\widetilde{H}} }^1_0({{G}})\). Notice that

$$\begin{aligned} {{\widetilde{H}} }^1_0({{G}}) = \left\{ f\in H^1_0({{G}})|\ f_e(v)=0,\ v\in {{V}}_D\right\} . \end{aligned}$$
(2.19)

By abuse of notation, we shall again denote the corresponding self-adjoint operator by \({\mathbf {H}}\). The bottom of the spectrum of \({\mathbf {H}}\) can be found by using the Rayleigh quotient

$$\begin{aligned} \lambda _0({\mathbf {H}}):=\inf \sigma ({\mathbf {H}}) = \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}})\\ f\ne 0 \end{array}}\frac{({\mathbf {H}}f,f)_{L^2({{G}})}}{\Vert f\Vert ^2_{L^2({{G}})}} = \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}})\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}})}}{\Vert f\Vert ^2_{L^2({{G}})}}. \end{aligned}$$
(2.20)

Moreover, the bottom of the essential spectrum is given by

$$\begin{aligned} \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}):=\inf \sigma _{{{\mathrm{ess}}}}({\mathbf {H}}) = \sup _{{{\widetilde{{G}}} }\subset {{G}}} \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}}{\setminus }{{\widetilde{{G}}} })\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}, \end{aligned}$$
(2.21)

where the \(\sup \) is taken over all finite subgraphs \({{\widetilde{{G}}} }\) of \({{G}}\). Here for any \({{\widetilde{{G}}} } \subset {{G}}\) we define \( {{\widetilde{H}} }^1_0({{G}}{\setminus }{{\widetilde{{G}}} })\) as the set of \(H^1_0({{G}}{\setminus }{{\widetilde{{G}}} })\) functions satisfying the following boundary conditions: for vertices in \({{G}}{\setminus }{{\widetilde{{G}}} }\) having one or more edges in \({{\widetilde{{G}}} }\), we change the boundary conditions from Kirchhoff–Neumann to Dirichlet; for all other vertices in \({{G}}{\setminus } {{\widetilde{{G}}} }\), we leave them the same. This equality is known as a Persson-type theorem (or Glazman’s decomposition principle in the Russian literature, see [32]) and its proof in the case of quantum graphs is analogous to the case of Schrödinger operators (see, e.g., [17, Theorem 3.12]).

Remark 2.7

Let us mention that the following equivalence holds true

$$\begin{aligned} \lambda _0({\mathbf {H}}) = 0 \qquad \Longleftrightarrow \qquad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})=0. \end{aligned}$$
(2.22)

The implication “ \(\Leftarrow \) ” is obvious. On the other hand, \(\lambda _0({\mathbf {H}}) = 0\) together with \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\ne 0\) can hold only if 0 is an isolated eigenvalue. Moreover, (2.16) implies that 0 is an eigenvalue of \({\mathbf {H}}\) only if \({\mathbb {1}}\in L^2({{G}})\). The latter happens exactly when

$$\begin{aligned} \mathrm{{mes}}({{G}}):= \sum _{e\in {{E}}}|e|<\infty . \end{aligned}$$

and hence the equivalence (2.22) holds true whenever \(\mathrm{{mes}}({{G}})=\infty \).

On the other hand, it turns out that \({\mathbb {1}}\notin H^1_0({{G}})\) if \(\mathrm{{mes}}({{G}})<\infty \) and hence 0 is never an eigenvalue of \({\mathbf {H}}\) [see Corollary 3.5(iv)]. In particular, the latter implies that \(\mathfrak {t}_{{G}}\ne \mathfrak {t}_{{G}}^{(N)}\) if the metric graph \({{G}}\) has finite total volume, \(\mathrm{{mes}}({{G}})<\infty \). The analysis of this case is postponed to a separate publication.

If \( {{G}}_1\), \( {{G}}_2\) are finite subgraphs with \( {{G}}_1 \subseteq {{G}}_2 \subset {{G}}\), then \({{\widetilde{H}} }^1_0({{G}}{\setminus } {{G}}_2) \subseteq {{\widetilde{H}} }^1_0({{G}}{\setminus } {{G}}_1)\) in the sense that every function in \({{\widetilde{H}} }^1_0 ({{G}}{\setminus } {{G}}_2)\) can be extended to a function in \({{\widetilde{H}} }^1_0 ({{G}}{\setminus } {{G}}_1)\) by setting it equal to zero on the remaining edges. Thus,

$$\begin{aligned} \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}}{\setminus } {{G}}_2)\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus }{{G}}_2)}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus } {{G}}_2)}} \ge \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}}{\setminus } {{G}}_1)\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus } {{G}}_1)}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus }{{G}}_1)}}. \end{aligned}$$

Let \({\mathcal {K}}_{{G}}\) be the set of all finite connected subgraphs of \({{G}}\), ordered by the inclusion relation “\(\subseteq \)”. Since \({{G}}\) is connected, any two elements of \({\mathcal {K}}_{{G}}\) are contained in a third one, and hence \({\mathcal {K}}_{{G}}\) is a net. Moreover,

$$\begin{aligned} \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) = \sup _{{{\widetilde{{G}}} }\in \mathcal {K}_G} \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}}{\setminus }{{\widetilde{{G}}} })\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}} = \lim _{{{\widetilde{{G}}} }\in \mathcal {K}_G} \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}}{\setminus }{{\widetilde{{G}}} })\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}. \end{aligned}$$
(2.23)

The limit is understood in the sense of nets and in this case we will say that \({{\widetilde{{G}}} }\) tends to \({{G}}\).

The next result provides an estimate, which easily follows from (2.20) and (2.21).

Lemma 2.8

Set

$$\begin{aligned} \ell ^*_{{{\mathrm{ess}}}}({{G}}) := \inf _{{{\widetilde{{E}}} }} \sup _{e\in {{E}}{\setminus }{{\widetilde{{E}}} }} |e|, \end{aligned}$$
(2.24)

where the infimum is taken over all finite subsets \({{\widetilde{{E}}} }\) of \({{E}}\). Then

$$\begin{aligned} \lambda _0({\mathbf {H}}) \,{\le }\, \frac{\pi ^2}{\ell ^*({{G}})^2},&\quad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \,{\le }\, \frac{\pi ^2}{\ell ^*_{{{\mathrm{ess}}}}({{G}})^2}. \end{aligned}$$
(2.25)

Proof

By construction, the set \({{\widetilde{H}} }^1_c({{G}}):={{\widetilde{H}} }^1_0({{G}})\cap L^2_c({{G}})\) is a core for \(\mathfrak {t}_{{G}}\). Moreover, every \(f\in {{\widetilde{H}} }^1_c({{G}})\) admits a unique decomposition \(f=f_\mathrm{lin} + f_0\), where \(f_\mathrm{lin}\in {{\widetilde{H}} }^1_c({{G}})\) is piecewise linear on \({{G}}\) (that is, it is linear on every edge \(e\in {{E}}\)) and \(f_0\in {{\widetilde{H}} }^1_c({{G}})\) takes zero values at the vertices \({{V}}\). It is straightforward to check that

$$\begin{aligned} \mathfrak {t}_{{{G}}}[f]=\int _{{{G}}} |f'(x)|^2 dx = \int _{{{G}}} |f_\mathrm{lin}'(x)|^2 dx + \int _{{{G}}} |f_0'(x)|^2 dx = \mathfrak {t}_{{{G}}}[f_\mathrm{lin}] + \mathfrak {t}_{{{G}}}[f_0]. \end{aligned}$$
(2.26)

Now the estimates (2.25) easily follow from the decomposition (2.26). Indeed, for every \(f=f_0 \in {{\widetilde{H}} }^1_c({{G}})\) (that is, for every \(f\in {{\widetilde{H}} }^1_c({{G}})\) vanishing at all vertices)

$$\begin{aligned} \mathfrak {t}_{{G}}[f_0] = \sum _{e\in {{E}}} \Vert f_{0,e}'\Vert ^2_{L^2(e)}, \end{aligned}$$
(2.27)

where \(f_{0,e}:= f_0\upharpoonright e \in H^1_0(e)\). Noting that

$$\begin{aligned} \inf _{\begin{array}{c} f\in H^1_0([0,l])\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2}}{\Vert f\Vert ^2_{L^2}} = \left( \frac{\pi }{l}\right) ^2, \end{aligned}$$

and then taking into account (2.20) and (2.21), we arrive at (2.25). \(\square \)

Remark 2.9

A few remarks are in order:

  1. (i)

    The estimate (2.25) shows that the condition (2.2) is not a restriction since in the case \(\ell ^*({{G}})=\infty \) one immediately gets \(\lambda _0({\mathbf {H}})=\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})=0\). Moreover, in this case \(\sigma ({\mathbf {H}})\) coincides with the positive semi-axis \({\mathbb {R}}_{\ge 0}\) (see [60, Theorem 5.2]).

  2. (ii)

    The second inequality in (2.25) implies that the condition \(\ell ^*_{{{\mathrm{ess}}}}({{G}})=0\) is necessary for the spectrum of \({\mathbf {H}}\) to be purely discrete. Notice that \(\ell ^*_{{{\mathrm{ess}}}}({{G}})=0\) means that the number \(\#\{e\in {{E}}|\, |e|>\varepsilon \}\) is finite for every \(\varepsilon >0\).

  3. (iii)

    The estimates (2.25) can be slightly improved by noting that on the edges \(e\in {{E}}_N\) one can use other test functions, which allows one to replace the bound \((\pi /|e|)^2\) by \((\pi /2|e|)^2\) (a worked example is given after this remark). For example, we get the following estimate

    $$\begin{aligned} \lambda _0({\mathbf {H}}) \le \min \Big \{ \inf _{e\in {{E}}{\setminus }{{E}}_N} \left( \frac{\pi }{|e|}\right) ^2, \inf _{e\in {{E}}_N} \left( \frac{\pi }{2|e|}\right) ^2\Big \}. \end{aligned}$$
    (2.28)
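To see where the improved bound on \({{E}}_N\) comes from, identify an edge \(e\in {{E}}_N\) with \([0,|e|]\) so that the loose end corresponds to \(x=0\), and consider the test function \(f(x)=\cos \big (\tfrac{\pi x}{2|e|}\big )\) on e, extended by zero to the rest of \({{G}}\). Then \(f\in {{\widetilde{H}} }^1_0({{G}})\) (it vanishes at the other endpoint of e, while no condition is imposed at the Neumann loose end) and

$$\begin{aligned} \frac{\Vert f'\Vert ^2_{L^2({{G}})}}{\Vert f\Vert ^2_{L^2({{G}})}} = \left( \frac{\pi }{2|e|}\right) ^2 \frac{\int _0^{|e|}\sin ^2\big (\tfrac{\pi x}{2|e|}\big )\,\mathrm{{d}}x}{\int _0^{|e|}\cos ^2\big (\tfrac{\pi x}{2|e|}\big )\,\mathrm{{d}}x} = \left( \frac{\pi }{2|e|}\right) ^2, \end{aligned}$$

since both integrals equal \(|e|/2\). In view of (2.20), this is precisely the second term in (2.28).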

2.5 Connection with the difference Laplacian

In this section we restrict for simplicity to the case of Neumann boundary conditions at the loose ends, that is, \(f_e'(v)=0\) for all \(v\in {{V}}\) with \(\deg (v)=1\). Let the weight function \(m:{{V}}\rightarrow {\mathbb {R}}_{>0}\) be given by (2.10). Consider the difference Laplacian defined in \(\ell ^2({{V}};m)\) by the expression

$$\begin{aligned} (\tau _{{{G}}} f)(v) := \frac{1}{m(v)} \sum _{u\sim v} \frac{f(v) - f(u)}{|e_{u,v}|},\quad v\in {{V}}. \end{aligned}$$
(2.29)

Namely, \(\tau _{{G}}\) generates in \(\ell ^2({{V}};m)\) the pre-minimal operator

$$\begin{aligned} \begin{array}{cccc} {\mathbf {h}}_0 :&{} {{\mathrm{dom}}}({\mathbf {h}}_0) &{} \rightarrow &{} \ell ^2({{V}};m) \\ &{} f &{} \mapsto &{} \tau _{{G}}f \end{array},\qquad {{\mathrm{dom}}}({\mathbf {h}}_0):= C_c({{V}}), \end{aligned}$$
(2.30)

where \(C_c({{V}})\) is the space of finitely supported functions on \({{V}}\). The operator \({\mathbf {h}}_0\) is a nonnegative symmetric operator. Denote its Friedrichs extension by \({\mathbf {h}}\).
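For readers who prefer matrices, here is a small numerical sketch (ours, not part of the paper; it assumes numpy). On a finite weighted graph, \(\tau _{{G}}\) is the matrix \(M^{-1}L\), where L is the graph Laplacian built from the edge weights \(1/|e_{u,v}|\) and M is the diagonal matrix of the vertex weights m(v); conjugating by \(M^{1/2}\) produces a symmetric matrix, which reflects the self-adjointness of \({\mathbf {h}}\) in \(\ell ^2({{V}};m)\).

```python
# Sketch: the matrix of (2.29) on a finite weighted graph and its lowest
# eigenvalue in l^2(V; m).  Vertices are 0..3; edges carry lengths |e_{u,v}|.
import numpy as np

edges = {(0, 1): 1.0, (0, 2): 0.5, (0, 3): 2.0, (1, 2): 1.5}
n = 4
m = np.zeros(n)                       # m(v): sum of incident edge lengths, cf. (2.10)
L = np.zeros((n, n))                  # graph Laplacian with edge weights 1/|e_{u,v}|
for (u, v), length in edges.items():
    w = 1.0 / length
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w
    m[u] += length; m[v] += length

tau = L / m[:, None]                  # (tau_G f)(v) = (1/m(v)) * sum_u (f(v)-f(u))/|e_{u,v}|
sym = np.diag(np.sqrt(m)) @ tau @ np.diag(1.0 / np.sqrt(m))   # symmetric in l^2(V)
print(np.linalg.eigvalsh(sym).min())  # ~0: constants span the kernel on a finite connected graph
```

For the infinite graphs considered in this paper, \(\lambda _0({\mathbf {h}})\) is of course defined via the Friedrichs extension \({\mathbf {h}}\); the finite matrix above merely illustrates the structure of \(\tau _{{G}}\).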

It was observed in [25] that the operators \({\mathbf {H}}\) and \({\mathbf {h}}\) are closely connected (for instance, by [25, Corollary 4.1(i)], \({\mathbf {H}}_0\) and \({\mathbf {h}}_0\) are either both essentially self-adjoint or both not). In fact, it is not difficult to notice a connection between \({\mathbf {H}}\) and \({\mathbf {h}}\) by considering their quadratic forms (see [25, Remark 3.7]). Namely, let \({\mathcal {L}}= \ker ({\mathbf {H}}_{\max })\) be the kernel of \({\mathbf {H}}_{\max }\), which consists of piecewise linear functions on \({{G}}\). Every \(f\in {\mathcal {L}}\) can be identified with its values \(\{f(e_i), f(e_o)\}_{e\in {{E}}}\) on \({{V}}\) and, moreover,

$$\begin{aligned} \Vert f\Vert ^2_{L^2({{G}})} = \sum _{e\in {{E}}} |e| \frac{|f(e_i)|^2 + {{\mathrm{Re}}}(f(e_i)f(e_o)^*) + |f(e_o)|^2}{3}. \end{aligned}$$
(2.31)
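Identity (2.31) is nothing but the elementary formula for linear functions on an interval: if \(f(x)=a+(b-a)x/L\) on \([0,L]\), then

$$\begin{aligned} \int _0^{L}|f(x)|^2\,\mathrm{{d}}x = L\,\frac{|a|^2 + {{\mathrm{Re}}}(a b^*) + |b|^2}{3}, \end{aligned}$$

applied on every edge with \(L=|e|\) and a, b the values of f at the two endpoints of e.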

Now restrict ourselves to the subspace \({\mathcal {L}}_{cont} = {\mathcal {L}}\cap C_c({{G}})\). Clearly,

$$\begin{aligned} \sum _{e\in {{E}}} |e| ({|f(e_i)|^2 + |f(e_o)|^2}) = \sum _{v\in {{V}}} |f(v)|^2\sum _{e\in {{E}}_v}|e|= \Vert f\Vert ^2_{\ell ^2({{V}};m)} \end{aligned}$$

defines an equivalent norm on \({\mathcal {L}}_{cont}\) since the Cauchy–Schwarz inequality immediately implies

$$\begin{aligned} \frac{1}{6}\Vert f\Vert ^2_{\ell ^2({{V}};m)} \le \Vert f\Vert ^2_{L^2({{G}})} \le \frac{1}{2}\Vert f\Vert ^2_{\ell ^2({{V}};m)}. \end{aligned}$$
(2.32)

On the other hand, for every \(f\in {\mathcal {L}}_{cont}\) we get

$$\begin{aligned} \begin{aligned} \mathfrak {t}_{{G}}[f] = ({\mathbf {H}}f,f)_{L^2({{G}})}&= \sum _{e\in {{E}}} \int _{e} |f'({x_e})|^2 d{x_e} = \sum _{e\in {{E}}} \frac{|f(e_o) - f(e_i)|^2}{|e| }\\&=\frac{1}{2}\sum _{v\in {{V}}}\sum _{u\sim v} \frac{|f(v) - f(u)|^2}{|e_{u,v}|} =({\mathbf {h}}f,f)_{\ell ^2({{V}};m)}=:\mathfrak {t}_{{\mathbf {h}}}[f]. \end{aligned} \end{aligned}$$
(2.33)

Hence we end up with the following estimate.

Lemma 2.10

$$\begin{aligned} \lambda _0({\mathbf {H}}) \,{\le }\, 6\lambda _0({\mathbf {h}}), \qquad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \,{\le }\, 6\lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}}). \end{aligned}$$
(2.34)

Proof

Clearly, the Rayleigh quotient (2.20) together with (2.32) and (2.33) imply

$$\begin{aligned} \lambda _0({\mathbf {H}}) = \inf _{f\in H^1_0({{G}})} \frac{\mathfrak {t}_{{G}}[f]}{\Vert f\Vert ^2_{L^2({{G}})}}&\le \inf _{f\in {\mathcal {L}}_{cont}} \frac{\mathfrak {t}_{{G}}[f]}{\Vert f\Vert ^2_{L^2({{G}})}} \\&\le \inf _{f \in C_c({{V}})} \frac{\mathfrak {t}_{\mathbf {h}}[f]}{\frac{1}{6}\Vert f\Vert ^2_{\ell ^2({{V}};m)}} = 6\lambda _0({\mathbf {h}}). \end{aligned}$$

\(\square \)

If \({{G}}\) is equilateral (that is, \(|e|= 1\) for all \(e\in {{E}}\)), then \(m(v) = \deg (v)\) for all \(v\in {{V}}\) and hence \(\tau _{{G}}\) coincides with the combinatorial Laplacian

$$\begin{aligned} (\tau _\mathrm{comb} f)(v) := \frac{1}{\deg _{{G}}(v)} \sum _{u\sim v} \big (f(v) - f(u)\big ),\quad v\in {{V}}. \end{aligned}$$
(2.35)

In this particular case spectral relations between \({\mathbf {H}}\) and \({\mathbf {h}}\) have already been observed by many authors (see [63], [14, Theorem 1], [23] and [10, Theorem 3.18]).

Theorem 2.11

If \(|e|=1\) for all \(e\in {{E}}\), then

$$\begin{aligned} \lambda _0({\mathbf {h}}) = 1 - \cos \big (\sqrt{\lambda _0({\mathbf {H}})}\big ), \qquad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}}) = 1 - \cos \big (\sqrt{\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})}\big ). \end{aligned}$$
(2.36)

Remark 2.12

Actually, far more than (2.36) is known in the case of equilateral quantum graphs. In fact, there is a sort of unitary equivalence between equilateral quantum graphs and the corresponding combinatorial Laplacians (see [52, 53] and also [45]).

Hence, since \(1-\cos (\sqrt{x})\le x/2\) for all \(x\ge 0\), for equilateral graphs we obtain

$$\begin{aligned} \lambda _0({\mathbf {h}}) \le \frac{1}{2}\lambda _0({\mathbf {H}}), \qquad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}}) \le \frac{1}{2}\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}). \end{aligned}$$

The latter together with (2.34) imply that for equilateral graphs the following equivalence holds true

$$\begin{aligned}&\lambda _0({\mathbf {H}})>0\quad \big (\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})>0\big )\Longleftrightarrow \lambda _0({\mathbf {h}})>0\quad \big (\lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}})>0\big ). \end{aligned}$$
(2.37)

In fact, it was proved recently in [25, Corollary 4.1] that the equivalence (2.37) holds true if the metric graph \({{G}}\) satisfies Hypothesis 2.2. Unfortunately, there is no such simple connection as (2.36) if \({{G}}\) is not equilateral.
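To illustrate (2.36) on a concrete equilateral example (not discussed in this paper): let \({{G}}_d\) be the infinite tree in which every vertex has degree \(d\ge 3\) and let all edges have length 1. By a classical result of Kesten, the transition operator of the simple random walk on such a tree has spectrum \(\big [-\frac{2\sqrt{d-1}}{d},\frac{2\sqrt{d-1}}{d}\big ]\), so that \(\lambda _0({\mathbf {h}}) = \lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}}) = 1-\frac{2\sqrt{d-1}}{d}\). Since \(\lambda _0({\mathbf {H}})\le \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\le \pi ^2\) by (2.25), the relation (2.36) can be inverted and yields

$$\begin{aligned} \lambda _0({\mathbf {H}}) = \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) = \Big (\arccos \frac{2\sqrt{d-1}}{d}\Big )^{2}. \end{aligned}$$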

Remark 2.13

Spectral gap estimates for combinatorial Laplacians are an established topic with a vast literature because of their numerous applications (see [1, 2, 16, 18, 19, 26, 34, 65] and references therein). Recently, there has been considerable interest in the study of spectral bounds for discrete (unbounded) Laplacians on weighted graphs (see [5, 39]). On the one hand, (2.36) and (2.37) indicate that there must be analogous estimates for quantum graphs; however, we should stress that (2.36) holds only for equilateral graphs. On the other hand, these connections also indicate that spectral estimates for quantum graphs should have a combinatorial nature.

Remark 2.14

Since \(\frac{4}{\pi ^2}x\le 1-\cos (\sqrt{x})\) for all \( x\in [0,{\pi ^2}/{4}]\), (2.36) implies the following estimate for equilateral quantum graphs

$$\begin{aligned} \lambda _0({\mathbf {H}}) \le \frac{\pi ^2}{4}\lambda _0({\mathbf {h}}), \qquad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \le \frac{\pi ^2}{4}\lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}}), \end{aligned}$$

which improves (2.34). Moreover, the constant \(\pi ^2/4\) is sharp in the equilateral case. However, it remains unclear to us how sharp the estimate (2.34) is.

3 The Cheeger-type bound

For every \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) we define the boundary of \({{\widetilde{{G}}} }\) with respect to the graph \({{G}}\) as the set of all vertices \(v\in {{\widetilde{{V}}} }{\setminus }{{V}}_N\) such that either \(\deg _{{{\widetilde{{G}}} }}(v)=1\) or \(\deg _{{{\widetilde{{G}}} }}(v)<\deg _{{{G}}}(v)\), that is,

$$\begin{aligned} \partial _{{G}}{{\widetilde{{G}}} } := \big \{v\in \widetilde{{{V}}} |\ v \in {{V}}_D \ \text {or} \ \deg _{{{\widetilde{{G}}} }}(v)<\deg _{{{G}}}(v)\big \}. \end{aligned}$$
(3.1)

For a given finite subgraph \({{\widetilde{{G}}} }\subset {{G}}\) we then set

$$\begin{aligned} \deg (\partial _{{G}}{{\widetilde{{G}}} }) := \sum _{v\in \partial _{{G}}{{\widetilde{{G}}} }} \deg _{{{\widetilde{{G}}} }}(v). \end{aligned}$$
(3.2)

Remark 3.1

Let us stress that our definition of a boundary is different from the combinatorial one. In particular, we define the boundary as the set of vertices whereas the combinatorial definition counts the number of edges connecting \({{\widetilde{{V}}} }\) with its complement \({{V}}{\setminus }{{\widetilde{{V}}} }\).

Definition 3.2

The isoperimetric (or Cheeger) constant of a metric graph \({{G}}\) is defined by

$$\begin{aligned} \alpha ({{G}}) := \inf _{{{\widetilde{{G}}} } \in {\mathcal {K}}_{{G}}} \; \frac{ \deg (\partial _{{G}}{{\widetilde{{G}}} }) }{\mathrm{{mes}}({{\widetilde{{G}}} })} \in [0, \infty ), \end{aligned}$$
(3.3)

where \(\mathrm{{mes}}({{\widetilde{{G}}} })\) denotes the Lebesgue measure of \({{\widetilde{{G}}} }\), \( \mathrm{{mes}}({{\widetilde{{G}}} }) := \sum _{e\in {{\widetilde{{E}}} }} |e|\).

The isoperimetric constant at infinity is defined by

$$\begin{aligned} \alpha _{{{\mathrm{ess}}}}({{G}}) := \sup _{{{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}} \alpha ({{G}}{\setminus } {{\widetilde{{G}}} }) \in [0,\infty ]. \end{aligned}$$
(3.4)

Recall that for any \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) we consider \( {{G}}{\setminus } {{\widetilde{{G}}} }\) with the following boundary conditions: for vertices in \({{G}}{\setminus } {{\widetilde{{G}}} }\) having one or more edges in \( {{\widetilde{{G}}} }\), we change the boundary conditions from Kirchhoff–Neumann to Dirichlet; for all other vertices in \({{G}}{\setminus } {{\widetilde{{G}}} }\), we leave them the same. These boundary conditions imply that for a subgraph \({\mathcal {Y}}\in {\mathcal {K}}_{{{G}}{\setminus } {{\widetilde{{G}}} }}\),

$$\begin{aligned} \partial _{{{G}}{\setminus } {{\widetilde{{G}}} }} {{\mathcal {Y}}} = \partial _{{{G}}} {{\mathcal {Y}}}, \end{aligned}$$
(3.5)

where the left-hand side is the boundary of \({\mathcal {Y}}\) with respect to \({{G}}{\setminus } {{\widetilde{{G}}} }\) (with the new Dirichlet conditions) and the right-hand side is the boundary with respect to the original graph \({{G}}\). Hence,

$$\begin{aligned} \alpha ({{G}}{\setminus } {{\widetilde{{G}}} }) = \inf _{{\mathcal {Y}}\in {\mathcal {K}}_{ {{G}}{\setminus } {{\widetilde{{G}}} }} }\; \frac{ \deg (\partial _{ {{G}}{\setminus } {{\widetilde{{G}}} }} {\mathcal {Y}}) }{\mathrm{{mes}}( {\mathcal {Y}})} = \inf _{{\mathcal {Y}}\in {\mathcal {K}}_{ {{G}}{\setminus } {{\widetilde{{G}}} }} }\; \frac{ \deg (\partial _{ {{G}}} {\mathcal {Y}}) }{\mathrm{{mes}}( {\mathcal {Y}})} \end{aligned}$$

and \(\alpha ({{G}}{\setminus } {{G}}_1 ) \le \alpha ({{G}}{\setminus } {{G}}_2 )\) whenever \( {{G}}_1 \subseteq {{G}}_2\). Thus,

$$\begin{aligned} \alpha _{{{\mathrm{ess}}}}({{G}}) = \sup _{ {{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}} \alpha ({{G}}{\setminus } {{\widetilde{{G}}} }) = \lim _{ {{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}} \alpha ({{G}}{\setminus } {{\widetilde{{G}}} }). \end{aligned}$$
(3.6)

Remark 3.3

Choosing \({{\widetilde{{G}}} }\) as an edge \(e\in {{E}}\) or a star \({{E}}_v\) with some \(v\in {{V}}\), one gets the following simple bounds on the isoperimetric constant

$$\begin{aligned} \alpha ({{G}}) \le \frac{2}{\ell ^*({{G}})}, \qquad \alpha ({{G}}) \le \inf _{v\in {{V}}} \frac{\deg _{{G}}(v)}{m(v)}. \end{aligned}$$
(3.7)
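Indeed, assume for simplicity that \({{V}}_D=\varnothing \). If \({{\widetilde{{G}}} }\) consists of a single edge e, then \(\partial _{{G}}{{\widetilde{{G}}} }\) contains at most the two endpoints of e, each having degree one in \({{\widetilde{{G}}} }\), so that \(\deg (\partial _{{G}}{{\widetilde{{G}}} })\le 2\) and \(\mathrm{{mes}}({{\widetilde{{G}}} })=|e|\); taking the infimum over \(e\in {{E}}\) yields the first bound in (3.7). If \({{\widetilde{{G}}} }\) is the star \({{E}}_v\), then \(v\notin \partial _{{G}}{{\widetilde{{G}}} }\) (its degree does not drop), while every neighbor of v contributes at most one to \(\deg (\partial _{{G}}{{\widetilde{{G}}} })\); hence \(\deg (\partial _{{G}}{{\widetilde{{G}}} })\le \deg _{{{G}}}(v)\) and \(\mathrm{{mes}}({{\widetilde{{G}}} })=m(v)\), which yields the second bound.

Since (3.3) involves only finite connected subgraphs, it can at least be evaluated by brute force on small examples. The following sketch (ours, not part of the paper; it assumes networkx) enumerates all connected edge subsets of a small finite metric graph with one Dirichlet loose end and reports the smallest ratio in (3.3); for genuinely infinite graphs it only serves to illustrate the combinatorial nature of Definition 3.2.

```python
# Brute force over the finite connected subgraphs of a small metric graph;
# the boundary is taken as in (3.1): vertices in V_D or vertices whose degree drops.
import itertools
import networkx as nx

G = nx.Graph()
for u, v, length in [("o", "a", 1.0), ("o", "b", 0.5), ("o", "c", 2.0), ("a", "b", 1.5)]:
    G.add_edge(u, v, length=length)
V_D = {"c"}                              # Dirichlet condition (2.9) at the loose end "c"

def isoperimetric_constant(G, V_D):
    best = float("inf")
    edges = list(G.edges())
    for r in range(1, len(edges) + 1):
        for subset in itertools.combinations(edges, r):
            H = G.edge_subgraph(subset)
            if not nx.is_connected(H):
                continue
            mes = sum(G.edges[e]["length"] for e in subset)
            boundary = [v for v in H if v in V_D or H.degree(v) < G.degree(v)]
            best = min(best, sum(H.degree(v) for v in boundary) / mes)
    return best

print(isoperimetric_constant(G, V_D))    # 0.2 for this toy graph (attained by the whole graph)
```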

The next result is the analog of the famous Cheeger estimate for Laplacians on manifolds [15].

Theorem 3.4

$$\begin{aligned} \lambda _0({\mathbf {H}}) \ge \frac{1}{4}\alpha ({{G}})^2, \qquad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \ge \frac{1}{4}\alpha _{{{\mathrm{ess}}}}({{G}})^2. \end{aligned}$$
(3.8)

As an immediate corollary we get the following result.

Corollary 3.5

  1. (i)

    \({\mathbf {H}}\) is uniformly positive whenever \(\alpha ({{G}})>0\).

  2. (ii)

    \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})>0\) if \(\alpha _{{{\mathrm{ess}}}}({{G}})>0\).

  3. (iii)

    The spectrum of \({\mathbf {H}}\) is purely discrete if \(\alpha _{{{\mathrm{ess}}}}({{G}})=\infty \).

  4. (iv)

    If the metric graph \({{G}}\) has finite total volume, \(\mathrm{{mes}}({{G}})<\infty \), then \({\mathbf {H}}\) is a uniformly positive operator with purely discrete spectrum.

Proof

Clearly, we only need to prove (iv). Since \(\deg (\partial _{{G}}{{\widetilde{{G}}} })\ge 1\) and \(\mathrm{{mes}}({{\widetilde{{G}}} })\le \mathrm{{mes}}({{G}})<\infty \) for every \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) (recall that \({{G}}\) is infinite and connected), taking (3.3) into account we immediately obtain

$$\begin{aligned} \alpha ({{G}}) \ge \frac{1}{\mathrm{{mes}}({{G}})}, \end{aligned}$$
(3.9)

which together with (3.8) implies the inequality \(\lambda _0({\mathbf {H}})>0\). Next, the estimate (3.9) applied to \({{G}}{\setminus }{{\widetilde{{G}}} }\) gives \(\alpha ({{G}}{\setminus }{{\widetilde{{G}}} })\ge 1/\mathrm{{mes}}({{G}}{\setminus }{{\widetilde{{G}}} })\) for every \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\). Since \(\mathrm{{mes}}({{G}})<\infty \), the net property of \({\mathcal {K}}_{{G}}\) implies \(\mathrm{{mes}}({{G}}{\setminus }{{\widetilde{{G}}} })\rightarrow 0\) as \({{\widetilde{{G}}} }\rightarrow {{G}}\), and hence (3.4) yields \(\alpha _{{{\mathrm{ess}}}}({{G}}) =\infty \), which finishes the proof. \(\square \)

Before proving the estimates (3.8) we need several preliminary lemmas. In what follows, for every \(U\subseteq {{G}}\), we shall denote by \(\partial U\) the boundary of a set U in the sense of the natural metric topology on \({{G}}\) (see Sect. 2.1). For every measurable function \(h:{{G}}\rightarrow {\mathbb {R}}\) and every \(t\in {\mathbb {R}}\) let us define the set

$$\begin{aligned} \Omega _h(t):= \{ x \in {{G}}| \; h(x)>t\}. \end{aligned}$$
(3.10)

The next statement is known as the co-area formula and we give its proof for the sake of completeness.

Lemma 3.6

If \(h :{{G}}\rightarrow {\mathbb {R}}\) is continuous on \({{G}}\) and continuously differentiable on every edge \(e\in {{E}}\), then

$$\begin{aligned} \int _{{{G}}} |h'(x)| \; dx = \int _{\mathbb {R}}\#( \partial \Omega _h(t) ) \; dt. \end{aligned}$$
(3.11)

Proof

Assume first that \(\mathrm{supp}(h)\subset e\) for some \(e\in {{E}}\). We can identify e with the open interval (0, |e|) and hence

$$\begin{aligned} M_e := \{x \in e| \; h'(x) \ne 0\} \end{aligned}$$

can be written as \(M_e = \bigcup _{n} I_n\) for (at most countably many) disjoint open intervals \(I_n \subseteq (0, |e|)\). Since h is strictly monotone on each of these intervals,

$$\begin{aligned} \int _{{G}}|h'(x)| \; dx&= \int _e |h'(x)| \; dx =\int _{M_e} |h'(x)| \; dx \\&= \sum _{n} \int _{I_n} |h'(x)|\; dx = \sum _{n} \mathrm{{mes}}(h(I_n)) = \sum _{n} \int _{\mathbb {R}}{\mathbb {1}}_{h(I_n)}(s) \; ds. \end{aligned}$$

Here \(\mathrm{{mes}}(X)\) denotes the Lebesgue measure of \(X\subseteq {\mathbb {R}}\). Moreover, by continuity of h, it is straightforward to check that \( {\mathbb {1}}_{h(I_n)}(t) = \#( \partial \Omega _h(t) \cap I_n)\) for all \(t\in {\mathbb {R}}\). Hence we end up with

$$\begin{aligned} \sum _{n} \int _{\mathbb {R}}{\mathbb {1}}_{h(I_n)}(t) \; dt = \sum _{n} \int _{\mathbb {R}}\#( \partial \Omega _h(t) \cap I_n) \; dt = \int _{\mathbb {R}}\#( \partial \Omega _h(t) \cap M_e) \; dt. \end{aligned}$$

Now assume that \(t \in {\mathbb {R}}\) is such that \(\partial \Omega _h(t) \cap M_e^c \ne \varnothing \), where

$$\begin{aligned} M_e^c:=e{\setminus } M_e = \{x \in e| \; h'(x) = 0\} \end{aligned}$$

is the set of critical points of h. By Sard’s Theorem [59], \(h(M_e^c)\) has Lebesgue measure zero and hence

$$\begin{aligned} \int _{\mathbb {R}}\#( \partial \Omega _h(t) \cap M_e) \; dt = \int _{\mathbb {R}}\#( \partial \Omega _h(t) \cap e) \; dt. \end{aligned}$$

Assume now that \(h:{{G}}\rightarrow {\mathbb {R}}\) is an arbitrary function satisfying the assumptions. Then we get

$$\begin{aligned} \int _{{{G}}} |h'(x)| \; dx&= \sum _{e \in {{E}}} \int _e |h'(x)| \; dx \\&= \sum _{e \in {{E}}} \int _{\mathbb {R}}\#( \partial \Omega _h(t) \cap e) \; dt = \int _{\mathbb {R}}\#( \partial \Omega _h(t) \cap ({{G}}\backslash {{V}})) \; dt. \end{aligned}$$

If \(\partial \Omega _h(t) \cap {{V}}\ne \varnothing \), then \(t \in h({{V}})\). Since \({{V}}\) is countable, we arrive at (3.11). \(\square \)

Next, it will be useful to rewrite the Cheeger constant (3.3) in the following way. Let

$$\begin{aligned} {{\widetilde{\alpha }} }({{G}}) := \inf _{U \in {\mathcal {U}}_{{G}}} \frac{\#(\partial U)}{\mathrm{{mes}}(U)}, \end{aligned}$$
(3.12)

where \({\mathcal {U}}_{{G}}= \cup _{{{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}} {\mathcal {U}}_{{{\widetilde{{G}}} }}\) and

$$\begin{aligned} {\mathcal {U}}_{{{\widetilde{{G}}} }} = \{ U \subseteq {{\widetilde{{G}}} } | \; U \text { is open}, \; U \cap {{V}}_D = \varnothing \text { and } \partial U \cap {{V}}= \varnothing \}. \end{aligned}$$
(3.13)

Lemma 3.7

Let \(\alpha ({{G}})\) be defined by (3.3). Then

$$\begin{aligned} \alpha ({{G}}) = {{\widetilde{\alpha }} }({{G}}). \end{aligned}$$
(3.14)

Proof

(i) It easily follows from the definition of \({{\widetilde{\alpha }} }({{G}})\) that

$$\begin{aligned} {{\widetilde{\alpha }} }({{G}}) \le \alpha ({{G}}). \end{aligned}$$

Indeed, assume first that \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) and identify \({{\widetilde{{G}}} }\) with a closed subset of the graph. For a sufficiently small \(\varepsilon >0\), we cut out a ball \(B_\varepsilon (v)\) of radius \(\varepsilon \) at each vertex \(v\in \partial _{{G}}{{\widetilde{{G}}} }\) and obtain the set

$$\begin{aligned} U := {{\widetilde{{G}}} }\backslash \bigcup _{v \in \partial _{{G}}{{\widetilde{G}} } } B_\varepsilon (v). \end{aligned}$$

We have \(U \in {\mathcal {U}}_{{G}}\) and, moreover, \(\partial U\) has precisely \(\deg (\partial _{{G}}{{\widetilde{{G}}} })\) points. In total,

$$\begin{aligned} \frac{\#(\partial U)}{\mathrm{{mes}}(U)} = \frac{\deg (\partial _{{G}}{{\widetilde{{G}}} })}{\mathrm{{mes}}({{\widetilde{{G}}} }) - \varepsilon \deg (\partial _{{G}}{{\widetilde{{G}}} })}. \end{aligned}$$

Letting \(\varepsilon \) tend to zero, we obtain the desired inequality.

(ii) To prove the other inequality, let \(U \in {\mathcal {U}}_{{G}}\) and \({{\widetilde{{G}}} }= ({{\widetilde{{V}}} }, {{\widetilde{{E}}} })\) be the finite subgraph consisting of all edges \(e\in {{E}}\) with \(e \cap U \ne \varnothing \) and all vertices incident to such an edge. Clearly, \( \mathrm{{mes}}(U) \le \mathrm{{mes}}({{\widetilde{{G}}} })\). Also, by (3.2),

$$\begin{aligned} \deg (\partial _{{G}}{{\widetilde{{G}}} })&= \sum _{v \in \partial _{{G}}{{\widetilde{{G}}} }} \deg _{{{\widetilde{{G}}} }} (v) = \# \left\{ e \in {{\widetilde{{E}}} }| \; e \text { connects } \partial _{{G}}{{\widetilde{{G}}} }\text { and } {{\widetilde{{G}}} }\backslash \partial _{{G}}{{\widetilde{{G}}} }\right\} \\&\quad + 2 \# \left\{ e \in {{\widetilde{{E}}} }| \; \text { both vertices are in } \partial _{{G}}{{\widetilde{{G}}} }\right\} . \end{aligned}$$

Since U is open, every point of \(\partial _{{G}}{{\widetilde{{G}}} }\) is not in U. Therefore, every edge in the subgraph \({{\widetilde{{G}}} }\) connected to a vertex in \(\partial _{{G}}{{\widetilde{{G}}} }\) must contain at least one boundary point of U. If both vertices of the edge are in \(\partial _{{G}}{{\widetilde{{G}}} }\), it must even contain at least two boundary points of U. Also, since \({{V}}\cap \partial U = \varnothing \), the boundary points lie in the strict interior of each edge and therefore cannot coincide for different edges. Thus, \(\deg (\partial _{{G}}{{\widetilde{{G}}} }) \le \#(\partial U)\).

Finally, notice that \({{\widetilde{{G}}} }\) might be disconnected. If this is the case, then write \({{\widetilde{{G}}} }= \dot{\cup }_{n} {{\widetilde{{G}}} }_n\) as a disjoint, finite union of connected subgraphs \({{\widetilde{{G}}} }_n\in {\mathcal {K}}_{{G}}\). Then

$$\begin{aligned} \frac{\#(\partial U)}{\mathrm{{mes}}(U)} \ge \frac{\deg (\partial _{{G}}{{\widetilde{{G}}} })}{\mathrm{{mes}}({{\widetilde{{G}}} })} = \frac{\sum _n \deg (\partial _{{G}}{{\widetilde{{G}}} }_n)}{\sum _n \mathrm{{mes}}({{\widetilde{{G}}} }_n)} \ge \min _{n} \frac{ \deg (\partial _{{G}}{{\widetilde{{G}}} }_n)}{\mathrm{{mes}}({{\widetilde{{G}}} }_n)}, \end{aligned}$$

which implies that \({{\widetilde{\alpha }} }({{G}}) \ge \alpha ({{G}})\). \(\square \)

Now we are in a position to prove the Cheeger-type estimates (3.8).

Proof of Theorem 3.4

Let us show that the following inequality

$$\begin{aligned} \alpha ({{G}})\,\Vert f\Vert _{L^2({{G}})} \le 2\Vert f'\Vert _{L^2({{G}})} \end{aligned}$$
(3.15)

holds true for all \(f\in {{\mathrm{dom}}}(\mathfrak {t}_{{G}}^0)={{\mathrm{dom}}}({\mathbf {H}}_0)\). Without loss of generality we can restrict ourselves to real-valued functions. So, suppose \(f\in {{\mathrm{dom}}}({\mathbf {H}}_0)\) is real-valued. Observe that (see, e.g., [31, Lemma I.4.1])

$$\begin{aligned} \Vert f\Vert ^2_{L^2({{G}})} = \int _{{{G}}} f(x)^2 \; dx = \int _0^\infty \mathrm{{mes}}( \Omega _{f^2}(t)) \; dt. \end{aligned}$$

Next we want to apply Lemma 3.7 to the sets \(\Omega _{f^2}(t)\). If \(t >0\) is such that \(\partial \Omega _{f^2}(t) \cap {{V}}\ne \varnothing \), then \(t \in f^2({{V}})\) by continuity of \(f^2\). Since \({{V}}\) and hence \(f^2({{V}})\) are countable, we conclude that \(\Omega _{f^2}(t) \in {\mathcal {U}}_{{G}}\) for almost every \(t >0\). Thus, in view of Lemma 3.7 and the definition (3.12),

$$\begin{aligned} \alpha ({{G}}) \Vert f\Vert _{L^2}^2 \le \int _0^\infty \#(\partial \Omega _{f^2}(t) ) \; dt. \end{aligned}$$
(3.16)

On the other hand, applying Lemma 3.6 to \(h=f^2\) and then the Cauchy–Schwarz inequality, we get

$$\begin{aligned} \int _0^\infty \#(\partial \Omega _{f^2}(t)) dt = 2\int _{{{G}}}|f(x)f'(x)| dx \le 2\Vert f\Vert _{L^2({{G}})}\Vert f'\Vert _{L^2({{G}})}. \end{aligned}$$
(3.17)

Combining the last two inequalities, we arrive at (3.15), which together with the Rayleigh quotient (2.20) proves the first inequality in (3.8).

The proof of the second inequality in (3.8) follows the same line of reasoning since by (2.21)

$$\begin{aligned} \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\ge \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1_0({{G}}{\setminus }{{\widetilde{{G}}} })\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus }{{\widetilde{{G}}} })}}, \end{aligned}$$

for every finite subgraph \({{\widetilde{{G}}} }\) of \({{G}}\). Notice that the boundary conditions on \({{G}}{\setminus }{{\widetilde{{G}}} }\) are defined after (3.4). \(\square \)

Remark 3.8

The Cheeger estimate for finite quantum graphs was first proved in [50] (see also [56, §6] and [39]). Our result extends [50, Theorem 3.2] to the case of infinite graphs and also provides a bound on the essential spectrum of \({\mathbf {H}}\). However, our definition of the isoperimetric constant (3.3) is purely combinatorial since the infimum is taken over finite connected subgraphs of \({{G}}\), whereas the definition in [50] (see also [40, 56]) is similar to (3.12).

Let us mention that one can obtain a similar statement for the operator \({\mathbf {H}}^N\) that is related to the maximally defined quadratic form (see Remark 2.6). However, one needs to take the infimum in the definition of the isoperimetric constant over all subgraphs of finite volume.

Taking into account the equivalence (2.22), let us finish this section with the next observation.

Lemma 3.9

The following equivalence holds true

$$\begin{aligned} \alpha ({{G}}) =0 \quad \Longleftrightarrow \quad \alpha _{{{\mathrm{ess}}}}({{G}}) =0. \end{aligned}$$
(3.18)

Proof

Clearly, we only need to prove the implication \(\alpha ({{G}}) =0\ \Rightarrow \ \alpha _{{{\mathrm{ess}}}}({{G}}) =0\). Assume, to the contrary, that there is an infinite graph \({{G}}\) satisfying Hypotheses 2.1–2.3 such that \(\alpha ({{G}}) =0\) and \(\alpha _{{{\mathrm{ess}}}}({{G}})>0\). Then by (3.3), there is a sequence \(\{{{G}}_n\}\subset {\mathcal {K}}_{{G}}\) such that

$$\begin{aligned} \alpha ({{G}}) = \lim _{n\rightarrow \infty } \frac{\deg (\partial _{{G}}{{G}}_n)}{\mathrm{{mes}}({{G}}_n)} = 0. \end{aligned}$$

On the other hand, (3.4) implies that there is \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) such that \(\alpha ({{G}}{\setminus }{{\widetilde{{G}}} }) = \alpha _0>0\). In particular, taking into account (3.5), the latter is equivalent to the fact that

$$\begin{aligned} \frac{\deg (\partial _{{{G}}{\setminus }{{\widetilde{{G}}} }} {\mathcal {Y}})}{\mathrm{{mes}}({\mathcal {Y}})} = \frac{\deg (\partial _{{{G}}} {\mathcal {Y}})}{\mathrm{{mes}}({\mathcal {Y}})} \ge \alpha _0>0 \end{aligned}$$

for every finite subgraph \({\mathcal {Y}}\subset {{G}}{\setminus }{{\widetilde{{G}}} }\).

Next observe that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{\deg (\partial _{{G}}({{G}}_n{\setminus }{{\widetilde{{G}}} }))}{\mathrm{{mes}}({{G}}_n{\setminus }{{\widetilde{{G}}} })} = 0, \end{aligned}$$

which leads to a contradiction. Indeed, by construction, \(\lim _{n\rightarrow \infty }\mathrm{{mes}}({{G}}_n) = \infty \) and hence \(\mathrm{{mes}}({{G}}_n{\setminus }{{\widetilde{{G}}} }) = \mathrm{{mes}}({{G}}_n)(1+o(1))\) as \(n\rightarrow \infty \). It remains to note that

$$\begin{aligned} \deg (\partial _{{G}}{{G}}_n) - \deg ({{\widetilde{{G}}} }) \le \deg \left( \partial _{{G}}({{G}}_n{\setminus }{{\widetilde{{G}}} })\right) \le \deg (\partial _{{G}}{{G}}_n) + \deg ({{\widetilde{{G}}} }). \end{aligned}$$

\(\square \)

4 Connections with discrete isoperimetric constants

For every vertex set \(X \subseteq {{V}}\), we define its boundary and interior edges by

$$\begin{aligned} {{E}}_b(X)&= \{e \in {{E}}| \; e \text { connects } X \text { and } {{V}}\backslash X \}, \\ {{E}}_i(X)&= \{ e \in {{E}}| \; \text { all vertices incident to } e \text { are in } X \}. \end{aligned}$$

Also, for a vertex set \(X \subseteq {{V}}\) we set

$$\begin{aligned} m(X) := \sum _{v \in X} m(v), \end{aligned}$$

where \(m:{{V}}\rightarrow (0,\infty )\) is defined by (2.10) (in fact, \(m(v) = \mathrm{{mes}}({{E}}_v)\) for every \(v\in {{V}}\)). The (discrete) isoperimetric constant \(\alpha _d(Y)\) of \(Y\subseteq {{V}}\) is defined by

$$\begin{aligned} \alpha _d(Y) := \inf _{ \begin{array}{c} X \subseteq Y \\ X \text { is finite} \end{array}} \; \frac{\#({{E}}_b(X))}{m(X)} \in [0, \infty ). \end{aligned}$$
(4.1)

The discrete isoperimetric constant of the graph \({{G}}\) is then given by

$$\begin{aligned} \alpha _d({{V}}) := \inf _{ \begin{array}{c} X \subseteq {{V}}\\ X \text { is finite} \end{array}} \; \frac{\#({{E}}_b(X))}{m(X)} \in [0, \infty ). \end{aligned}$$
(4.2)

Moreover, we need the discrete isoperimetric constant at infinity

$$\begin{aligned} \alpha _d^{{{\mathrm{ess}}}}({{V}}) := \sup _{ \begin{array}{c} X \subseteq {{V}}\\ X \text { is finite} \end{array}} \alpha _d({{V}}{\setminus } X) \in [0, \infty ]. \end{aligned}$$
(4.3)
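To make the quotients in (4.1)–(4.3) concrete, the following Python sketch (a purely illustrative aside, not part of the argument) evaluates \(\#({{E}}_b(X))/m(X)\) for finite vertex sets X of the equilateral path graph over \({\mathbb {Z}}\); minimizing over the subsets of a finite window only yields an upper bound on \(\alpha _d({{V}})\).

```python
from itertools import combinations

# Equilateral path graph on Z (unit edge lengths): edges join consecutive
# integers, so every vertex has m(v) = 2.
def m(v):
    return 2.0

def boundary_edges(X):
    # number of edges of Z with exactly one endpoint in X, i.e. #(E_b(X))
    X = set(X)
    return sum(1 for v in X for w in (v - 1, v + 1) if w not in X)

def quotient(X):
    return boundary_edges(X) / sum(m(v) for v in X)

window = range(6)
best = min(quotient(X) for k in range(1, len(window) + 1)
           for X in combinations(window, k))
print(best)  # 2/12 = 1/6, attained by the full interval {0,...,5}
```

Taking the interval \(\{0,\dots ,n-1\}\) gives the quotient \(1/n\), so \(\alpha _d({{V}})=0\) for this graph, in accordance with the amenability discussion in Sect. 8.3.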

Remark 4.1

Our definition of the isoperimetric constants follows the one provided in “Appendix A” (see Remark A.4). This definition is slightly different from the one given in [5], which uses the notion of an intrinsic metric on \({{V}}\) (cf. [28]). In particular, the natural path metric \(\varrho _0\) (cf. Sect. 2.3) is intrinsic in the sense of [5, 28] and in certain cases (if, for example, \({{G}}_d\) is a tree) the corresponding definitions from [5] coincide with (4.2) and (4.3). Notice that the following Cheeger-type estimates for the discrete Laplacian (2.29)–(2.30) (see [5, Theorems 3.1 and 3.3] and Theorem A.1) hold true

$$\begin{aligned} \lambda _0({\mathbf {h}})&\ge \frac{1}{2}\alpha _d({{V}})^2,&\lambda _0^{{{\mathrm{ess}}}}({\mathbf {h}})&\ge \frac{1}{2}\alpha ^{{{\mathrm{ess}}}}_d({{V}})^2. \end{aligned}$$
(4.4)

The next result provides a connection between isoperimetric constants.

Lemma 4.2

The isoperimetric constants (3.3) and (4.2) can be related by

$$\begin{aligned} \frac{1}{2}\alpha ({{G}}) \,{\le }\, \alpha _d({{V}}), \quad \frac{2}{\alpha ({{G}})} \,{\le }\, \frac{1}{\alpha _d({{V}})} + \ell ^*({{G}}). \end{aligned}$$
(4.5)

In particular, the isoperimetric constants at infinity (3.4) and (4.3) satisfy

$$\begin{aligned} \frac{1}{2}\alpha _{{{\mathrm{ess}}}}({{G}}) \,{\le }\, \alpha ^{{{\mathrm{ess}}}}_d({{V}}), \quad \frac{2}{\alpha _{{{\mathrm{ess}}}}({{G}})} \le \frac{1}{\alpha _d^{{{\mathrm{ess}}}}({{V}})} + \ell ^*_{{{\mathrm{ess}}}}({{G}}). \end{aligned}$$
(4.6)

Proof

(i) First, let \(X \subset {{V}}\) be finite. Let also \({{\widetilde{{G}}} }= ( {{\widetilde{{V}}} }, {{\widetilde{{E}}} })\) be the finite subgraph of \({{G}}\) consisting of all edges with at least one vertex in the set X. Observe that

$$\begin{aligned} {{\widetilde{{E}}} }= \bigcup _{v\in X} {{E}}_v = {{E}}_i (X) \cup {{E}}_b(X). \end{aligned}$$

Then

$$\begin{aligned} m(X) = \sum _{v \in X} m(v) = 2 \sum _{e \in {{E}}_i(X)} |e| + \sum _{e \in {{E}}_b(X)} |e| \le 2 \sum _{e \in {{\widetilde{{E}}} }} |e| = 2\, \mathrm{{mes}}({{\widetilde{{G}}} }). \end{aligned}$$

Note that for every \(v \in X\) the whole star \({{E}}_v\) is contained in \({{\widetilde{{G}}} }\). Therefore, no vertex of \(\partial _{{G}}{{\widetilde{{G}}} }\) belongs to X. Now consider an edge e of the subgraph \({{\widetilde{{G}}} }\) which is incident to a vertex \(v\in \partial _{{G}}{{\widetilde{{G}}} }\). By the definition of \({{\widetilde{{G}}} }\), its other endpoint must lie in X. Hence

$$\begin{aligned} \deg (\partial _{{G}}{{\widetilde{{G}}} })&= \sum _{v\in \partial {{\widetilde{{G}}} }} \deg _{{{\widetilde{{G}}} }}(v) = \sum _{v\in \partial {{\widetilde{{G}}} }} \#\{e | \; e \text { connects } v \text { and } X \} \\&\le \#\{e \in {{\widetilde{{E}}} }| \; e \text { connects } X \text { and } {{V}}\backslash X \} = \#( {{E}}_b(X)). \end{aligned}$$

Splitting \({{\widetilde{{G}}} }\) into finitely many connected components as in the proof of Lemma 3.7, we arrive at the first inequality in (4.5).

To prove the second inequality, let \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\). Write \({{\widetilde{{E}}} }= {{\widetilde{{E}}} }_0 \cup {{\widetilde{{E}}} }_1 \cup {{\widetilde{{E}}} }_2\), where \({{\widetilde{{E}}} }_0\), \({{\widetilde{{E}}} }_1\), \({{\widetilde{{E}}} }_2\) are the sets of edges of the subgraph having, respectively, none, one, and two endpoints in \(\partial _{{G}}{{\widetilde{{G}}} }\). Clearly,

$$\begin{aligned} \deg (\partial _{{G}}{{\widetilde{{G}}} }) = \#({{\widetilde{{E}}} }_1) + 2 \#({{\widetilde{{E}}} }_2). \end{aligned}$$
(4.7)

Now define the finite vertex set \(X := {{\widetilde{{V}}} }\backslash \partial _{{G}}{{\widetilde{{G}}} }\). We have

$$\begin{aligned} {{E}}_i (X) = {{\widetilde{{E}}} }_0, \quad {{E}}_b (X) = {{\widetilde{{E}}} }_1. \end{aligned}$$

Thus,

$$\begin{aligned} 2 \frac{\mathrm{{mes}}({{\widetilde{{G}}} }) }{\deg (\partial _{{G}}{{\widetilde{{G}}} })}&= 2 \frac{\sum _{e\in {{\widetilde{{E}}} }_0} |e| + \sum _{e\in {{\widetilde{{E}}} }_1} |e| + \sum _{e\in {{\widetilde{{E}}} }_2} |e|}{\#({{\widetilde{{E}}} }_1) + 2 \#({{\widetilde{{E}}} }_2)} \\&= \frac{ 2 \sum _{e\in {{E}}_i(X)} |e| + \sum _{e\in {{E}}_b(X)} |e| }{\#({{E}}_b(X)) + 2 \#({{\widetilde{{E}}} }_2)} + \frac{\sum _{e\in {{E}}_b(X)} |e| + 2 \sum _{e\in {{\widetilde{{E}}} }_2} |e| }{\#({{E}}_b(X)) + 2 \#({{\widetilde{{E}}} }_2)} \\&= \frac{ m(X) }{\#({{E}}_b(X)) + 2 \#({{\widetilde{{E}}} }_2)} + \frac{\sum _{e\in {{E}}_b(X)} |e| + 2 \sum _{e\in {{\widetilde{{E}}} }_2} |e| }{\#({{E}}_b(X)) + 2 \#({{\widetilde{{E}}} }_2)}\\&\le \frac{ m(X) }{\#({{E}}_b(X))} + \frac{\sum _{e\in {{E}}_b(X)} |e| + 2 \sum _{e\in {{\widetilde{{E}}} }_2} |e| }{\#({{E}}_b(X)) + 2 \#({{\widetilde{{E}}} }_2)} \le \frac{ m(X) }{\#({{E}}_b(X))} + \sup _{e\in {{E}}} |e|. \end{aligned}$$

(ii) To prove (4.6), first let \(X \subseteq {{V}}\) be a finite and connected set of vertices (connected in the sense that any two vertices in X can be joined by a path passing only through vertices in X). Then the subgraph \({{\widetilde{{G}}} }_X \subseteq {{G}}\) consisting of all edges with both vertices in X is finite and connected. Now note that for a finite vertex set \(Y \subseteq {{V}}{\setminus } X\), the subgraph \({{\widetilde{{G}}} }_Y\) defined above is contained in \({{G}}{\setminus } {{\widetilde{{G}}} }_X\). Hence, taking into account (3.5) and using the same line of reasoning as in (i), we get \(\alpha ( {{G}}{\setminus } {{\widetilde{{G}}} }_X) \le 2\alpha _d ({{V}}{\setminus } X)\). Finally, choose an increasing sequence \(\{X_n\} \subseteq {{V}}\) of finite and connected vertex sets such that every finite vertex set \(X \subseteq {{V}}\) is eventually contained in \(X_n\). Then the corresponding sequence \(\{{{\widetilde{{G}}} }_n\} \subseteq {\mathcal {K}}_{{{G}}}\) of subgraphs is increasing and every finite, connected subgraph \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) is eventually contained in \({{\widetilde{{G}}} }_n\). In view of (3.6), we obtain the first inequality in (4.6) by taking limits.

To prove the second, for a subgraph \({{G}}_0 \in {\mathcal {K}}_{{G}}\), choose X to be the set of vertices in \({{G}}_0\). Let \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{{G}}{\setminus } {{G}}_0}\). If a vertex v is both in \({{\widetilde{{V}}} }\) and in X, then it has at least one incident edge which lies in the cut out graph \({{G}}_0\) and therefore \(v \in \partial _{{G}}{{\widetilde{{G}}} }\). Thus, the vertex set \(Y = {{\widetilde{{V}}} }\backslash \partial _{{G}}{{\widetilde{{G}}} }\) satisfies \(Y \cap X = \varnothing \). Refining the previous estimate,

$$\begin{aligned} 2 \frac{\mathrm{{mes}}({{\widetilde{{G}}} }) }{\deg (\partial _{{G}}{{\widetilde{{G}}} })} \le \frac{ m(Y) }{\#({{E}}_b(Y))} + \frac{\sum _{e\in {{E}}_b(Y)} |e| + 2 \sum _{e\in {{\widetilde{{E}}} }_2} |e| }{\#({{E}}_b(Y)) + 2 \#({{\widetilde{{E}}} }_2)} \le \frac{ m(Y) }{\#({{E}}_b(Y))} + \ell ^*({{G}}{\setminus } {{G}}_0), \end{aligned}$$

and hence

$$\begin{aligned} \frac{2}{\alpha ({{G}}{\setminus } {{G}}_0)} \le \frac{1}{\alpha _d({{V}}{\setminus } X)} + \ell ^*({{G}}{\setminus } {{G}}_0). \end{aligned}$$

Choosing an increasing sequence \(\{{{G}}_n\} \subseteq {\mathcal {K}}_{{G}}\) such that every \({{G}}_0 \in {\mathcal {K}}_{{G}}\) is eventually contained in \({{G}}_n\) and applying the same limit argument as before, we arrive at the second inequality in (4.6). \(\square \)

Remark 4.3

It can be seen from examples that the estimates (4.5) and (4.6) are sharp. Indeed, for the equilateral Bethe lattice (see Example 8.3), equality holds in the second inequalities in (4.5) and (4.6) [cf. (8.3)].

Combining (4.5) with Corollary 3.5, we obtain Theorem 4.18 from [25].

Corollary 4.4

([25])

  1. (i)

    \(\lambda _0({\mathbf {H}})>0\) if \(\alpha _d({{V}})>0\).

  2. (ii)

    \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})>0\) if \(\alpha _d^{{{\mathrm{ess}}}}({{V}})>0\).

  3. (iii)

    The spectrum of \({\mathbf {H}}\) is purely discrete if the number \(\#\{e\in {{E}}:|e|>\varepsilon \}\) is finite for every \(\varepsilon >0\) and \(\alpha _d^{{{\mathrm{ess}}}}({{V}})=\infty \).

Proof

We only need to mention that \(\ell ^*_{{{\mathrm{ess}}}}({{G}}) = 0\) if and only if the number \(\#\{e\in {{E}}:|e|>\varepsilon \}\) is finite for every \(\varepsilon >0\). Moreover, in this case both inequalities in (4.6) become equalities and hence \(\alpha ^{{{\mathrm{ess}}}}({{G}}) = 2\alpha _d^{{{\mathrm{ess}}}}({{V}})\). \(\square \)

Finally, let us mention that in the case of equilateral graphs the discrete isoperimetric constants coincide with the combinatorial isoperimetric constants introduced in [21]:

$$\begin{aligned} \alpha _{\mathrm{{comb}}}({{V}}) \,{=}\, \inf _{\begin{array}{c} X \subseteq {{V}}\\ X \text {\ is finite} \end{array}} \frac{\#(\partial X)}{\deg (X)}, \quad \alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}({{V}}) \,{=}\, \sup _{ \begin{array}{c} X \subseteq {{V}}\\ X \text { is finite} \end{array}} \alpha _{\mathrm{{comb}}}({{V}}{\setminus } X) \end{aligned}$$
(4.8)

Comparing (4.8) with (4.2) and (4.3) and noting that

$$\begin{aligned} \ell _*({{G}})\deg _{{G}}(v)\le m(v) \le \ell ^*({{G}})\deg _{{G}}(v) \end{aligned}$$

for all \(v\in {{V}}\), one easily derives the estimates

$$\begin{aligned} \frac{\alpha _{\mathrm{{comb}}}({{V}})}{\ell ^*({{G}})} \,{\le }\, \alpha _d({{V}}) \,{\le }\, \frac{\alpha _{\mathrm{{comb}}}({{V}})}{\ell _*({{G}})}, \quad \frac{\alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}({{V}})}{\ell _{{{\mathrm{ess}}}}^*({{G}})} \,{\le }\, \alpha _d^{{{\mathrm{ess}}}}({{V}}) \,{\le }\, \frac{\alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}({{V}})}{\ell ^{{{\mathrm{ess}}}}_*({{G}})}. \end{aligned}$$

Here

$$\begin{aligned} \ell _*^{{{\mathrm{ess}}}}({{G}}) := \sup _{{{\widetilde{{E}}} }} \inf _{e\in {{E}}{\setminus }{{\widetilde{{E}}} }} |e|, \end{aligned}$$
(4.9)

and the supremum is taken over all finite subsets \({{\widetilde{{E}}} }\) of \({{E}}\). Moreover, taking into account Lemma 4.2, we get the following connection between our isoperimetric constants and the combinatorial ones:

$$\begin{aligned} \frac{2\,\alpha _{\mathrm{{comb}}}({{V}})}{\ell ^*({{G}})(1+ \alpha _{\mathrm{{comb}}}({{V}}))} \le \alpha ({{G}}) \le \frac{2\,\alpha _{\mathrm{{comb}}}({{V}})}{\ell _*({{G}})} \end{aligned}$$
(4.10)

and

$$\begin{aligned} \frac{2\,\alpha ^{{{\mathrm{ess}}}}_{\mathrm{{comb}}}({{V}})}{\ell _{{{\mathrm{ess}}}}^*({{G}})(1+ \alpha ^{{{\mathrm{ess}}}}_{\mathrm{{comb}}}({{V}}))} \le \alpha ^{{{\mathrm{ess}}}}({{G}}) \le \frac{2\,\alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}({{V}})}{\ell ^{{{\mathrm{ess}}}}_*({{G}})}. \end{aligned}$$
(4.11)

Since \(\alpha _{\mathrm{{comb}}}({{V}})\in [0,1)\), we end up with the following result.

Corollary 4.5

Let \({{G}}\) be a metric graph such that \(\ell ^*({{G}})<\infty \). Then:

  1. (i)

    \(\lambda _0({\mathbf {H}})>0\) if \(\alpha _{\mathrm{{comb}}}({{V}})>0\).

  2. (ii)

    \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})>0\) whenever \(\alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}({{V}})>0\).

  3. (iii)

    The spectrum of \({\mathbf {H}}\) is purely discrete if \(\ell ^*_{{{\mathrm{ess}}}}({{G}}) = 0\) and \(\alpha _{\mathrm{{comb}}}^{{{\mathrm{ess}}}}({{V}})>0\).

5 Upper bounds via the isoperimetric constant

It is possible to use the isoperimetric constants to estimate \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) from above; however, for this we need to impose additional restrictions on the metric graph.

Lemma 5.1

Suppose that \(\ell _*({{G}})=\inf _{e\in {{E}}}|e|>0\). Then

$$\begin{aligned} \lambda _0( {\mathbf {H}})&\le \frac{\pi ^2}{2\, \ell _*({{G}})}\alpha ({{G}}),&\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \le \frac{\pi ^2}{2\, \ell _{*}^{ {{\mathrm{ess}}}} ({{G}})}\alpha _{{{\mathrm{ess}}}}({{G}}). \end{aligned}$$
(5.1)

Proof

To estimate \(\lambda _0({\mathbf {H}})\), choose any \(\phi \in H^1([0,1])\) with \(\phi (0) =0\), \(\phi (1)=1\) and \(\Vert \phi \Vert _{L^2(0,1)}=1\) and set

$$\begin{aligned} {{\widetilde{\phi }} }(x):= {\mathbb {1}}_{[0,1/2]}(x)\phi (2x) + {\mathbb {1}}_{(1/2,1]}(x)\phi (2-2x),\qquad x\in [0,1]. \end{aligned}$$

Fix a subgraph \({{G}}_0 \in {\mathcal {K}}_{{G}}\) (the choice \({{G}}_0 = \varnothing \) is also allowed) and a finite, connected subgraph \({{\widetilde{{G}}} }= ({{\widetilde{{V}}} }, {{\widetilde{{E}}} })\) of \({{{G}}{\setminus } {{G}}_0}\). Then define \(g \in {{\widetilde{H}} }^1_c({{G}}{\setminus } {{G}}_0)\) by setting

$$\begin{aligned} g(x_e) := {\left\{ \begin{array}{ll} 0, &{}\qquad e \in {{E}}_{{{G}}{\setminus } {{G}}_0}, \; e \notin {{\widetilde{{E}}} }\\ 1, &{}\qquad e \in {{\widetilde{{E}}} }_0\\ \phi (\frac{|x_e-u|}{|e|}), &{}\qquad e=e_{u,\tilde{u}}\in {{\widetilde{{E}}} }_1, u \in \partial {{\widetilde{{G}}} }\\ {{\widetilde{\phi }} }(\frac{|x_e- e_o|}{|e|}), &{}\qquad e \in {{\widetilde{{E}}} }_2 \end{array}\right. }\ , \end{aligned}$$

where \({{\widetilde{{E}}} }_0\), \({{\widetilde{{E}}} }_1\), \({{\widetilde{{E}}} }_2\) are defined as in the proof of Lemma 4.2 and \(|x_e - y|\) denotes the distance between \(x_e\in e\) and some \(y\in e\). If \({{G}}_0 \ne \varnothing \) and \(v \in {{G}}{\setminus } {{G}}_0\) is a vertex with at least one incident edge in \({{G}}_0\), then either v is not in \({{\widetilde{{V}}} }\) or v is a boundary vertex of \({{\widetilde{{G}}} }\). In both cases, g vanishes at v. Therefore, \(g \in {{\widetilde{H}} }^1({{G}}{\setminus } {{G}}_0)\). Next we get

$$\begin{aligned} \Vert g\Vert _{L^2({{G}}{\setminus } {{G}}_0)}^2&= \sum _{e \in {{\widetilde{{E}}} }_0} |e| + \sum _{e \in {{\widetilde{{E}}} }_1} |e| \Vert \phi \Vert ^2_{L^2(0,1)} + \sum _{e \in {{\widetilde{{E}}} }_2} 2 \frac{|e|}{2} \Vert \phi \Vert ^2_{L^2(0,1)} = \mathrm{{mes}}({{\widetilde{{G}}} }), \end{aligned}$$

and, in view of (4.7),

$$\begin{aligned} \Vert g' \Vert _{L^2({{G}}{\setminus } {{G}}_0)}^2&= \sum _{e \in {{\widetilde{{E}}} }_1} \frac{1}{|e|} \Vert \phi '\Vert ^2_{L^2(0,1)} + \sum _{e \in {{\widetilde{{E}}} }_2} \frac{4}{|e|} \Vert \phi '\Vert ^2_{L^2(0,1)} \\&\le \frac{ \Vert \phi ' \Vert _{L^2(0,1)}^2}{\ell _*({{G}}{\setminus } {{G}}_0)} (\#( {{\widetilde{{E}}} }_1 ) + 4 \#( {{\widetilde{{E}}} }_2) ) \le \frac{2 \Vert \phi ' \Vert _{L^2(0,1)}^2 }{\ell _*( {{G}}{\setminus } {{G}}_0)} \deg (\partial _{{G}}{{\widetilde{{G}}} }). \end{aligned}$$

Choosing \(\phi (x) = \sqrt{2}\sin (\frac{\pi }{2}x)\), we obtain the estimate

$$\begin{aligned} \frac{ \Vert g' \Vert _{L^2({{G}}{\setminus } {{G}}_0)}^2 }{\Vert g\Vert _{L^2({{G}}{\setminus } {{G}}_0)}^2 } \; \le \; \frac{\pi ^2}{2 \; \ell _*( {{G}}{\setminus } {{G}}_0 ) } \frac{ \deg (\partial _{{G}}{{\widetilde{{G}}} }) }{ \mathrm{{mes}}({{\widetilde{{G}}} }) }. \end{aligned}$$

Choosing \({{G}}_0 = \varnothing \), (2.20) and (3.3) imply the first inequality in (5.1). Now assume \({{G}}_0 \ne \varnothing \). Then

$$\begin{aligned} \inf _{\begin{array}{c} f\in {{\widetilde{H}} }^1({{G}}{\setminus } {{G}}_0 )\\ f\ne 0 \end{array}}\frac{\Vert f'\Vert ^2_{L^2({{G}}{\setminus } {{G}}_0 )}}{\Vert f\Vert ^2_{L^2({{G}}{\setminus } {{G}}_0)}} \; \le \; \frac{\pi ^2}{2 \; \ell _*( {{G}}{\setminus } {{G}}_0) } \alpha ({{G}}{\setminus } {{G}}_0). \end{aligned}$$

Finally, using (2.23) and (3.6) we end up with

$$\begin{aligned} \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \le \; \lim _{{{G}}_0 \in {\mathcal {K}}_{{G}}} \frac{\pi ^2}{2 \; \ell _*( {{G}}{\setminus } {{G}}_0 )} \alpha ({{G}}{\setminus } {{G}}_0) = \frac{\pi ^2}{2 \; \ell _{*}^{{{\mathrm{ess}}}} ( {{G}})}\alpha _{{{\mathrm{ess}}}}({{G}}). \end{aligned}$$

\(\square \)

Combining Lemma 5.1 with the Cheeger-type bounds (3.8) and the estimates (4.10)–(4.11) and taking into account Lemma 3.9, we immediately get the following result.

Corollary 5.2

If \(\ell _*({{G}})>0\) and \(\ell ^*({{G}}) < \infty \), then the following are equivalent:

  1. (i)

    \(\lambda _0({\mathbf {H}})>0\),

  2. (ii)

    \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})>0\),

  3. (iii)

    \(\alpha _{\mathrm{{comb}}}({{V}})>0\),

  4. (iv)

    \(\alpha ^{{{\mathrm{ess}}}}_{\mathrm{{comb}}}({{V}})>0\).

Remark 5.3

A few remarks are in order:

  1. (i)

    If \(\ell _*({{G}})=0\), then the estimate in (5.1) becomes trivial.

  2. (ii)

    Notice that (5.1) is better than (2.25) only if the isoperimetric constant satisfies

    $$\begin{aligned} \alpha ({{G}}) < \frac{2\,\ell _*({{G}})}{\ell ^*({{G}})^2}. \end{aligned}$$
  3. (iii)

    In [11], Buser noticed that the isoperimetric constant can be used for obtaining upper estimates on the spectral gap for Laplacians on compact Riemannian manifolds. Hence estimates of the type (5.1) are often called Buser-type estimates. Let us mention that for combinatorial Laplacians a Buser-type estimate was first proved in [2] (see also [16, 18]). For finite quantum graphs, a Buser-type bound can be found in [40, Proposition 0.3], which is, however, different from our estimate (5.1).

6 Bounds by curvature

Despite the combinatorial nature of the isoperimetric constants (3.3) and (3.4), computing them is difficult: for instance, computation of the combinatorial isoperimetric constant (4.8) is known to be an NP-hard problem (see [34, 36, 48] for further details). Our next aim is to introduce a quantity which provides estimates for \(\alpha ({{G}})\) and \(\alpha _{{{\mathrm{ess}}}}({{G}})\) and also turns out to be very useful in many situations (see Sect. 8).

Suppose now that our graph is oriented, that is, every edge is assigned a direction. For every \(v\in {{V}}\), let \({{E}}_{v}^+\) and \({{E}}_{v}^-\) be the sets of outgoing and incoming edges, respectively. Next define the function \(\mathrm{{K}}:{{V}}\rightarrow {\mathbb {R}}\cup \{-\infty \}\) by

$$\begin{aligned} \mathrm{{K}}:v\mapsto \frac{\#({{E}}_v^+) - \#({{E}}_v^-)}{\#({{E}}_v^+)}\inf _{e\in {{E}}_v^+}\frac{1}{|e|}. \end{aligned}$$
(6.1)

\(\mathrm{{K}}\) can take both positive and negative values, and \(\mathrm{{K}}(v)= - \infty \) whenever \({{E}}_v^+=\varnothing \).
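For illustration, the following Python sketch (an aside for the reader, with purely illustrative input data) evaluates (6.1) at a single vertex from the list of outgoing edge lengths and the number of incoming edges.

```python
import math

def curvature(out_lengths, in_count):
    """K(v) as in (6.1): out_lengths lists |e| over E_v^+, in_count is #(E_v^-)."""
    if not out_lengths:              # E_v^+ is empty, so K(v) = -infinity
        return -math.inf
    combinatorial = (len(out_lengths) - in_count) / len(out_lengths)
    return combinatorial * min(1.0 / l for l in out_lengths)

# a vertex with outgoing edges of lengths 1/2, 1, 2 and one incoming edge:
print(curvature([0.5, 1.0, 2.0], 1))   # (3 - 1)/3 * 1/2 = 1/3
```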

Lemma 6.1

Assume \({{G}}\) is an oriented graph such that the function \(\mathrm{{K}}\) is positive on \({{V}}\). Then the isoperimetric constant (3.3) satisfies

$$\begin{aligned} \alpha ({{G}}) \ge \mathrm{{K}}({{G}}):=\inf _{v \in {{V}}} \mathrm{{K}}(v) \ge 0 . \end{aligned}$$
(6.2)

Proof

Let \({{\widetilde{{G}}} }\in {\mathcal {K}}_{{G}}\) be a finite and connected subgraph. For every \(v\in {{\widetilde{{V}}} }\), denote by \({{E}}_{v}^+({{\widetilde{{G}}} })\) and \({{E}}_{v}^-({{\widetilde{{G}}} })\) the sets of outgoing and incoming edges in \({{\widetilde{{G}}} }\). Since \(\mathrm{{K}}(v)\) is positive, we get

$$\begin{aligned} \sup _{e\in {{E}}^+_v} |e| \le \frac{1}{\mathrm{{K}}(v)} \Big ( 1 - \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\Big ), \end{aligned}$$

for all \(v\in {{V}}\). Therefore,

$$\begin{aligned} \mathrm{{mes}}({{\widetilde{{G}}} }) = \sum _{e \in {{\widetilde{{E}}} }} |e| =&\sum _{v \in {{\widetilde{{V}}} }}\, \sum _{e \in {{E}}^+_v({{\widetilde{{G}}} })}|e| \le \frac{1}{\mathrm{{K}}({{G}})} \sum _{v \in {{\widetilde{{V}}} }}\, \sum _{e \in {{E}}^+_v({{\widetilde{{G}}} })} \Big (1- \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\Big ) \\&= \frac{1}{\mathrm{{K}}({{G}})}\sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^+({{\widetilde{{G}}} }))\Big (1 - \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\Big ). \end{aligned}$$

First observe that

$$\begin{aligned} \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^+({{\widetilde{{G}}} })) = \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^-({{\widetilde{{G}}} })) = \#({{\widetilde{{E}}} }). \end{aligned}$$

Moreover, for any non-boundary vertex \(v \in {{\widetilde{{V}}} }{\setminus } \partial _{{G}}{{\widetilde{{G}}} }\), the whole star \({{E}}_v\) is contained in \({{\widetilde{{G}}} }\) and hence \({{E}}_{v}^\pm ({{\widetilde{{G}}} }) = {{E}}_v^\pm \). Therefore, we get

$$\begin{aligned} \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^+({{\widetilde{{G}}} })) \Big (1 - \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\Big )&= \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^+({{\widetilde{{G}}} })) - \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^+({{\widetilde{{G}}} })) \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\\&= \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^-({{\widetilde{{G}}} })) - \sum _{v \in {{\widetilde{{V}}} }} \#({{E}}_{v}^+({{\widetilde{{G}}} })) \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\\&= \sum _{v \in \partial _{{{G}}}{{\widetilde{{G}}} }}\Big (\#({{E}}_{v}^-({{\widetilde{{G}}} })) - \#({{E}}_{v}^+({{\widetilde{{G}}} })) \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)}\Big ) \\&\le \sum _{v \in \partial _{{{G}}} {{\widetilde{{G}}} } }\deg _{{{\widetilde{{G}}} }} (v) = \deg ( \partial _{{G}}{{\widetilde{{G}}} }). \end{aligned}$$

Combining this with the previous estimates, we end up with the following bound

$$\begin{aligned} \mathrm{{mes}}({{\widetilde{{G}}} }) \le \frac{1}{\mathrm{{K}}( {{G}})} \deg ( \partial _{{G}}{{\widetilde{{G}}} }), \end{aligned}$$

which proves the claim. \(\square \)

Remark 6.2

The function \(\mathrm{{K}}\) is sometimes interpreted as curvature. Several notions of curvature have been introduced for discrete and combinatorial Laplacians. Perhaps the closest one to (6.1) is the one introduced in [38]. Namely, since the natural path metric \(\varrho _0\) is intrinsic, one defines the function \(\mathrm{{K}}_d:{{V}}\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \mathrm{{K}}_d:v\mapsto \frac{\#({{E}}^+_{v}) - \#({{E}}^-_{v})}{m(v)}. \end{aligned}$$
(6.3)

Moreover, \(m(v) = \deg (v)\) for all \(v\in {{V}}\) if the corresponding metric graph is equilateral (i.e., \(|e|\equiv 1\)), and hence (6.3) coincides with the definition suggested for combinatorial Laplacians in [20]. Notice that for equilateral graphs (6.1) reads

$$\begin{aligned} \mathrm{{K}}(v) = \mathrm{{K}}_\mathrm{comb}(v) := 1 - \frac{\#({{E}}_v^-)}{\#({{E}}_v^+)},\quad v\in {{V}}, \end{aligned}$$
(6.4)

and hence in this case

$$\begin{aligned} \frac{2}{\mathrm{{K}}(v)} = \frac{2}{\mathrm{{K}}_\mathrm{comb}(v)} = 1 + \frac{1}{\mathrm{{K}}_d(v)},\quad v\in {{V}}. \end{aligned}$$
(6.5)

It seems there is no nice connection between \(\mathrm{{K}}\) and \(\mathrm{{K}}_d\) in the general case.

Remark 6.3

Let us also mention that Lemma 6.1 can be seen as the analog of [5, Theorem 6.2], where the following bound for the discrete isoperimetric constant was established:

$$\begin{aligned} \alpha _d({{V}}) \ge \mathrm{{K}}_d({{V}}) := \inf _{v\in {{V}}} \mathrm{{K}}_d(v), \end{aligned}$$
(6.6)

if \(\mathrm{{K}}_d\) is nonnegative on \({{V}}\). Combining (6.6) with the second inequality in (4.5), we end up with the following bound

$$\begin{aligned} \frac{2}{\alpha ({{G}})} \le \frac{1}{\mathrm{{K}}_d({{V}})} + \ell ^*({{G}}). \end{aligned}$$
(6.7)

In what follows we shall call the function \(\mathrm{{K}}_\mathrm{comb}:{{V}}\rightarrow {\mathbb {Q}}\cup \{- \infty \}\) defined by (6.4) the combinatorial curvature (in [20, p. 32], \(\mathrm{{K}}_\mathrm{d}\) is called a curvature of the combinatorial distance spheres). Note that the combinatorial curvature can take both positive and negative values, and \(\mathrm{{K}}_\mathrm{comb}(v)=- \infty \) whenever \({{E}}_v^+=\varnothing \). The next simple estimate turns out to be very useful in applications.

Lemma 6.4

Assume \(\mathrm{{K}}_\mathrm{comb}\) is positive on \({{V}}\) and set

$$\begin{aligned} \mathrm{{K}}_\mathrm{comb}({{V}}):= \inf _{v\in {{V}}}\mathrm{{K}}_\mathrm{comb}(v). \end{aligned}$$

Then the isoperimetric constant (3.3) satisfies

$$\begin{aligned} \alpha ({{G}}) \ge \frac{\mathrm{{K}}_\mathrm{comb}({{V}})}{\ell ^*({{G}})}. \end{aligned}$$
(6.8)

Proof

Noting that \(\mathrm{{K}}_\mathrm{comb}\) is positive and comparing (6.4) with (6.1), we get

$$\begin{aligned} \frac{\mathrm{{K}}_\mathrm{comb}(v)}{\ell ^*({{G}})} \le \mathrm{{K}}(v) \end{aligned}$$
(6.9)

for all \(v\in {{V}}\). Hence the claim follows from Lemma 6.1. \(\square \)

With a little extra effort and using an argument similar to that in the proof of (4.5) one can show the following bounds.

Lemma 6.5

Assume \({{G}}\) is an oriented graph such that the function \(\mathrm{{K}}\) (and hence \(\mathrm{{K}}_{\mathrm{{comb}}}\)) is positive on \({{V}}\) and set

$$\begin{aligned} \mathrm{{K}}^{{{\mathrm{ess}}}}({{G}}) \,{:=}\,\liminf _{v \in {{V}}} \mathrm{{K}}(v), \quad \mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({{V}}) \,{:=}\,\liminf _{v\in {{V}}}\mathrm{{K}}_\mathrm{comb}(v). \end{aligned}$$
(6.10)

Then the isoperimetric constant at infinity (3.4) satisfies

$$\begin{aligned} \alpha _{{{\mathrm{ess}}}}({{G}}) \ge \mathrm{{K}}^{{{\mathrm{ess}}}}({{G}}), \end{aligned}$$
(6.11)

and

$$\begin{aligned} \frac{\mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({{V}})}{\ell ^*_{{{\mathrm{ess}}}}({{G}})} \le \alpha _{{{\mathrm{ess}}}}({{G}}) \le \frac{2}{\ell ^*_{{{\mathrm{ess}}}}({{G}})}. \end{aligned}$$
(6.12)

Combining Lemma 6.5 with the Cheeger-type estimate, we immediately get the following result.

Corollary 6.6

If \({{G}}\) is an oriented graph such that the function \(\mathrm{{K}}_\mathrm{comb}\) is nonnegative on \({{V}}\), then

$$\begin{aligned} \lambda _0({\mathbf {H}}) \,{\ge }\, \frac{\mathrm{{K}}_\mathrm{comb}({{V}})^2}{4\,\ell ^*({{G}})^2}, \quad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\,{\ge }\, \frac{\mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({{V}})^2}{4\,\ell ^*_{{{\mathrm{ess}}}}({{G}})^2}. \end{aligned}$$
(6.13)

In particular, if \(\mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({{V}})>0\), then the spectrum of \({\mathbf {H}}\) is purely discrete precisely when \(\ell ^*_{{{\mathrm{ess}}}}({{G}})=0\).

Remark 6.7

Let us mention that in the case when \(\mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({{V}})=0\) the condition \(\ell ^*_{{{\mathrm{ess}}}}({{G}})=0\) is no longer sufficient for the discreteness of the spectrum. For further details we refer to Sect. 8.2 and, more specifically, to the example of polynomially growing antitrees (see Example 8.7).

7 Growth volume estimates

Here we plan to exploit the results from [62] to get upper bounds on the spectra of quantum graphs in terms of exponential volume growth rates, the so-called Brooks-type estimates (cf. [6, 33, 62] for further details and references). Following [62], we introduce some notation. For every \(x\in {{G}}\) and \(r>0\), let

$$\begin{aligned} B_r(x): = \{y\in {{G}}|\ \varrho _0(x,y)<r\}. \end{aligned}$$
(7.1)

Here \(\varrho _0\) is the natural path metric on \({{G}}\). Let also

$$\begin{aligned} \mathrm{{vol}}_x(r) := \mathrm{{mes}}(B_r(x)), \end{aligned}$$
(7.2)

and

$$\begin{aligned} \mathrm{{vol}}_*(r) := \inf _{x\in {{G}}} \frac{\mathrm{{mes}}(B_r(x))}{\mathrm{{mes}}(B_1(x))}. \end{aligned}$$
(7.3)

Next we define the following numbers

$$\begin{aligned} \mu _x({{G}}) := \liminf _{r\rightarrow \infty } \frac{\log (\mathrm{{vol}}_x(r))}{r}, \quad \mu _*({{G}}) := \liminf _{r\rightarrow \infty } \frac{\log (\mathrm{{vol}}_*(r))}{r}. \end{aligned}$$
(7.4)

Notice that \(\mu _x({{G}})\) does not depend on \(x\in {{G}}\) if \({{G}}= \cup _{r>0}B_r(x)\) for some (and hence for all) \(x\in {{G}}\). In this case we shall write \(\mu ({{G}})\) instead of \(\mu _x({{G}})\).

Theorem 7.1

Suppose \(({{V}},\varrho _0)\) is complete as a metric space. Then

$$\begin{aligned} \lambda _0({\mathbf {H}}) \le \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \le \frac{1}{4}\mu _*({{G}})^2 \le \frac{1}{4}\mu ({{G}})^2. \end{aligned}$$
(7.5)

Proof

The first and the last inequalities in (7.5) are obvious and hence it remains to show that

$$\begin{aligned} \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \le \frac{1}{4}\mu _*({{G}})^2. \end{aligned}$$

Notice that by Corollary 2.3, the pre-minimal operator \({\mathbf {H}}_0\) is essentially self-adjoint and hence \({\mathbf {H}}\) is its closure. Let us consider the corresponding quadratic form \(\mathfrak {t}_{{G}}\) defined as the closure in \(L^2({{G}})\) of the form \(\mathfrak {t}_{{G}}^0\) (see (2.15) and (2.16)). It is not difficult to check that the form \(\mathfrak {t}_{{G}}\) is a strongly local regular Dirichlet form (see [29] for definitions). On the other hand, using the Hopf–Rinow type theorem for graphs (see [35]), with a little work one can show that every ball \(B_r(x)\) is relatively compact if \(({{V}},\varrho _0)\) is complete. Therefore, by [62, Theorem 5], [51, Theorem 1] and [33, Theorem 1.1], we get

$$\begin{aligned} \lambda _0({\mathbf {H}})&\le \frac{1}{4}\mu _*({{G}})^2,&\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})&\le \frac{1}{4}\mu ({{G}})^2. \end{aligned}$$

Noting that \(\mathrm{{mes}}(B_1(x))\ge 1\) for all \(x\in {{G}}\) and taking into account [33, Remark (e) on p.885], we arrive at the desired estimate. \(\square \)

The next result is straightforward from Theorem 7.1.

Corollary 7.2

Let \(({{V}},\varrho _0)\) be complete as a metric space. Then:

  1. (i)

    \(\lambda _0({\mathbf {H}})=\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})=0\) if \(\mu ({{G}})=0\).

  2. (ii)

    The spectrum of \({\mathbf {H}}\) is not discrete if \(\mu _*({{G}})<\infty \).

Remark 7.3

Clearly, computing or estimating \(\mu _*({{G}})\) is a much more involved problem compared to that of \(\mu ({{G}})\). However, it might happen that \(\mu _*({{G}})<\mu ({{G}})\), in which case \(\mu _*({{G}})\) provides a better bound (see Example 8.4).

Remark 7.4

Let us mention that these results have several further consequences for the heat semigroup \(\mathrm{{e}}^{-t{\mathbf {H}}}\) generated by the operator \({\mathbf {H}}\). For example, \(\mu _*({{G}})=0\) implies the exponential instability of the corresponding heat semigroup on \(L^p({{G}})\) for all \(p\in [1,\infty ]\) (see [62, Corollary 2]).

We finish this section by comparing the estimates (7.5) with the ones obtained in [25] in terms of the volume growth of the corresponding discrete graph. Following [33] (see also [25, §4.3]), define the constant

$$\begin{aligned} \mu _d({{G}}):=\liminf _{r\rightarrow \infty } \frac{\log m(B_r(v))}{r} \end{aligned}$$
(7.6)

for a fixed \(v\in {{V}}\). Here

$$\begin{aligned} m(B_r(v)) = \sum _{u\in B_r(v)} m(u),\qquad v\in {{V}}. \end{aligned}$$

Notice that \(\mu _d({{G}})\) does not depend on the choice of \(v\in {{V}}\) if \({{G}}=\cup _{r>0} B_r(v)\).

Lemma 7.5

If \(\ell ^*({{G}})<\infty \) and \(({{V}},\varrho _0)\) is complete as a metric space, then

$$\begin{aligned} \mu ({{G}}) = \mu _d({{G}}). \end{aligned}$$
(7.7)

Proof

First observe that

$$\begin{aligned} m(B_r(v)) = 2\sum _{\{u,\tilde{u}\} \subset B_r(v)} |e_{u,\tilde{u}}| + \sum _{\begin{array}{c} \{u,\tilde{u}\}\not \subset B_r(v)\\ \{u,\tilde{u}\}\cap B_r(v) \ne \varnothing \end{array}}|e_{u,\tilde{u}}| \ge \mathrm{{mes}}(B_r(v)) = \mathrm{{vol}}_v(r) \end{aligned}$$

for all \(v\in {{V}}\) and \(r>0\), which immediately implies \(\mu ({{G}}) \le \mu _d({{G}})\). Similarly, we also get

$$\begin{aligned} m(B_r(v)) \le 2\mathrm{{mes}}(B_{r+\ell ^*}(v)) \end{aligned}$$
(7.8)

for all \(v\in {{V}}\) and \(r>0\) and hence

$$\begin{aligned} \mu _d({{G}}) \le \liminf _{r\rightarrow \infty } \frac{\log (2\mathrm{{vol}}_v(r+\ell ^*))}{r} = \mu ({{G}}), \end{aligned}$$

which finishes the proof of (7.7). \(\square \)

Remark 7.6

A few remarks are in order.

  1. (i)

On the one hand, it does not look too surprising that the exponential growth rates for the two Dirichlet forms \(\mathfrak {t}_{{G}}\) and \(\mathfrak {t}_{{\mathbf {h}}}\) coincide. In particular, this reflects the equivalence (2.37) in the case of sub-exponential growth rates. However, comparing (7.7) with the fact that there is no equality between \(\lambda _0({\mathbf {H}})\) and \(\lambda _0({\mathbf {h}})\) (see Sect. 2.5), one can conclude that in the case of exponential volume growth, (7.5) might not lead to satisfactory estimates (the examples of trees and antitrees in the next section confirm this observation).

  2. (ii)

    Combining (7.7) with Corollary 7.2 we obtain Theorem 4.19 from [25].

8 Examples

In this section we are going to apply our results to certain classes of graphs (trees, antitrees, and Cayley graphs of finitely generated groups). Let us also recall that we always assume Hypotheses 2.1–2.3 to be satisfied.

8.1 Trees

Let us first recall some basic notions. A connected graph without non-trivial cycles (i.e., cycles of length \(\ge 2\)) is called a tree. We shall denote trees (both combinatorial and metric) by \({\mathcal {T}}\). Notice that for any two vertices u, v on a tree \({\mathcal {T}}= ({{V}}, {{E}})\) there is exactly one path \({\mathcal {P}}\) connecting u and v. A tree \({\mathcal {T}}= ({{V}}, {{E}})\) with a distinguished vertex \(o\in {{V}}\) is called a rooted tree and o is called the root of \({\mathcal {T}}\). In a rooted tree the vertices can be ordered according to (combinatorial) spheres. Namely, let \(d(\cdot ) := d(o,\cdot )\) be the combinatorial distance to the root o and \(S_n\) be the n-th (combinatorial) sphere, i.e., the set of vertices \(v\in {{V}}\) with \(d(v)=n\). A vertex in the \((n+1)\)-th sphere which is connected to a vertex v in the n-th sphere is called a forward neighbor of v. In what follows, we define an orientation on a rooted tree according to combinatorial spheres, that is, for every edge e its initial vertex belongs to the smaller combinatorial sphere.

We begin with the following simple estimate for rooted trees. According to the choice of orientation, we get \(\mathrm{{K}}_\mathrm{comb}(o) = 1\) and

$$\begin{aligned} \mathrm{{K}}_\mathrm{comb}(v) = \frac{\#({{E}}_{v}^+) - \#({{E}}_{v}^-)}{\#({{E}}_{v}^+)} = \frac{\deg (v)-2}{\deg (v) -1} \end{aligned}$$

for all \(v\in {{V}}{\setminus }\{o\}\). Therefore, \(\mathrm{{K}}_\mathrm{comb}\) is nonnegative on \({{V}}\) if there are no loose ends, that is, \(\deg (v)\ne 1\) for all \(v\in {{V}}{\setminus }\{o\}\). Let

$$\begin{aligned} \deg _*({{V}}) :=\inf _{v\in {{V}}}\deg (v), \quad \deg _*^{{{\mathrm{ess}}}}({{V}}) :=\liminf _{v\in {{V}}}\deg (v). \end{aligned}$$

Hence we easily get

$$\begin{aligned} \mathrm{{K}}_\mathrm{comb}({\mathcal {T}}\,\,) = \frac{\deg _*({{V}})-2}{\deg _*({{V}}) -1}, \quad \mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({\mathcal {T}}\,\,) = \frac{\deg _*^{{{\mathrm{ess}}}}({{V}})-2}{\deg _*^{{{\mathrm{ess}}}}({{V}}) -1}, \end{aligned}$$

and therefore we end up with the following estimate.

Lemma 8.1

Assume \({\mathcal {T}}\) is a rooted tree without loose ends. Then

$$\begin{aligned} \lambda _0({\mathbf {H}}) \,{\ge }\, \frac{\mathrm{{K}}_\mathrm{comb}({\mathcal {T}}\,\,)^2}{4\,\ell ^*({{G}})^2}, \quad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) \,{\ge }\, \frac{\mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({\mathcal {T}}\,\,)^2}{4\,\ell ^*_{{{\mathrm{ess}}}}({{G}})^2}. \end{aligned}$$
(8.1)

In particular, \(\lambda _0({\mathbf {H}})>0\) if and only if \(\ell ^*({{G}})<\infty \) and the spectrum of \({\mathbf {H}}\) is purely discrete if and only if \(\ell ^*_{{{\mathrm{ess}}}}({{G}})=0\).

Proof

The proof immediately follows from Corollary 6.6, Remark 2.9(i) and the fact that the combinatorial curvature admits the following bound (take also into account Hypothesis 2.3)

$$\begin{aligned} \frac{1}{2} \le \mathrm{{K}}_\mathrm{comb}({\mathcal {T}}\,\,) <1. \end{aligned}$$

\(\square \)
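For a quick numerical illustration of (8.1) (an aside; the input values are hypothetical), the first bound can be evaluated directly from the minimal vertex degree and the maximal edge length:

```python
def tree_lower_bound(min_degree, max_edge_length):
    # lower bound for lambda_0(H) in (8.1) for a rooted tree without loose ends
    k_comb = (min_degree - 2) / (min_degree - 1)   # = K_comb(T)
    return (k_comb / (2 * max_edge_length)) ** 2

# e.g. all vertices of degree at least 3 and all edge lengths at most 1:
print(tree_lower_bound(3, 1.0))   # (1/2)**2 / 4 = 1/16
```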

Remark 8.2

A few remarks are in order.

  1. (i)

In the case of regular metric trees (these are rooted trees with an additional symmetry: all vertices in the same distance sphere have equal degrees and all edges of the same generation have the same length), the second claim in Lemma 8.1 was observed by Solomyak in [61]. In fact, under Hypothesis 2.3, conditions (5.1) and (5.5) of [61] hold true if and only if, respectively, \(\ell ^*({{G}})<\infty \) and \(\ell ^*_{{{\mathrm{ess}}}}({{G}})=0\). However, [61] considers the case of the Neumann Laplacian; nevertheless, it follows that the criteria for positivity and discreteness coincide for the Neumann and Dirichlet Laplacians.

  2. (ii)

Let us mention that the positivity (however, without quantitative estimates) of the combinatorial isoperimetric constant for the type of trees considered in Lemma 8.1 is known (see [65, Theorem 10.9]).

In the case of trees the estimates (8.1) can be improved; however, instead of providing these generalizations, we are going to consider only one particular case.

Example 8.3

(Bethe lattices) Fix \(\beta \in {\mathbb {Z}}_{\ge 3}\) and consider the combinatorial graph which is a rooted tree such that all vertices have degree \(\beta \). Trees of this type are called Bethe lattices (also known as Cayley trees or homogeneous trees) and will be denoted by \({\mathbb {T}}_\beta \). Suppose that the corresponding metric graph is equilateral, that is, \(|e|=1\) for all \(e\in {{E}}\). Slightly abusing notation, we shall denote the corresponding metric graph by \({\mathbb {T}}_\beta \) as well. Then one computes

$$\begin{aligned} \mathrm{{K}}_\mathrm{comb}({\mathbb {T}}_\beta ) = \mathrm{{K}}^{{{\mathrm{ess}}}}_\mathrm{comb}({\mathbb {T}}_\beta ) = \frac{\beta -2}{\beta -1} = :\mathrm{{K}}_{\beta }. \end{aligned}$$

Noting that \(\mathrm{{K}}_{\beta } \in [1/2,1)\) and applying Lemma 8.1, we arrive at the following estimate

$$\begin{aligned} \lambda _0^{{{\mathrm{ess}}}}({\mathbb {T}}_\beta )\ge \lambda _0({\mathbb {T}}_\beta )\ge \frac{1}{4}\mathrm{{K}}_\beta ^2. \end{aligned}$$
(8.2)

On the other hand, it is straightforward to check that (see, e.g., [20])

$$\begin{aligned} \alpha ( {\mathbb {T}}_\beta ) \,{=}\, \mathrm{{K}}_\mathrm{comb}({\mathbb {T}}_\beta ) \,{=}\, \frac{\beta -2}{\beta -1}, \quad \alpha _d( {\mathbb {T}}_\beta )&\,{=}\, \frac{\beta -2}{\beta }. \end{aligned}$$
(8.3)

In particular, this implies that the equality holds in the second inequality in (4.5). Moreover, the spectra of both operators \({\mathbf {H}}\) and \({\mathbf {h}}\) can be computed explicitly (see, e.g., [61, Example 6.3] or [20, Theorem 1.14] together with Theorem 2.11) and, in particular,

$$\begin{aligned} \lambda _0({\mathbf {H}}) = \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) = \arccos ^2\Big (\frac{2\sqrt{\beta -1}}{\beta }\Big ). \end{aligned}$$

Comparing the last equality with the estimate (8.2), one can notice a gap between these estimates.
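The size of this gap can be quantified numerically; the next snippet (a plain sanity check, not part of the argument) compares the lower bound (8.2) with the exact value of \(\lambda _0({\mathbf {H}})\) on \({\mathbb {T}}_\beta \) for several values of \(\beta \).

```python
import math

for beta in (3, 4, 10, 100):
    k_beta = (beta - 2) / (beta - 1)
    cheeger = 0.25 * k_beta ** 2                             # lower bound (8.2)
    exact = math.acos(2 * math.sqrt(beta - 1) / beta) ** 2   # exact bottom of the spectrum
    print(beta, round(cheeger, 4), round(exact, 4))
```

In particular, the gap between the exact value and the Cheeger-type bound widens as \(\beta \) increases.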

Let us mention that

$$\begin{aligned} \mu ({\mathbb {T}}_\beta ) = \mu _o({\mathbb {T}}_\beta ) = \mu _*({\mathbb {T}}_\beta ) = \log (\beta -1), \end{aligned}$$

and thus the volume growth estimates (7.5) do not provide a reasonable upper bound for large values of \(\beta \). \(\square \)

Finally, we would like to mention that the absence of loose ends in Lemma 8.1 is essential as the next example shows.

Example 8.4

(A “sparse” tree with loose ends) Consider the half-line \({\mathbb {R}}_{\ge 0}\) as an equilateral graph with vertices at the integers. Let us write \(v_n\) for the vertex placed at \(n \in {\mathbb {Z}}_{\ge 0}\). We then modify this graph by attaching edges to the vertices \(v_n\) with \(n\ge 1\). More precisely, to each vertex \(v_{j^2}\) with \(j\in {\mathbb {Z}}_{\ge 1}\) we attach \(2^{j^2}\) edges, and to every other vertex \(v_n\) with \(n \notin \{j^2\}_{j\ge 1}\) we attach exactly one edge (see Fig. 1).

Fig. 1 Tree with loose ends

Clearly, we end up with a tree graph \({\mathcal {T}}\). For simplicity, we shall assume that the corresponding metric graph is equilateral, that is, \(|e|=1\) for all \(e\in {\mathcal {T}}\). This tree is in a certain sense sparse and as a result it turns out that

$$\begin{aligned} \mu _*({\mathcal {T}}) = 0, \end{aligned}$$

and hence, by Theorem 7.1,

$$\begin{aligned} \lambda _0({\mathbf {H}}) = \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}) = 0. \end{aligned}$$

In fact, it is enough to show that \(\mathrm{{vol}}_*(r) = 1\) for all \(r>1\). Namely, take \(r>1\) and set \(j_r:= 1+\lfloor (r+1)/2 \rfloor \), where \(\lfloor \cdot \rfloor \) is the usual floor function. Since \(j_r^2 - (j_r-1)^2 >r\), we get

$$\begin{aligned} 1 \le \mathrm{{vol}}_*(r) \le \inf _{n\ge j_r} \frac{ \mathrm{{mes}}(B_r(v_{n^2} )) }{\mathrm{{mes}}(B_1(v_{n^2})) } = \inf _{n\ge j_r} \frac{2^{n^2} + 2r + 2(r-1) }{ 2^{n^2} +2 } = 1. \end{aligned}$$

It is interesting to mention that in this case \(\mu ({\mathcal {T}}) = \log (2)>0\). Indeed,

$$\begin{aligned} 2r-1 + \sum _{k=1}^{\lfloor \sqrt{r} \rfloor -1} (2^{k^2}-1) \le \mathrm{{vol}}_o(r) = \mathrm{{mes}}( B_r(v_0) ) \le 2r - 1 + \sum _{k=1}^{\lfloor \sqrt{r} \rfloor } (2^{k^2}-1) \end{aligned}$$

and hence for all \(r>1\) we get

$$\begin{aligned} 2^{(\lfloor \sqrt{r} \rfloor -1)^2} < \mathrm{{vol}}_o(r) \le 2^{\lfloor \sqrt{r} \rfloor ^2+1}, \end{aligned}$$

which implies the desired equality. \(\square \)
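The computation above is easy to reproduce numerically. The following sketch (an illustrative aside) evaluates \(\mathrm{{mes}}(B_r(v_0))\) directly from the construction of the tree and prints \(\log (\mathrm{{vol}}_o(r))/r\), which slowly approaches \(\log (2)\approx 0.693\).

```python
import math

def attached(n):
    # number of pendant edges attached at the vertex v_n, n >= 1
    j = math.isqrt(n)
    return 2 ** n if j * j == n else 1

def vol_o(r):
    # mes(B_r(v_0)) for integer r: r units along the half-line plus the
    # unit pendant edges attached at v_1, ..., v_{r-1}
    return r + sum(attached(n) for n in range(1, r))

for r in (101, 401, 1601):
    print(r, math.log(vol_o(r)) / r)   # 0.686..., 0.691..., 0.692...
```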

8.2 Antitrees

Let \({{G}}_d = ({{V}}, {{E}})\) be a connected combinatorial graph. Fix a root vertex \(o \in {{V}}\) and then order the graph with respect to the combinatorial spheres \(S_n\), \(n \in {\mathbb {Z}}_{\ge 0}\) (notice that \(S_0=\{o\}\)). The connected graph \({{G}}_d\) is called an antitree if every vertex in \(S_n\) is connected to every vertex in \(S_{n+1}\) and there are no horizontal edges, i.e., there are no edges with both endpoints in the same sphere (see Fig. 2). Clearly, an antitree is uniquely determined by the sequence \(s_n := \#(S_n)\), \(n\in {\mathbb {Z}}_{\ge 1}\).

Let us denote antitrees by the letter \({\mathcal {A}}\) and also define the edge orientation according to the combinatorial ordering, that is, for every edge e its initial vertex is the one in the smaller combinatorial sphere. It turns out that the curvatures of antitrees can be computed explicitly. Namely, define the following quantities:

$$\begin{aligned} \ell _n := \sup _{e\in {{E}}_v^+:v\in S_n}|e|, \end{aligned}$$
(8.4)

and

$$\begin{aligned} \mathrm{{K}}_0&:=1, \quad \mathrm{{K}}_{n+1} := 1 - \frac{s_{n}}{s_{n+2}} \end{aligned}$$
(8.5)

for all \(n\in {\mathbb {Z}}_{\ge 0}\).

Lemma 8.5

If \({\mathcal {A}}\) is an antitree, then

$$\begin{aligned} \mathrm{{K}}_\mathrm{comb}({\mathcal {A}}) \,{=}\, \inf _{n\ge 0} {\mathrm{{K}}_n}, \quad \mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({\mathcal {A}}) \,{=}\, \liminf _{n\rightarrow \infty }{\mathrm{{K}}_n}, \end{aligned}$$
(8.6)

and

$$\begin{aligned} \mathrm{{K}}({\mathcal {A}}) \,{=}\, \inf _{n\ge 0} \frac{\mathrm{{K}}_n}{\ell _n}, \quad \mathrm{{K}}^{{{\mathrm{ess}}}}({\mathcal {A}}) \,{=}\, \liminf _{n\rightarrow \infty }\frac{\mathrm{{K}}_n}{\ell _n}. \end{aligned}$$
(8.7)

Proof

The proof follows by a direct inspection since \(\mathrm{{K}}_\mathrm{comb}(v) = \mathrm{{K}}_n\) for all \(v\in S_n\) and \(n\in {\mathbb {Z}}_{\ge 0}\). \(\square \)
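Since (8.5)–(8.7) only involve the sphere numbers and edge lengths, they are straightforward to evaluate; the following sketch (an illustrative aside) does so for finitely many spheres, so the printed infima are only approximations of the true constants, and uses the data of Example 8.6 below as sample input.

```python
def antitree_curvatures(s, ell):
    # K_n as in (8.5) and K_n / ell_n as in (8.7), given the sphere numbers
    # s[0], ..., s[N+1] and the edge lengths ell[n] between S_n and S_{n+1}
    K = [1.0] + [1 - s[n] / s[n + 2] for n in range(len(s) - 2)]
    return K, [K[n] / ell[n] for n in range(len(K))]

# exponentially growing antitree with beta = 2 and unit edge lengths:
N, beta = 10, 2
s = [beta ** n for n in range(N + 2)]
K, Kw = antitree_curvatures(s, [1.0] * (N + 1))
print(min(K), min(Kw))   # both equal 1 - beta**(-2) = 0.75, cf. Example 8.6
```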

Combining Lemma 8.5 with the estimates for the corresponding isoperimetric constants (e.g., Corollary 6.6), we immediately end up with the estimates for \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\). Let us demonstrate this by considering two examples.

Example 8.6

(Exponentially growing antitrees) Fix \(\beta \in {\mathbb {Z}}_{\ge 2}\) and let \({\mathcal {A}}_\beta \) be an antitree with sphere numbers \(s_n = \beta ^n\). Then \(\mathrm{{K}}_0 = 1\) and

$$\begin{aligned} \mathrm{{K}}_n = 1 - \beta ^{-2} \end{aligned}$$
(8.8)

for all \(n\in {\mathbb {Z}}_{\ge 1}\). Hence by Lemma 8.5

$$\begin{aligned} \frac{1 - \beta ^{-2}}{\ell ^*({\mathcal {A}}_\beta )}\le \mathrm{{K}}({\mathcal {A}}_\beta ) \le \frac{1}{\ell ^*({\mathcal {A}}_\beta )} \end{aligned}$$

and

$$\begin{aligned} \mathrm{{K}}^{{{\mathrm{ess}}}}({\mathcal {A}}_\beta ) = \frac{1 - \beta ^{-2}}{\ell _{{{\mathrm{ess}}}}^*({\mathcal {A}}_\beta )}. \end{aligned}$$

Applying Lemmas 6.1 and 6.5 together with Theorem 3.4 and Lemma 2.8, we get

$$\begin{aligned} \frac{(1 - \beta ^{-2})^2}{4\,\ell ^*({\mathcal {A}}_\beta )^2} \le \lambda _0({\mathbf {H}}_\beta ) \le \frac{\pi ^2}{\ell ^*({\mathcal {A}}_\beta )^2}, \end{aligned}$$
(8.9)

and

$$\begin{aligned} \frac{(1 - \beta ^{-2})^2}{4\,\ell ^*_{{{\mathrm{ess}}}}({\mathcal {A}}_\beta )^2} \le \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}_\beta ) \le \frac{\pi ^2}{\ell ^*_{{{\mathrm{ess}}}}({\mathcal {A}}_\beta )^2}. \end{aligned}$$
(8.10)

In particular, these bounds imply that the Kirchhoff Laplacian \({\mathbf {H}}_\beta \) is uniformly positive if and only if \(\ell ^*({\mathcal {A}}_\beta )<\infty \). Moreover, its spectrum is purely discrete exactly when \(\ell ^*_{{{\mathrm{ess}}}}({\mathcal {A}}_\beta ) = 0\) (cf. Corollary 6.6).

Fig. 2 Example of an antitree with \(s_n = n+1\)

Finally, let us compare these estimates with the volume growth estimates under the assumption that the antitree is equilateral. In this case,

$$\begin{aligned} \mathrm{{K}}({\mathcal {A}}_\beta ) = \mathrm{{K}}^{{{\mathrm{ess}}}}({\mathcal {A}}_\beta ) = 1-\beta ^{-2}. \end{aligned}$$

On the other hand,

$$\begin{aligned} \mathrm{{mes}}(B_n(o)) = \sum _{k=0}^{n-1} \beta ^{2k+1} = \beta \frac{\beta ^{2n} - 1}{\beta ^2-1}, \end{aligned}$$

and then (7.4) implies that \(\mu ({\mathcal {A}}_\beta ) = 2\log (\beta )\). With a little more work one can show that

$$\begin{aligned} \mu _*({\mathcal {A}}_\beta ) = \mu ({\mathcal {A}}_\beta ) = 2\log (\beta ). \end{aligned}$$

Indeed, it suffices to note that \(\mu _*({\mathcal {A}}_\beta ) \le \mu ({\mathcal {A}}_\beta )\). Moreover, for all \(x\in e_{u,v}\) where e connects \(S_n\) with \(S_{n+1}\), \(n\in {\mathbb {Z}}_{\ge 0}\) we have

$$\begin{aligned} \mathrm{{mes}}(B_1(x)) \le \mathrm{{mes}}(B_1(v)) = \beta ^{n}+\beta ^{n+2} = \beta ^n(\beta ^2+1) \end{aligned}$$

and for all \(r>2\)

$$\begin{aligned} \mathrm{{mes}}(B_r(x))&\ge \mathrm{{mes}}( B_{\lfloor r \rfloor } (u)) = \mathrm{{mes}}( B_{n+\lfloor r \rfloor } (o)) - \mathrm{{mes}}( B_{n-\lfloor r \rfloor }(o)) \\&\ge \mathrm{{mes}}( B_{n+\lfloor r \rfloor } (o)) - \mathrm{{mes}}( B_{n}(o)) = \sum _{k=n}^{n+\lfloor r \rfloor -1} \beta ^{2k+1} = \beta ^{2n+1}\frac{\beta ^{2\lfloor r \rfloor } - 1}{\beta ^2-1}. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \mathrm{{vol}}_*(r) = \inf _{x \in {{G}}} \frac{ \mathrm{{mes}}(B_r(x))}{\mathrm{{mes}}(B_1(x))} \ge \inf _{n{\ge 0}} \frac{\beta ^{2n+1}\frac{\beta ^{2\lfloor r \rfloor } - 1}{\beta ^2-1}}{\beta ^n(\beta ^2+1)} = \frac{\beta ^{2\lfloor r \rfloor +1} - \beta }{\beta ^4-1}, \end{aligned}$$

which shows that \(\mu _*({\mathcal {A}}_\beta ) \ge 2\log (\beta )\) and hence we are done.

Notice that the volume growth estimates (7.5) do not provide a reasonable upper bound for large values of \(\beta \). \(\square \)

Example 8.7

(Polynomially growing antitrees) Fix \(q \in {\mathbb {Z}}_{\ge 1}\) and let \({\mathcal {A}}^q\) be the antitree with sphere numbers \(s_n = (n+1)^q\), \(n\ge 0\) (the case \(q=1\) is depicted in Fig. 2). Then

$$\begin{aligned} \mathrm{{K}}_n = 1 - \frac{n^{q}}{(n+2)^{q}} = 1 - \Big (\frac{n}{n+2}\Big )^q = \frac{2q}{n} + {\mathcal {O}}(n^{-2}), \end{aligned}$$
(8.11)

as \(n\rightarrow \infty \). Hence, by Lemma 8.5,

$$\begin{aligned} \mathrm{{K}}_\mathrm{comb}({\mathcal {A}}^q) =\mathrm{{K}}_\mathrm{comb}^{{{\mathrm{ess}}}}({\mathcal {A}}^q) = 0 \end{aligned}$$

and

$$\begin{aligned} \mathrm{{K}}({\mathcal {A}}^q)&= \inf _{n\ge 0} \frac{1}{\ell _n}\left( 1 - \Big (\frac{n}{n+2}\Big )^q \right) ,&\mathrm{{K}}^{{{\mathrm{ess}}}}({\mathcal {A}}^q)&= \liminf _{n\rightarrow \infty } \frac{1}{\ell _n}\left( 1 - \Big (\frac{n}{n+2}\Big )^q \right) . \end{aligned}$$

Clearly, further analysis heavily depends on the behavior of the sequence \(\{\ell _n\}\). Let us consider one particular case. Fix an \(s\ge 0\) and assume now that

$$\begin{aligned} |e| = (n+1)^{-s} \end{aligned}$$

for each edge e connecting \(S_n\) and \(S_{n+1}\). Let us denote the corresponding Kirchhoff Laplacian by \({\mathbf {H}}_{q,s}\). It is not difficult to show by applying Theorem 2.2 that the corresponding pre-minimal operator is essentially self-adjoint whenever \(s \le q+1\); however, \(({{V}}_q,\varrho _0)\) is complete exactly when \(s\in [0,1]\).

Remark 8.8

In our forthcoming publication we shall show that the pre-minimal operator \({\mathbf {H}}_{0}\) is essentially self-adjoint exactly when the corresponding metric graph has infinite volume, that is, when \(s \le 2q+1\). Moreover, in the case \(s>2q+1\), the deficiency indices of \({\mathbf {H}}_0\) are equal to 1 and one can describe all self-adjoint extensions of \({\mathbf {H}}_0\).

Since \(\ell _n=(n+1)^{-s}\) for all \(n\in {\mathbb {Z}}_{\ge 0}\), we get

$$\begin{aligned} \ell ^*({\mathcal {A}}^q) \,{=}\, 1, \quad \ell ^*_{{{\mathrm{ess}}}}({\mathcal {A}}^q) \,{=}\, {\left\{ \begin{array}{ll} 1, &{} s=0\\ 0, &{} s>0\end{array}\right. }, \end{aligned}$$

and

$$\begin{aligned} \mathrm{{K}}^{{{\mathrm{ess}}}}({\mathcal {A}}^q) = \lim _{n\rightarrow \infty } (n+1)^s\left( 1 - \Big (\frac{n}{n+2}\Big )^q\right) = {\left\{ \begin{array}{ll} 0, &{} s\in [0,1), \\ 2q, &{} s=1, \\ +\infty , &{} s>1.\end{array}\right. } \end{aligned}$$
(8.12)

In the case \(s=1\), it is easy to show that the sequence \(\{\mathrm{{K}}_n/\ell _n\}\) is strictly increasing, and hence this is also true for all \(s>1\). Therefore,

$$\begin{aligned} \mathrm{{K}}({\mathcal {A}}^q) = \mathrm{{K}}(o) = 1,\quad s\ge 1. \end{aligned}$$

Moreover, the corresponding isoperimetric constant is given by \(\alpha ({\mathcal {A}}^q) =\mathrm{{K}}({\mathcal {A}}^q) = 1\) (to see this, just take the ball \(B_1(o)\) as the subgraph \({{\widetilde{{G}}} }\) in (3.3); then one gets \(\alpha ({\mathcal {A}}^q)\le 1\), which together with (6.2) implies the equality).

Next let us compute \(\mu ({\mathcal {A}}^q)\) assuming that \(s\in [0,1]\) (otherwise we cannot apply the results of Sect. 7). Set

$$\begin{aligned} r_n:= \sum _{k=0}^{n-1} \ell _k = \sum _{k=0}^{n-1} \frac{1}{(1+k)^s} =(1+o(1))\times {\left\{ \begin{array}{ll} \frac{n^{1-s}}{1-s}, &{} s\in [0,1),\\ \log (n), &{}s=1, \end{array}\right. } \end{aligned}$$

as \(n\rightarrow \infty \). Then

$$\begin{aligned} \mathrm{{vol}}_o(r_n) = \sum _{k=0}^{n-1}\ell _k s_{k}s_{k+1} = \sum _{k=0}^{n-1}(k+1)^{q-s} (k+2)^{q} = \frac{n^{2q-s+1}}{2q-s+1}(1+o(1)) \end{aligned}$$

as \(n\rightarrow \infty \). Therefore, it is not difficult to show that

$$\begin{aligned} \mu ({\mathcal {A}}^q) = \mu _o({\mathcal {A}}^q) = \lim _{n\rightarrow \infty } \frac{\log (\mathrm{{vol}}_o(r_n))}{r_n} = {\left\{ \begin{array}{ll} 0, &{} s\in [0,1),\\ 2q, &{} s=1.\end{array}\right. } \end{aligned}$$
(8.13)
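The limit (8.13) can also be observed numerically; the following sketch (an illustrative aside) evaluates \(r_n\) and \(\mathrm{{vol}}_o(r_n)\) from the two sums above for \(s=1\) and prints \(\log (\mathrm{{vol}}_o(r_n))/r_n\), which increases slowly towards \(2q\).

```python
import math

def mu_ratio(q, s, n):
    # r_n and vol_o(r_n) for the antitree A^q with |e| = (k+1)**(-s) between S_k and S_{k+1}
    r_n = sum((k + 1) ** (-s) for k in range(n))
    vol = sum((k + 1) ** (q - s) * (k + 2) ** q for k in range(n))
    return math.log(vol) / r_n

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, mu_ratio(q=1, s=1, n=n))   # roughly 1.65, 1.81, 1.87 -> 2q = 2
```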

Applying Theorem 7.1 together with Lemmas 6.1 and 6.5, we end up with the following estimates.

Lemma 8.9

Assume \(q\in {\mathbb {Z}}_{\ge 1}\) and \(s\in {\mathbb {R}}_{\ge 0}\). Then

$$\begin{aligned} \lambda _0({\mathbf {H}}_{q,s}) = \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}_{q,s}) = 0 \end{aligned}$$
(8.14)

if and only if \(s\in [0,1)\). If \(s\ge 1\), then the operator \({\mathbf {H}}_{q,s}\) is uniformly positive and

$$\begin{aligned} \frac{1}{4}\le \lambda _0({\mathbf {H}}_{q,s}) \le \pi ^2, \quad \lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}}_{q,s})&= {\left\{ \begin{array}{ll} q^2, &{} s=1,\\ +\infty , &{} s>1. \end{array}\right. } \end{aligned}$$
(8.15)

Remark 8.10

The exact value of \(\lambda _0({\mathbf {H}}_{q,s})\) for \(s\ge 1\) or at least its asymptotic behavior with respect to q remains an open problem.\(\square \)

8.3 Cayley graphs

Suppose \(\Gamma \) is a finitely generated (infinite) group with the set of generators S. The Cayley graph \({\mathcal {C}}(\Gamma ,S)\) of \(\Gamma \) with respect to S has vertex set \(\Gamma \), and two vertices u, v are connected by an edge exactly when \(u^{-1}v\in S\). This graph is connected, locally finite and regular (\(\deg (v) = \#S\) for all \(v\in \Gamma \)). We assume that the unit element o does not belong to the set S (this excludes loops). The lattice \({\mathbb {Z}}^d\) is the standard example of a Cayley graph. Notice also that the Bethe lattice \({\mathbb {T}}_\beta \) is a Cayley graph if either \(S=\{a_1,\dots ,a_\beta |\, a_i^2=o,\ i=1,\dots ,\beta \}\) or \(\beta =2N\) and \(\Gamma = \mathbb {F}_N\) is the free group on N generators.

It is known that the positivity of a combinatorial isoperimetric constant \(\alpha _{\mathrm{{comb}}}\) is closely connected with the amenability of the group \(\Gamma \) (this is a variant of Følner’s criterion, see, e.g., [65, Proposition 12.4]).

Theorem 8.11

If \({{G}}_d={\mathcal {C}}(\Gamma ,S)\) is the Cayley graph of a finitely generated group \(\Gamma \), then \(\alpha _{\mathrm{{comb}}}(\Gamma ) = 0\) if and only if \(\Gamma \) is an amenable group.

Notice that the class of amenable groups contains all Abelian groups, all subgroups of amenable groups, all solvable groups, etc. In turn, the class of non-amenable groups includes all countable discrete groups containing a free subgroup on two generators. For further information on amenability and Cayley graphs we refer to [47, 49, 54, 55, 64, 65].

Combining Theorem 8.11 with Corollaries 4.5 and 5.2, we arrive at the following result.

Lemma 8.12

Let \({{G}}_d\) be a Cayley graph \({\mathcal {C}}(\Gamma ,S)\) of a finitely generated group \(\Gamma \). Also, let \(|\cdot |:{{E}}\rightarrow {\mathbb {R}}_{>0}\) and \({{G}}= ({{G}}_d,|\cdot |)\) be a metric graph. Then:

  1. (i)

    If \(\Gamma \) is non-amenable, then \(\lambda _0({\mathbf {H}})>0\) if and only if \(\ell ^*({{G}})<\infty \). Moreover, the spectrum of \({\mathbf {H}}\) is purely discrete if and only if \(\ell _{{{\mathrm{ess}}}}^*({{G}})=0\).

  2. (ii)

    If \(\Gamma \) is amenable, then \(\lambda _0({\mathbf {H}})=\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})=0\) whenever \(\ell _*({{G}})>0\).

Remark 8.13

  1. (i)

    If \(\Gamma \) is an amenable group, then the analysis of \(\lambda _0({\mathbf {H}})\) and \(\lambda _0^{{{\mathrm{ess}}}}({\mathbf {H}})\) in the case \(\ell _*({{G}})=0\) remains an open (and, in our opinion, rather complicated) problem.

  2. (ii)

    The volume growth provides a number of amenability criteria. For example, groups of polynomial or subexponential growth are amenable. For further results and references we refer to [55].

  3. (iii)

Using a completely different approach, it was recently proved in [13, Theorem 4.16] that \(\lambda _0({\mathbf {H}})>0\) for Cayley graphs of free groups under the additional symmetry assumption that edges in the same edge orbit have the same length.